I am using the Python bindings for Selenium to capture comments on a Chinese website.
The website is https://v.douyu.com/show/kDe0W2q5bB2MA4Bz
I want to find a particular span element; in Chinese, it is labeled "弹幕列表" (the bullet-comment list).
I tried an absolute XPath like:
driver.find_elements_by_xpath('/body/demand-video-app/main/div[2]/demand-video-helper//div/div[1]/a[3]/span')
But it raises NoSuchElementException. I thought that maybe this site has some protection mechanism, but I don't know much about Selenium and would like to ask for help. Thanks in advance.
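(Side note: a more robust attempt in Selenium is usually to wait for the element and match it by its text rather than by an absolute path. This is only a minimal sketch, assuming the span sits in the regular DOM; if the site renders it inside an iframe or a shadow root, a plain XPath will still fail.)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://v.douyu.com/show/kDe0W2q5bB2MA4Bz')

# Wait for the page's JavaScript to render, then match the span by its text
# instead of by an absolute path, which breaks whenever the layout changes.
wait = WebDriverWait(driver, 15)
span = wait.until(EC.presence_of_element_located((By.XPATH, "//span[text()='弹幕列表']")))
print(span.text)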
I guess you are using Selenium because requests alone can't capture the values.
If that's not what you want to do, don't read my answer.
The problem is that requests.get(url='https://v.douyu.com/show/kDe0W2q5bB2MA4Bz') does not return the comments in the page HTML.
You need to find the API URL that actually supplies the data, using the Network tab (F12) in your browser.
In fact, the source of the data is
https://v.douyu.com/wgapi/vod/center/getBarrageListByPage + parameters
↓
https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1
Although I can't help you solve the Selenium problem, I would use the following approach to get the data.
import requests
url = 'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1'
headers = {'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
res = requests.get(url=url, headers=headers).json()
print(res)
for i in res['data']['list']:
    print(i)
Get All Data
import requests
headers = {
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'}
url = 'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1'
while True:
    res = requests.get(url=url, headers=headers).json()
    next_json = res['data']['pre']
    if next_json == -1:
        break
    for i in res['data']['list']:
        print(i)
    url = f'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset={next_json}'
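If you want to keep the comments rather than just print them, a small variation of the loop above writes each raw item out as one JSON line. This is only a sketch and makes no assumptions about which fields each item contains:
import json
import requests

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36'}
url = 'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset=-1'

with open('barrage.jsonl', 'w', encoding='utf-8') as f:
    while True:
        res = requests.get(url=url, headers=headers).json()
        next_json = res['data']['pre']
        # Write the current page before checking whether a previous page exists.
        for i in res['data']['list']:
            f.write(json.dumps(i, ensure_ascii=False) + '\n')
        if next_json == -1:
            break
        url = f'https://v.douyu.com/wgapi/vod/center/getBarrageListByPage?vid=kDe0W2q5bB2MA4Bz&forward=0&offset={next_json}'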
Related
Goal:
I am trying to scrape the HTML from this page: https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d=.
(note - I will eventually want to paginate and scrape all job listings from this page)
My issue:
I get a 503 error when I try to scrape the page using Python and Requests. I am working out of Google Colab.
Initial Code:
import requests
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
response = requests.get(url)
print(response)
Attempted solutions:
Using 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36'
Implementing this code I found in another thread:
import requests
def getUrl(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36',
    }
    res = requests.get(url, headers=headers)
    res.raise_for_status()
getUrl('https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d=')
I am able to access the website via my browser.
Is there anything else I can try?
Thank you
That page is protected by Cloudflare. There are a few options for trying to bypass it; using cloudscraper seems to work:
import cloudscraper
scraper = cloudscraper.create_scraper()
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
response = scraper.get(url).text
print(response)
In order to use it, you'll need to install it:
pip install cloudscraper
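If the goal is to extract the job listings from that HTML, the cloudscraper response can be fed straight into BeautifulSoup. This is a rough sketch; the 'div.job' selector is only a placeholder, so check the page's real markup in your browser's dev tools:
import cloudscraper
from bs4 import BeautifulSoup

scraper = cloudscraper.create_scraper()
url = 'https://www.doherty.jobs/jobs/search?q=&l=&lat=&long=&d='
html = scraper.get(url).text

soup = BeautifulSoup(html, 'html.parser')
# Placeholder selector; inspect the page to find the actual container for each listing.
for job in soup.select('div.job'):
    print(job.get_text(strip=True))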
I'm getting a 404 with Python Requests, but I can access the page with no problem through my browser. I can also access other pages that are formatted exactly the same as this one, and they load fine.
I have already tried changing the headers, with no luck.
My Code:
string_page = str(page)
with requests.Session() as s:
    resp = s.get('https://bscscan.com/token/generic-tokentxns2?m=normal&contractAddress=0x470862af0cf8d27ebfe0ff77b0649779c29186db&a=&sid=f58c1cdefacc680b799412c7645ed7f7&p='+string_page)
    page_info = str(resp.text)
    print(page_info)
I have also tried with urllib and the same thing happens
I'm not sure if this will fix it, but try adding this to the headers; it might work:
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'
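Plugged into the code from the question, that would look roughly like this (same URL, just with the headers passed to the session's get call):
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36'}

string_page = str(page)  # `page` comes from your surrounding code
with requests.Session() as s:
    resp = s.get(
        'https://bscscan.com/token/generic-tokentxns2?m=normal&contractAddress=0x470862af0cf8d27ebfe0ff77b0649779c29186db&a=&sid=f58c1cdefacc680b799412c7645ed7f7&p=' + string_page,
        headers=headers)
    page_info = str(resp.text)
    print(page_info)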
I'm brand new to coding and have finished making a simple program to web scrape some stock websites for particular data. The simplified code looks like this:
import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Personal_User_Agent'}
fv = f"https://finviz.com/quote.ashx?t=JAGX"
r_fv = requests.get(fv, headers=headers)
soup_fv = BeautifulSoup(r_fv.text, 'html.parser')
fv_ticker_title = soup_fv.find('title')
print(fv_ticker_title)
The scraping would not work until I added a user agent, but then it worked fine. I then served the program through Python's localhost, which also worked fine, so I thought I was ready to make the website public via PythonAnywhere.
However, once the website is public, the program shuts down every time it goes to fetch information through web scraping (i.e., using the user agent). I don't like the idea of using my own user agent for a public domain, but I couldn't find out how other people who web scrape handle this when a user agent is required. Any advice!?
I would add some random headers to rotate through rather than my own headers. Something like this should work:
import random
import requests
from bs4 import BeautifulSoup

header_list = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Safari/605.1.15',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_18_3) AppleWebKit/537.34 (KHTML, like Gecko) Chrome/82.0.412.92 Safari/539.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 11.12; rv:87.0) Gecko/20170102 Firefox/78.0',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.32 (KHTML, like Gecko) Chrome/82.0.12.17 Safari/535.42'
]
fv = f"https://finviz.com/quote.ashx?t=JAGX"
user_agent = random.choice(header_list)
headers = {'User-Agent': user_agent}
r_fv = requests.get(fv, headers=headers)
soup_fv = BeautifulSoup(r_fv.text, 'html.parser')
fv_ticker_title = soup_fv.find('title')
print(fv_ticker_title)
Or, option 2:
Use a library called fake-headers to generate them off the cuff:
from fake_headers import Headers
import requests
from bs4 import BeautifulSoup
fv = f"https://finviz.com/quote.ashx?t=JAGX"
headers = Headers(os="mac", headers=True).generate()
r_fv = requests.get(fv, headers=headers)
soup_fv = BeautifulSoup(r_fv.text, 'html.parser')
fv_ticker_title = soup_fv.find('title')
print(fv_ticker_title)
Really depends on whether you want to use a library or not...
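Either way, if this is running behind a public site, it may be cleaner to wrap the choice in a small helper so every outgoing request picks a fresh user agent. A minimal sketch reusing the header_list from option 1 above:
import random
import requests

def fetch(url, user_agents):
    # Pick a different browser user agent for each request.
    headers = {'User-Agent': random.choice(user_agents)}
    return requests.get(url, headers=headers)

# header_list is the list of user-agent strings defined in option 1 above.
r_fv = fetch("https://finviz.com/quote.ashx?t=JAGX", header_list)
print(r_fv.status_code)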
Can someone help me to solve my problem in this code?
CODE:
from bs4 import BeautifulSoup
import requests
headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like
Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.amazon.com/RUNMUS-Surround-Canceling-Compatible-Controller/dp/B07GRM747Y"
resp = requests.get(url, headers=headers)
s = BeautifulSoup(resp.content, features='lxml')
product_title = s.select("#productTitle")[0].get_text().strip()
print(product_title)
If you print what you actually get back as the response, you will see where the problem is.
import requests
headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.amazon.com/RUNMUS-Surround-Canceling-Compatible-Controller/dp/B07GRM747Y"
resp = requests.get(url, headers=headers)
print(resp.content)
The output you are getting from this request:
b'<!--\n To discuss automated access to Amazon data please contact api-services-support@amazon.com.\n For information about migrating to our APIs refer to our Marketplace APIs...
The site you are sending requests to is not allowing you to access the content with the headers provided, so s.select("#productTitle") returns an empty list, and that is why you get an IndexError.
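A defensive version of the title lookup makes that failure mode explicit instead of raising an IndexError; this is just a sketch, keeping the original headers and URL:
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.amazon.com/RUNMUS-Surround-Canceling-Compatible-Controller/dp/B07GRM747Y"
resp = requests.get(url, headers=headers)
s = BeautifulSoup(resp.content, features='lxml')

titles = s.select("#productTitle")
if titles:
    print(titles[0].get_text().strip())
else:
    # No title found: the site most likely served an anti-bot page instead of the product page.
    print("Blocked or unexpected page, status:", resp.status_code)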
I'm trying to retrieve and process the results of a web search using requests and beautifulsoup.
I've written some simple code to do the job, and it returns successfully (status = 200), but the content of the response is just an error message: "We're sorry for any inconvenience, but the site is currently unavailable." It has been the same for the last several days. Searching within Firefox returns results without issue, however. I've run the code with a URL for the UK-based site and it works fine, so I wonder if the US site is set up to block attempts to scrape its search results.
Are there ways to mask the fact that I'm attempting to retrieve search results from within Python (e.g., masquerading as a standard search from Firefox), or some other workaround to allow access to the search results?
Code included for reference below:
import pandas as pd
from requests import get
import bs4 as bs
import re
# works
# baseURL = 'https://www.autotrader.co.uk/car-search?sort=sponsored&radius=1500&postcode=ky119sb&onesearchad=Used&onesearchad=Nearly%20New&onesearchad=New&make=TOYOTA&model=VERSO&year-from=1990&year-to=2017&minimum-mileage=0&maximum-mileage=200000&body-type=MPV&fuel-type=Diesel&minimum-badge-engine-size=1.6&maximum-badge-engine-size=4.5&maximum-seats=8'
# doesn't work
baseURL = 'https://www.autotrader.com/cars-for-sale/Certified+Cars/cars+under+50000/Jeep/Grand+Cherokee/Seattle+WA-98101?extColorsSimple=BURGUNDY%2CRED%2CWHITE&maxMileage=45000&makeCodeList=JEEP&listingTypes=CERTIFIED%2CUSED&interiorColorsSimple=BEIGE%2CBROWN%2CBURGUNDY%2CTAN&searchRadius=0&modelCodeList=JEEPGRAND&trimCodeList=JEEPGRAND%7CSRT%2CJEEPGRAND%7CSRT8&zip=98101&maxPrice=50000&startYear=2015&marketExtension=true&sortBy=derivedpriceDESC&numRecords=25&firstRecord=0'
a = get(baseURL)
soup = bs.BeautifulSoup(a.content,'html.parser')
info = soup.find_all('div', class_ = 'information-container')
price = soup.find_all('div', class_ = 'vehicle-price')
d = []
for idx, i in enumerate(info):
    ii = i.find_next('ul').find_all('li')
    year_ = ii[0].text
    miles = re.sub("[^0-9\.]", "", ii[2].text)
    engine = ii[3].text
    hp = re.sub("[^\d\.]", "", ii[4].text)
    p = re.sub("[^\d\.]", "", price[idx].text)
    d.append([year_, miles, engine, hp, p])
df = pd.DataFrame(d, columns=['year','miles','engine','hp','price'])
By default, Requests sends its own recognizable user agent when making requests.
>>> r = requests.get('https://google.com')
>>> r.request.headers
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
It is possible that the website you are using is trying to avoid scrapers by denying any request with a user agent of python-requests.
To get around this, you can change your user agent when sending a request. Since it's working on your browser, simply copy your browser user agent (you can Google it, or record a request to a webpage and copy your user agent like that). For me, it's Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36 (what a mouthful), so I'd set my user agent like this:
>>> headers = {
... 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
... }
and then send the request with the new headers (the new headers are merged into the default headers; they only replace defaults that have the same name):
>>> r = requests.get('https://google.com', headers=headers) # Using the custom headers we defined above
>>> r.request.headers
{'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
Now we can see that the request was sent with our preferred headers, and hopefully the site won't be able to tell the difference between Requests and a browser.
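Applied to the code in the question, that just means passing the headers to get(). A sketch; whether autotrader.com then serves the full results may still depend on other anti-bot checks:
import bs4 as bs
from requests import get

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
}
baseURL = 'https://www.autotrader.com/cars-for-sale/Certified+Cars/cars+under+50000/Jeep/Grand+Cherokee/Seattle+WA-98101?extColorsSimple=BURGUNDY%2CRED%2CWHITE&maxMileage=45000&makeCodeList=JEEP&listingTypes=CERTIFIED%2CUSED&interiorColorsSimple=BEIGE%2CBROWN%2CBURGUNDY%2CTAN&searchRadius=0&modelCodeList=JEEPGRAND&trimCodeList=JEEPGRAND%7CSRT%2CJEEPGRAND%7CSRT8&zip=98101&maxPrice=50000&startYear=2015&marketExtension=true&sortBy=derivedpriceDESC&numRecords=25&firstRecord=0'
a = get(baseURL, headers=headers)
soup = bs.BeautifulSoup(a.content, 'html.parser')
# Quick sanity check: the title should no longer be the "site unavailable" message.
print(soup.title)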