I want to scrape the contact numbers of the Courier Services on this website, along with their other details (name, address, rating), but I'm not able to extract them for all of the listings. From inspecting the page, the data appears to be inside a script tag. Please suggest a fix for this.
import requests
import pandas as pd
import json
import csv
from lxml import html
import re

headers = {
    'authority': 'www.justdial.com',
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.9',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36',
}

produrl = 'https://www.justdial.com/Mumbai/Courier-Services-in-Mumbai-Bazar-Nalasopara-East/nct-10142628'
prodresp = requests.get(produrl, headers=headers, timeout=30)
prodResphtml = html.fromstring(prodresp.text)
partjson = prodResphtml.xpath('/html/head/script[9]/text()')
print(partjson)
The data is loaded by an AJAX API call to this endpoint:
https://www.justdial.com/api/india_api_write/20march2020/searchziva.php?city=Mumbai&area=Mumbai-Bazar-Nalasopara-East&lat=&long=&darea_flg=0&case=spcall&stype=category_list&search=Courier-Services&national_catid=10142628&nextdocid=&attribute_values=&basedon=&sortby=&nearme=0&max=100&pg_no=1
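A minimal sketch of calling that endpoint directly, assuming it returns JSON; the key names in the loop (results, name, phone, rating) are placeholders, so print the raw payload first and adjust to the structure you actually get back:
import requests

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.0.0 Safari/537.36'}
api_url = 'https://www.justdial.com/api/india_api_write/20march2020/searchziva.php?city=Mumbai&area=Mumbai-Bazar-Nalasopara-East&lat=&long=&darea_flg=0&case=spcall&stype=category_list&search=Courier-Services&national_catid=10142628&nextdocid=&attribute_values=&basedon=&sortby=&nearme=0&max=100&pg_no=1'

resp = requests.get(api_url, headers=headers, timeout=30)
payload = resp.json()
print(payload)  # inspect the real shape first

# 'results', 'name', 'phone' and 'rating' are hypothetical key names --
# substitute whatever the payload actually uses.
for listing in payload.get('results', []):
    print(listing.get('name'), listing.get('phone'), listing.get('rating'))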
Apologies if this is a bit website-specific (barchart.com). I used the guidance provided here for properly connecting to and scraping barchart.com for futures data. However, after hours of trying, I am at a loss as to how to pull off the same trick for their pre-market data table: Barchart_Premarket_Site.
Does anyone know the trick to get the pre-market data?
Here is the basic connection, for which I get a 403:
import requests

geturl = r'https://www.barchart.com/stocks/pre-market-trading/volume-advances?orderBy=preMarketVolume&orderDir=desc'
s = requests.Session()
r = s.get(geturl)
# j = r.json()
print(r)
All that was required was to add more headers to the request. You can find your own headers using Chrome > Developer Tools: find the API request that populates the table and copy in a few of the headers associated with that request.
import requests

request_url = "https://www.barchart.com/proxies/core-api/v1/quotes/get?lists=stocks.us.premarket.volume_advances&orderDir=desc&fields=symbol%2CsymbolName%2CpreMarketLastPrice%2CpreMarketPriceChange%2CpreMarketPercentChange%2CpreMarketVolume%2CpreMarketAverage5dVolume%2CpreMarketPreviousLast%2CpreMarketPreviousChange%2CpreMarketPreviousPercentChange%2CpreMarketTradeTime%2CnextEarningsDate%2CnextEarningsDate%2CtimeCode%2CsymbolCode%2CsymbolType%2ChasOptions&orderBy=preMarketVolume&meta=field.shortName%2Cfield.type%2Cfield.description%2Clists.lastUpdate&hasOptions=true&page=1&limit=100&raw=1"

headers = {
    'accept': 'application/json',
    # The cookie must be a single string; it is split via implicit string
    # concatenation here only because of its length.
    'cookie': ('_gcl_au=1.1.685644914.1670446600; _fbp=fb.1.1670446600221.1987872306; _pbjs_userid_consent_data=3524755945110770; _pubcid=e7cf9178-59bc-4a82-b6c4-a2708ed78b8d; _admrla=2.2-1e3aed0d7d9d2975-a678aeef-7671-11ed-803e-d12e87d011f0; _lr_env_src_ats=false; _cc_id=6c9e21e7f9c269f8501e2616f9e68632; __browsiUID=c0174d21-a0ab-4dfe-8978-29ae08f44964; __qca=P0-531499020-1670446603686; __gads=ID=220b766bf87e15f9-22fa0316ded8001f:T=1670446598:S=ALNI_MaEWcBqESsJKLF0AwoIVvrKjpjZ_g; panoramaId_expiry=1673549551401; panoramaId=9aa5615403becfbc8adf14a3024816d53938b8cdbea6c8f5cabb60112755d70c; udmsrc=%7B%7D; _pk_id.1.73a4=1aee00a1c66e897b.1672997455.; _ccm_inf=1; bcPremierAdsListScreen=true; _hjSessionUser_2563157=eyJpZCI6ImI2MTM5NTQ4LWUxYzMtNTU2NS04MmM3LTk4ODQ5MWNjY2YxZCIsImNyZWF0ZWQiOjE2NzMwMzQ3OTY0NDAsImV4aXN0aW5nIjp0cnVlfQ==; bcFreeUserPageView=0; _gid=GA1.2.449489725.1673276404; _ga_4HQ9CY2XKK=GS1.1.1673303248.3.0.1673303248.0.0.0; _ga=GA1.2.606341620.1670446600; __aaxsc=2; aasd=5%7C1673314072749; webinar131WebinarClosed=true; _lr_geo_location_state=NC; _lr_geo_location=US; udm_edge_floater_fcap=%5B1673397095403%2C1673392312561%2C1673078162569%2C1673076955809%2C1673075752582%2C1673066137343%2C1673056514808%2C1673051706099%2C1673042087115%2C1673037276340%2C1672960427551%2C1672952009965%2C1672947201101%5D; pbjs-unifiedid=%7B%22TDID%22%3A%2219345091-e7fd-4323-baeb-4627c879c6ba%22%2C%22TDID_LOOKUP%22%3A%22TRUE%22%2C%22TDID_CREATED_AT%22%3A%222022-12-05T19%3A48%3A10%22%7D; __gpi=UID=000008c6d06e1e0d:T=1670446598:RT=1673433090:S=ALNI_MZS6mLx8CJg9iN6kzx4JeDFHPOMjg; market=eyJpdiI6InJvcVNudkprUjQ1bE0yWWQrSTlYY1E9PSIsInZhbHVlIjoieUpabHpmSnJGSkIxc0o1enpyb1dLdENBSWp4UE5NYUZwUFg3OGs0TGJSL0dQWUNpTDU0a2hZbklOQTFNd09OVSIsIm1hYyI6IjBjMjJkNDExZjRhOTc2M2QwYWU3NGUyNmVlZTgyMzY2NWM2MjQyOTY2MjY2YmUxODI2Y2RkY2FlNzI3MjNkOTIifQ%3D%3D; _lr_retry_request=true; __browsiSessionID=c02dadca-6355-415f-aa80-926cccd94759&true&false&DEFAULT&us&desktop-4.11.12&false; IC_ViewCounter_www.barchart.com=2; cto_bundle=dxDlRl90VldIJTJGa0VaRzRIS0xnQmdQOXVWVlhybWJ3NDluY29PelBnM0prMkFxZkxyZWh4dkZNZG9LcyUyRjY1VWlIMWRldkRVRlJ5QW05dHlsQU1xN2VmbzlJOXZFSTNlcFRxUkRxYiUyRlp6Z3hhUHpBekdReU5idVV0WnkxVll0eGp5TyUyQlVzJTJCVDVoRkpWWlZ4R0hOSUl2YTVJVDhBJTNEJTNE; cto_bidid=51ixCl92dkhqbmVmdnlTZHVYS25nWTk2eDVMUnVRNjhEMUhxa3FlcmFzRHVNSERUQkd5cFZrM0QyQyUyRkVNNkV6S0ZHOUZPcTBTR2lBUjA5QUc5YU1ucW9GMFZBWHB4aU9sMlo3WHAlMkJYWjZmJTJGWkpsWSUzRA; _awl=2.1673451629.5-df997ba8dc13bee936d8d14a9771e587-6763652d75732d6561737431-0; laravel_token=eyJpdiI6IjR2YStGblAxWlZoZzllcEtPUUFLNlE9PSIsInZhbHVlIjoiY3E2bHdQWFkyT1FFUHFka2NMMVoyREFvQlZwWXlxc3F0SlRuZnIyTHJsSWtNVFA0K1czcDloWFF2d0lVZys3azZyelkrWks5SWxuRW05MGlqV1I4QmViMU9KKzArVXJOTWNVK2hqZVRocVNHM3NZa1dNeStQbnNyYVBtcjlUeTZzT2lpV2t1ek1UOE1wSUFudmg0NzFTQ3VPeDJiYk16bGNBTzVqVHBCcFRZdTFsZjBVREVyUEhLeThjZm9wSGIzQ2NDVE0ya0xOQWx1VGx0aUlEUE9yakU4Q3RicWFmNDdkYjJSWHlsSWYwajlSUkozVmQ4OVNGNzZEeWhtUExtcXB6VnNrY2NsUzRFQnJyMlhiejFtc0l3U2p5SW5BbFFDZTN0dk9EUWNOR2hVYUdMbmhFUFZVT24xOFFGVkM3L2giLCJtYWMiOiIxYzM5Yzk1ZWNjNjM0NzdjMmM4YTJkZDg0ZmY5MWQwNWUzOTlhNTAwNjg2MTNmNTNlYzY4M2MzYWQ3MDA4MThlIn0%3D; XSRF-TOKEN=eyJpdiI6Ik1PMGEvOGFkZ1p1ekpNcXIvZWZtcHc9PSIsInZhbHVlIjoiMVZYQ3NCV1hjcWREdG5uSDVqYXZVVy91U29USys1dkJJeFNZZG9QVGNRNDhmMTJIeitVV2NUV0xSUC9ZTThVM3FDQWZBcVdNclhqSkx4MG1NTGhadUNlNXRLMEdUc3RDcEVwNnJVYU9FNTBub2NKRWxlekxBZmZEVXNhZUlwWnoiLCJtYWMiOiIxYTI0N2E2OGMxMzRhNmFiYTliMzBlYTdjYWZlNzUwMDRlY2Q5YjI2YzY4OGZlMWIxYmM0YTE3YzZkMTdhMGM3In0%3D; '
               'laravel_session=eyJpdiI6InJIcmMxRWVacmtGc2tENS9zYUFFOVE9PSIsInZhbHVlIjoibG1vQWh1d1dmaUNBZTV4dGdJbWhTVEoyMWVrblBFNTBycTBPai9ad2llcHRkd0hHUTI4ZS8rUFNFVm5LNEcvd1RXY1RwOHdVZHplNU92Vk9xUHZjYmMrUC9Cc3hJUkJNWE54OVR1UHFaTExpM1BRcWRSWEJ5Q3gvVVNzajdHZUoiLCJtYWMiOiI5NDVkOGU4NGM5Y2MwMThmMTgwMzQyOWQ1Yzc5MzU5ZGU2ZjkwMWRjYzBjZWJiZDFhMTQzODMzZmE2NWExMGQ3In0%3D'),
    'referer': 'https://www.barchart.com/stocks/pre-market-trading/volume-advances?orderBy=preMarketVolume&orderDir=desc',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
    'x-xsrf-token': 'eyJpdiI6Im1LQVRpVEJONzZwMDRVQnhYK0I5SWc9PSIsInZhbHVlIjoiMkRIMnJBb1VDQmRscjNlajF1dVR2eWxRbGNJTGZCNWxMaWk3N0EzQWlyOWk0cXJBK2oyUVJ1N282R2VOVWh6WlhJcXdZdFplZmRqaFhPa203bi9HeFBxckJKeUVzVDRETHI5OHlxNDZnOEF5WVV5NXdNSWJiWk95UlFHRXQwN2siLCJtYWMiOiI1NTkyZjk2M2FlNTE0NDI0ODQ3YmE4ZjIyZDY1MzM2MTA3ZTY4NDA5NzA5YzViMjhiN2UwYTFhNTM1Y2ZkMjk5In0=',
}
r = requests.get(request_url, headers=headers)
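Continuing from r above, a quick sketch of tabulating the result, assuming the proxy returns its rows under a "data" key (verify against the raw payload first):
import pandas as pd

payload = r.json()
rows = payload.get('data', [])  # assumed key; check payload.keys() if this is empty
df = pd.DataFrame(rows)
print(df.head())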
I'm trying to write a script to scrape some data off a Zillow page (https://www.zillow.com/homes/for_rent/38.358484,-77.27869,38.218627,-77.498417_rect/X1-SSc26hnbsm6l2u1000000000_8e08t_sse/). I'm trying to gather data from every listing, but I can't: the search only finds 9 instances of the class I'm looking for ('list-card-addr'), even though I've checked the HTML of the listings it misses and the class exists there. Does anyone have any idea why this is? Here's my simple code:
from bs4 import BeautifulSoup
import requests

req_headers = {
    'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'accept-encoding': 'gzip, deflate, br',
    'accept-language': 'en-US,en;q=0.8',
    'upgrade-insecure-requests': '1',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'
}

url = "https://www.zillow.com/homes/for_rent/38.358484,-77.27869,38.218627,-77.498417_rect/X1-SSc26hnbsm6l2u1000000000_8e08t_sse/"
response = requests.get(url, headers=req_headers)
data = response.text
soup = BeautifulSoup(data, 'html.parser')
address = soup.find_all(class_='list-card-addr')
print(len(address))
The data is stored within an HTML comment. You can regex it out easily as a string defining a JavaScript object, then parse it with json:
import requests, re, json

r = requests.get('https://www.zillow.com/homes/for_rent/?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22savedSearchEnrollmentId%22%3A%22X1-SSc26hnbsm6l2u1000000000_8e08t%22%2C%22mapBounds%22%3A%7B%22west%22%3A-77.65840518457031%2C%22east%22%3A-77.11870181542969%2C%22south%22%3A38.13250414385234%2C%22north%22%3A38.444339281260426%7D%2C%22isMapVisible%22%3Afalse%2C%22filterState%22%3A%7B%22sort%22%3A%7B%22value%22%3A%22mostrecentchange%22%7D%2C%22fsba%22%3A%7B%22value%22%3Afalse%7D%2C%22fsbo%22%3A%7B%22value%22%3Afalse%7D%2C%22nc%22%3A%7B%22value%22%3Afalse%7D%2C%22fore%22%3A%7B%22value%22%3Afalse%7D%2C%22cmsn%22%3A%7B%22value%22%3Afalse%7D%2C%22auc%22%3A%7B%22value%22%3Afalse%7D%2C%22fr%22%3A%7B%22value%22%3Atrue%7D%2C%22ah%22%3A%7B%22value%22%3Atrue%7D%7D%2C%22isListVisible%22%3Atrue%2C%22mapZoom%22%3A11%7D',
                 headers={'User-Agent': 'Mozilla/5.0'})
# Pull the JSON blob out of the HTML comment it is wrapped in
data = json.loads(re.search(r'!--(\{"queryState".*?)-->', r.text).group(1))
print(data['cat1'])
print(data['cat1']['searchList'].keys())
Within this are details on pagination and the next URL, if applicable, so you can get all results; you have only asked for page 1 here. For example, to print the addresses:
for i in data['cat1']['searchResults']['listResults']:
    print(i['address'])
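And if you need more than page 1, a rough sketch of following the pagination; the exact keys under searchList are an assumption here, so check the keys() output printed above before relying on them:
# Hypothetical pagination keys -- verify against data['cat1']['searchList'].keys()
pagination = data['cat1']['searchList'].get('pagination', {})
next_url = pagination.get('nextUrl')
if next_url:
    r = requests.get('https://www.zillow.com' + next_url,
                     headers={'User-Agent': 'Mozilla/5.0'})
    # then repeat the same comment-regex extraction on r.text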
I've seen some similar threads, but none gave me the answer. I simply need to get the HTML content of one website. I'm sending a POST request with the data for a particular case, and then with a GET request I want to scrape the text from the HTML. The problem is that I always receive the first page's content. I'm not sure what I'm doing wrong.
import requests

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Origin': 'https://przegladarka-ekw.ms.gov.pl',
    'Referer': 'https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36',
}

data = {
    'kodWydzialu': 'PT1R',
    'nrKw': '00037314',
    'cyfraK': '9',
}

url = 'https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW'
r = requests.session()
r.post(url, data=data, headers=headers)
final_content = r.get(url, headers=headers)
print(final_content.text)
The GET requests come from https://przegladarka-ekw.ms.gov.pl/eukw_prz/eukw201906070952/js/jquery-1.11.0_min.js, but that returns a wall of code. My goal is to scrape the page that appears after submitting the data above to the search form.
Try this:
import urllib.parse
import urllib.request

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
    # Accept-Encoding is omitted: urllib does not transparently decompress
    # gzip/brotli responses the way a browser or requests does.
    'Accept-Language': 'pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7',
    'Connection': 'keep-alive',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Origin': 'https://przegladarka-ekw.ms.gov.pl',
    'Referer': 'https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36',
}

data = {
    'kodWydzialu': 'PT1R',
    'nrKw': '00037314',
    'cyfraK': '9',
}

url = 'https://przegladarka-ekw.ms.gov.pl/eukw_prz/KsiegiWieczyste/wyszukiwanieKW'

# The form expects URL-encoded data (see the Content-Type header), so encode
# the dict with urlencode rather than json.dumps, and attach the headers via
# a Request object -- a bare urlopen(url, data=...) would ignore them.
body = urllib.parse.urlencode(data).encode('utf-8')
req = urllib.request.Request(url, data=body, headers=headers)
with urllib.request.urlopen(req) as r:
    for line in r:
        print(line.decode('utf-8', errors='replace'))
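Alternatively, sticking with requests: the page you are after is usually the response to the POST itself, so read that directly instead of issuing a second GET (the GET just reloads the empty search form). A sketch reusing the url, data, and headers defined above, assuming the result page is rendered in the POST response rather than behind a redirect:
import requests
from bs4 import BeautifulSoup

session = requests.Session()
resp = session.post(url, data=data, headers=headers)  # the result page, if the site renders it directly
soup = BeautifulSoup(resp.text, 'html.parser')
print(soup.get_text(' ', strip=True)[:500])  # preview the first 500 characters of text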
I'm trying to retrieve and process the results of a web search using requests and beautifulsoup.
I've written some simple code to do the job, and it returns successfully (status = 200), but the content of the response is just an error message, "We're sorry for any inconvenience, but the site is currently unavailable.", and has been the same for the last several days. Searching within Firefox returns results without issue, however. I've run the code using a URL for the UK-based site and it works without issue, so I wonder if the US site is set up to block attempts to scrape web searches.
Are there ways to mask the fact that I'm attempting to retrieve search results from within Python (e.g., masquerading as a standard search within Firefox), or some other workaround to allow access to the search results?
Code included for reference below:
import pandas as pd
from requests import get
import bs4 as bs
import re

# works
# baseURL = 'https://www.autotrader.co.uk/car-search?sort=sponsored&radius=1500&postcode=ky119sb&onesearchad=Used&onesearchad=Nearly%20New&onesearchad=New&make=TOYOTA&model=VERSO&year-from=1990&year-to=2017&minimum-mileage=0&maximum-mileage=200000&body-type=MPV&fuel-type=Diesel&minimum-badge-engine-size=1.6&maximum-badge-engine-size=4.5&maximum-seats=8'
# doesn't work
baseURL = 'https://www.autotrader.com/cars-for-sale/Certified+Cars/cars+under+50000/Jeep/Grand+Cherokee/Seattle+WA-98101?extColorsSimple=BURGUNDY%2CRED%2CWHITE&maxMileage=45000&makeCodeList=JEEP&listingTypes=CERTIFIED%2CUSED&interiorColorsSimple=BEIGE%2CBROWN%2CBURGUNDY%2CTAN&searchRadius=0&modelCodeList=JEEPGRAND&trimCodeList=JEEPGRAND%7CSRT%2CJEEPGRAND%7CSRT8&zip=98101&maxPrice=50000&startYear=2015&marketExtension=true&sortBy=derivedpriceDESC&numRecords=25&firstRecord=0'

a = get(baseURL)
soup = bs.BeautifulSoup(a.content, 'html.parser')
info = soup.find_all('div', class_='information-container')
price = soup.find_all('div', class_='vehicle-price')

d = []
for idx, i in enumerate(info):
    ii = i.find_next('ul').find_all('li')
    year_ = ii[0].text
    miles = re.sub(r"[^0-9.]", "", ii[2].text)
    engine = ii[3].text
    hp = re.sub(r"[^\d.]", "", ii[4].text)
    p = re.sub(r"[^\d.]", "", price[idx].text)
    d.append([year_, miles, engine, hp, p])

df = pd.DataFrame(d, columns=['year', 'miles', 'engine', 'hp', 'price'])
By default, Requests identifies itself with its own user agent when making requests.
>>> r = requests.get('https://google.com')
>>> r.request.headers
{'User-Agent': 'python-requests/2.22.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
It is possible that the website you are using is trying to avoid scrapers by denying any request with a user agent of python-requests.
To get around this, you can change your user agent when sending a request. Since it's working on your browser, simply copy your browser user agent (you can Google it, or record a request to a webpage and copy your user agent like that). For me, it's Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36 (what a mouthful), so I'd set my user agent like this:
>>> headers = {
... 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36'
... }
and then send the request with the new headers (the new headers are added to the default headers; they don't replace them unless they have the same name):
>>> r = requests.get('https://google.com', headers=headers) # Using the custom headers we defined above
>>> r.request.headers
{'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.142 Safari/537.36', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
Now we can see that the request was sent with our preferred headers, and hopefully the site won't be able to tell the difference between Requests and a browser.
It's about scraping a hidden table with beautifulsoup.
As you can see on this website, there is a button "choisissez votre séance", and when you click on it a table is shown.
When I inspect the table element I can see the tag that contains attributes like price. However, when I view the website's source code, I can't find this information.
There is a 'display: none' in the table's markup, which I think is related, but I can't find a solution.
It would appear the page is using AJAX and loading the data for pricing in the background. Using Chrome I pressed F12 and had a look under the network tab. When I clicked the "choisissez votre séance" button I noticed a POST to this address:
'https://www.ticketmaster.fr/fr/manifestation/holiday-on-ice-billet/idmanif/446304'
This is great news for you as you do not need to scrape the HTML data, you simply need to provide the ID (in page source) to the API.
In the below code I am:
1. Requesting the initial page
2. Collecting the cookie
3. Posting the ID (from the page source) and the cookie we collected
4. Returning the JSON data you require for further processing (variable j)
Hope the below helps out!
Cheers,
Adam
import requests
from bs4 import BeautifulSoup

h = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'}

s = requests.session()
initial_page_request = s.get('https://www.ticketmaster.fr/fr/manifestation/holiday-on-ice-billet/idmanif/446304', headers=h)
soup = BeautifulSoup(initial_page_request.text, 'html.parser')

# Pull the first session id out of the <select id="sessionsSelect"> options
idseanc = soup.find("select", {"id": "sessionsSelect"})("option")[0]['value'].split("_")[1]
cookies = initial_page_request.cookies.get_dict()

headers = {
    'Origin': 'https://www.ticketmaster.fr',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
    'Content-Type': 'application/json; charset=UTF-8',
    'Accept': '*/*',
    'Referer': 'https://www.ticketmaster.fr/fr/manifestation/holiday-on-ice-billet/idmanif/446304',
    'X-Requested-With': 'XMLHttpRequest',
    'Connection': 'keep-alive',
}

data = {'idseanc': str(idseanc)}

# Note: the idseance hardcoded in the URL below should match the extracted idseanc
response = s.post('https://www.ticketmaster.fr/planPlacement/FindPrices/connected/false/idseance/2870471', headers=headers, cookies=cookies, data=data)
j = response.json()
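For example, to eyeball the structure of the pricing payload before processing it further:
import json

print(json.dumps(j, indent=2, ensure_ascii=False)[:1000])  # preview the first part of the JSON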