I want to scrape data from the website https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx. I have to select the State as Delhi and the District as Adarsh Nagar (4), click the Search button, and scrape all the resulting information.
So far I have tried the code below.
import requests
from bs4 import BeautifulSoup
An 'HTTPS 443 SSL' error was coming up, which I resolved using verify=False:
resp = requests.get('https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx',verify=False)
soup = BeautifulSoup(resp.text,"lxml")
dictinfo = {i['name']:i.get('value','') for i in soup.select('input[name]')}
dictinfo['ddlState']='Delhi'
dictinfo['ddldistrict']='Adarsh Nagar (4)'
dictinfo['__EVENTTARGET']='btnSearch'
dictinfo = {k:(None,str(v)) for k,v in dictinfo.items()}
r=requests.post('https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx',verify=False,files=dictinfo)
The POST comes back as Response [500], and parsing the response into soup2 (BeautifulSoup(r.text, "lxml")) shows this error:
Invalid postback or callback
argument. Event validation is enabled using <pages
enableEventValidation="true"/> in configuration or <%# Page
EnableEventValidation="true" %> in a page. For security purposes,
this feature verifies that arguments to postback or callback events
originate from the server control that originally rendered them. If
the data is valid and expected, use the
ClientScriptManager.RegisterForEventValidation method in order to
register the postback or callback data for validation.
Can someone please help me scrape this or get it done?
(I can only use the REQUESTS & BEAUTIFULSOUP libraries, no SELENIUM, MECHANIZE, etc.)
Try the script below to get the tabular results that are populated after choosing the two dropdown items you mentioned. It turns out you have to make two subsequent POST requests: this is an ASP.NET WebForms page, so selecting the state triggers a postback that regenerates the hidden __VIEWSTATE/__EVENTVALIDATION tokens, and the hidden inputs have to be collected again before posting the district. Note also that the dropdowns must be posted with their option values ('DL', 'ADN'), not their display text, or event validation rejects the request.
import requests
from bs4 import BeautifulSoup
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
url = 'https://xlnindia.gov.in/frm_G_Cold_S_Query.aspx'
with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0'
    resp = s.get(url, verify=False)
    soup = BeautifulSoup(resp.text, "lxml")
    # collect every hidden input (__VIEWSTATE, __EVENTVALIDATION, etc.)
    dictinfo = {i['name']: i.get('value', '') for i in soup.select('input[name]')}
    dictinfo['ddlState'] = 'DL'
    res = s.post(url, data=dictinfo)
    # the state postback regenerates the tokens, so collect the inputs again
    soup_obj = BeautifulSoup(res.text, "lxml")
    payload = {i['name']: i.get('value', '') for i in soup_obj.select('input[name]')}
    payload['ddldistrict'] = 'ADN'
    r = s.post(url, data=payload)
    sauce = BeautifulSoup(r.text, "lxml")
    for items in sauce.select("#dgDisplay tr"):
        data = [item.get_text(strip=True) for item in items.select("td")]
        print(data)
Output you may see in the console:
['Firm Name', 'City', 'Licences', 'Reg. Pharmacists / Comp. Person']
['A ONE MEDICOS', 'DELHI-251/1, GALI NO.1, KH, NO, 739/251/1, NEAR HIMACHAL BHAWAN,SARAI PIPAL THALA, VILLAGE AZAD PUR,', 'R - 2', 'virender kumar, DPH, [22295-17/10/2013]']
['AAROGYAM', 'DELHI-PVT. SHOP NO. 1, GF, 121,VILLAGE BHAROLA', 'R - 2', 'avinesh bhadoriya, DPH, [27033-]']
['ABCO INDIA', 'DELHI-SHOP NO-452/22,BHUSHAN BHAWAN RING ROAD,FLYOVER AZAD PUR', 'W - 2', 'sanjay dubey , SSC, [C-P-03/01/1997]']
['ADARSH MEDICOS', 'DELHI-NORTHERN SIDE B-107, GALI NO. 1,,MAJLIS PARK, VILLAGE BHAROLA,', 'R - 2', 'dilip kumar, BPH, [28036-11/01/2018]']
While attempting to scrape this website: https://dining.umich.edu/menus-locations/dining-halls/mosher-jordan/ I have located the food item names by doing the following:
import requests
from bs4 import BeautifulSoup

url = "https://dining.umich.edu/menus-locations/dining-halls/mosher-jordan/"
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
foodLocation = soup.find_all('div', class_='item-name')
for singleFood in foodLocation:
    food = singleFood.text
    print(food)
The problem is, I only want to print the food inside the "World Palate Maize" section of the Lunch portion of the page. In the HTML, there are multiple divs that each contain the foods for one station (World Palate Maize, Hot Cereal, MBakery, etc.). I'm having trouble figuring out how to tell the loop to print only a certain section (a certain div?). This may require an if statement or a condition in the for loop, but I am unsure how to format it, or what to use as the condition, so that the loop only prints the content of one section.
One strategy could be to select more specifically by text, e.g. with CSS selectors:
soup.select('h3:-soup-contains("Lunch")+div h4:-soup-contains("World Palate Maize") + ul .item-name')
Example
import requests
from bs4 import BeautifulSoup
url = "https://dining.umich.edu/menus-locations/dining-halls/mosher-jordan/"
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
foodLocation = soup.select('h3:-soup-contains("Lunch")+div h4:-soup-contains("World Palate Maize") + ul .item-name')
for singleFood in foodLocation:
    food = singleFood.text
    print(food)
Output
Mojo Grilled Chicken
Italian White Bean Salad
Seems like "Lunch" would always be the second div, so you can probably do
import requests
from bs4 import BeautifulSoup
headers = {
    'User-Agent': 'Mozilla'
}
url = "https://dining.umich.edu/menus-locations/dining-halls/mosher-jordan/"
req = requests.get(url, headers=headers)
soup = BeautifulSoup(req.content, 'html.parser')
# the page renders three div.courses blocks: breakfast, lunch and dinner
[breakfast, lunch, dinner] = soup.select('div#mdining-items div.courses')
foods = lunch.select('div.item-name')
for food in foods:
    print(food.text)
I'm trying to scrape the search results on this link: https://www.inecnigeria.org/elections/polling-units/, which requires selecting a value from one dropdown, after which a second dropdown appears whose value I must also select before searching. I am able to get the values from the first dropdown but not from the others. Here's what I have currently:
from bs4 import BeautifulSoup
import requests
base = 'https://www.inecnigeria.org/elections/polling-units/'
base_req = requests.get(base, verify=False)
soup = BeautifulSoup( base_req.text, "html.parser" )
# states
states = soup.find('select', id = "statePoll")
stateItems = states.select('option[value]')
stateValues = [ stateItem.text for stateItem in stateItems ]
# print(stateValues)
# LGAs: this select is empty in the raw HTML, so nothing is found
lgas = soup.find('select', id="lgaPoll")
lgaItems = lgas.select('option[value]')
lgaValues = [lgaItem.text for lgaItem in lgaItems]
print(lgas)
Indeed, you can't get those values by scraping the HTML of that page alone. The page uses JavaScript to request the options from another URL and dynamically insert them into the page. You will have to use the information you can scrape to make such requests yourself. Here is an example of how to get the next step, which should show you the general idea:
from bs4 import BeautifulSoup
import requests
base = 'https://www.inecnigeria.org/elections/polling-units/'
lga_view = 'https://www.inecnigeria.org/wp-content/themes/independent-national-electoral-commission/custom/views/lgaView.php'
base_req = requests.get(base, verify=False)
soup = BeautifulSoup(base_req.text, "html.parser" )
states = soup.find('select', id = "statePoll")
state_options = states.find_all('option')
states = {opt.text: int(opt['value']) for opt in state_options if 'value' in opt.attrs}
# POST each state id to lgaView.php, which returns that state's LGA options as JSON
lga = {k: requests.post(lga_view, {'state_id': v}, verify=False).json() for k, v in states.items()}
print(lga)
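To go further down the hierarchy, the same pattern should apply: find the view URL that the page's JavaScript posts the LGA id to, and post to it the same way. A rough sketch of the idea; the wardView.php endpoint name and the 'id' key in the JSON are assumptions to be confirmed by inspecting the page's network traffic:

# Hypothetical continuation: the endpoint name and JSON shape are assumed,
# not confirmed; check the request the page makes when you pick an LGA.
ward_view = 'https://www.inecnigeria.org/wp-content/themes/independent-national-electoral-commission/custom/views/wardView.php'

for lga_entry in lga['Abia']:  # one state's LGA list from the step above
    wards = requests.post(ward_view, {'lga_id': lga_entry['id']}, verify=False).json()
    print(lga_entry, wards)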
I am trying to mimic the following browser actions via python's requests:
Land on https://www.bundesanzeiger.de/pub/en/to_nlp_start
Click "More search options"
Click checkbox "Also find historicised data" (corresponds to POST param: isHistorical: true)
Click button "Search net short positions"
Click button "Als CSV herunterladen" to download csv file
This is the code I have written to simulate this:
import requests
import re
s = requests.Session()
r = s.get("https://www.bundesanzeiger.de/pub/en/to_nlp_start", verify=False, allow_redirects=True)
matches = re.search(
    r'form class="search-form" id=".*" method="post" action="\.(?P<appendtxt>.*)"',
    r.text
)
request_url = f"https://www.bundesanzeiger.de/pub/en{matches.group('appendtxt')}"
sr = s.post(request_url, data={'isHistorical': 'true', 'nlp-search-button': 'Search net short positions'}, allow_redirects=True)
However, even though sr gives me a status_code 200, it's really an error when I check sr.url, which shows https://www.bundesanzeiger.de/pub/en/error-404?9
Digging a bit deeper, I noticed that request_url above resolves to something like
https://www.bundesanzeiger.de/pub/en/nlp;wwwsid=EFEB15CD4ADC8932A91BA88B561A50E9.web07-pub?0-1.-nlp~filter~form~panel-form
but when I check the request url in Chrome, it's actually
https://www.bundesanzeiger.de/pub/en/nlp?87-1.-nlp~filter~form~panel-form
The 87 here seems to change, suggesting it's some session ID, but when I'm doing this using requests it doesn't appear to resolve properly.
Any idea what I'm missing here?
You can try this script to download the CSV file:
import requests
from bs4 import BeautifulSoup
url = 'https://www.bundesanzeiger.de/pub/en/to_nlp_start'
data = {
    'fulltext': '',
    'positionsinhaber': '',
    'ermittent': '',
    'isin': '',
    'positionVon': '',
    'positionBis': '',
    'datumVon': '',
    'datumBis': '',
    'isHistorical': 'true',
    'nlp-search-button': 'Search+net+short+positions'
}

headers = {
    'Referer': 'https://www.bundesanzeiger.de/'
}

with requests.session() as s:
    soup = BeautifulSoup(s.get(url).content, 'html.parser')
    # the form action carries the session-specific id, so read it from the live page
    action = soup.find('form', action=lambda t: t and 'nlp~filter~form~panel-form' in t)['action']
    u = 'https://www.bundesanzeiger.de/pub/en' + action.strip('.')
    soup = BeautifulSoup(s.post(u, data=data, headers=headers).content, 'html.parser')
    a = soup.select_one('a[title="Download as CSV"]')['href']
    a = 'https://www.bundesanzeiger.de/pub/en' + a.strip('.')
    print(s.get(a, headers=headers).content.decode('utf-8-sig'))
Prints:
"Positionsinhaber","Emittent","ISIN","Position","Datum"
"Citadel Advisors LLC","LEONI AG","DE0005408884","0,62","2020-08-21"
"AQR Capital Management, LLC","Evotec SE","DE0005664809","1,10","2020-08-21"
"BlackRock Investment Management (UK) Limited","thyssenkrupp AG","DE0007500001","1,50","2020-08-21"
"BlackRock Investment Management (UK) Limited","Deutsche Lufthansa Aktiengesellschaft","DE0008232125","0,75","2020-08-21"
"Citadel Europe LLP","TAG Immobilien AG","DE0008303504","0,70","2020-08-21"
"Davidson Kempner European Partners, LLP","TAG Immobilien AG","DE0008303504","0,36","2020-08-21"
"Maplelane Capital, LLC","VARTA AKTIENGESELLSCHAFT","DE000A0TGJ55","1,15","2020-08-21"
...and so on.
If you check https://www.bundesanzeiger.de/robots.txt, you'll see this website does not want to be indexed. The site could be denying access to the default user agent that requests sends. This might help: Python requests vs. robots.txt
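For example, a minimal sketch of sending a browser-like User-Agent (the header value is just an illustration, not one this site is known to require):

import requests

# Illustrative only: override python-requests' default User-Agent with a
# browser-like string so the server does not flag the request as a bot.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
r = requests.get('https://www.bundesanzeiger.de/pub/en/to_nlp_start', headers=headers)
print(r.status_code)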
<div class="columns small-5 medium-4 cell header">Ref No.</div>
<div class="columns small-7 medium-8 cell">110B60329</div>
Website is https://www.saa.gov.uk/search/?SEARCHED=1&ST=&SEARCH_TERM=city+of+edinburgh%2C+BOSWALL+PARKWAY%2C+EDINBURGH&ASSESSOR_ID=&SEARCH_TABLE=valuation_roll_cpsplit&DISPLAY_COUNT=10&TYPE_FLAG=CP&ORDER_BY=PROPERTY_ADDRESS&H_ORDER_BY=SET+DESC&DRILL_SEARCH_TERM=BOSWALL+PARKWAY%2C+EDINBURGH&DD_TOWN=EDINBURGH&DD_STREET=BOSWALL+PARKWAY&UARN=110B60329&PPRN=000000000001745&ASSESSOR_IDX=10&DISPLAY_MODE=FULL#results
I would like to run a loop and return '110B60329'. I have run BeautifulSoup and done a find_all('div'), then defined the two different tags as head and data based on their class. I then iterated through the 'head' tags, hoping it would return the info in the div tag I defined as data.
Python returns a blank (the cmd prompt just reprinted the file path).
Would anyone kindly know how I might fix this? My full code is below, thanks.
import requests
from bs4 import BeautifulSoup as soup
import csv
url = 'https://www.saa.gov.uk/search/?SEARCHED=1&ST=&SEARCH_TERM=city+of+edinburgh%2C+BOSWALL+PARKWAY%2C+EDINBURGH&ASSESSOR_ID=&SEARCH_TABLE=valuation_roll_cpsplit&DISPLAY_COUNT=10&TYPE_FLAG=CP&ORDER_BY=PROPERTY_ADDRESS&H_ORDER_BY=SET+DESC&DRILL_SEARCH_TERM=BOSWALL+PARKWAY%2C+EDINBURGH&DD_TOWN=EDINBURGH&DD_STREET=BOSWALL+PARKWAY&UARN=110B60329&PPRN=000000000001745&ASSESSOR_IDX=10&DISPLAY_MODE=FULL#results'
baseurl = 'https://www.saa.gov.uk'
session = requests.session()
response = session.get(url)
# content of search page in soup
html = soup(response.content, "lxml")

properties_col = html.find_all('div')
for col in properties_col:
    ref = 'n/a'
    des = 'n/a'
    head = col.find_all("div", {"class": "columns small-5 medium-4 cell header"})
    data = col.find_all("div", {"class": "columns small-7 medium-8 cell"})
    for i, elem in enumerate(head):
        if head[i].text == "Ref No.":
            ref = data[i].text
            print(ref)
You can do this in two ways.
1) If you are sure that the website you are scraping won't change its layout, you can find all divs with that class and get the content by index.
2) Find all the left-side divs (the titles), and if one of them matches what you want, get the next sibling to get the text.
Example:
import requests
from bs4 import BeautifulSoup as soup
url = 'https://www.saa.gov.uk/search/?SEARCHED=1&ST=&SEARCH_TERM=city+of+edinburgh%2C+BOSWALL+PARKWAY%2C+EDINBURGH&ASSESSOR_ID=&SEARCH_TABLE=valuation_roll_cpsplit&DISPLAY_COUNT=10&TYPE_FLAG=CP&ORDER_BY=PROPERTY_ADDRESS&H_ORDER_BY=SET+DESC&DRILL_SEARCH_TERM=BOSWALL+PARKWAY%2C+EDINBURGH&DD_TOWN=EDINBURGH&DD_STREET=BOSWALL+PARKWAY&UARN=110B60329&PPRN=000000000001745&ASSESSOR_IDX=10&DISPLAY_MODE=FULL#results'
baseurl = 'https://www.saa.gov.uk'
session = requests.session()
response = session.get(url)
# content of search page in soup
html = soup(response.content,"lxml")
#Method 1
LeftBlockData = html.find_all("div", class_="columns small-7 medium-8 cell")
Reference = LeftBlockData[0].get_text().strip()
Description = LeftBlockData[2].get_text().strip()
print(Reference)
print(Description)
#Method 2
for column in html.find_all("div", class_="columns small-5 medium-4 cell header"):
    RightColumn = column.next_sibling.next_sibling.get_text().strip()
    if "Ref No." in column.get_text().strip():
        print(RightColumn)
    if "Description" in column.get_text().strip():
        print(RightColumn)
The prints will output (in order):
110B60329
STORE
110B60329
STORE
Your problem is that you are trying to match node text that contains a lot of tabs and newlines against a non-whitespace string. For example, your head[i].text variable contains something like '\n\t\t\tRef No.\n\t\t', so comparing it with 'Ref No.' gives a false result. Stripping it solves this.
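A minimal fix for the loop from the question, assuming the head and data lists as defined there:

for i, elem in enumerate(head):
    # strip() removes the surrounding tabs/newlines before comparing
    if head[i].text.strip() == "Ref No.":
        ref = data[i].text.strip()
        print(ref)

Alternatively, you can iterate over the table rows directly and let get_text() do the cleanup: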
import requests
from bs4 import BeautifulSoup
r = requests.get("https://www.saa.gov.uk/search/?SEARCHED=1&ST=&SEARCH_TERM=city+of+edinburgh%2C+BOSWALL+PARKWAY%2C+EDINBURGH&ASSESSOR_ID=&SEARCH_TABLE=valuation_roll_cpsplit&DISPLAY_COUNT=10&TYPE_FLAG=CP&ORDER_BY=PROPERTY_ADDRESS&H_ORDER_BY=SET+DESC&DRILL_SEARCH_TERM=BOSWALL+PARKWAY%2C+EDINBURGH&DD_TOWN=EDINBURGH&DD_STREET=BOSWALL+PARKWAY&UARN=110B60329&PPRN=000000000001745&ASSESSOR_IDX=10&DISPLAY_MODE=FULL#results")
soup = BeautifulSoup(r.text, 'lxml')
for row in soup.find_all(class_='table-row'):
    print(row.get_text(strip=True, separator='|').split('|'))
Output:
['Ref No.', '110B60329']
['Office', 'LOTHIAN VJB']
['Description', 'STORE']
['Property Address', '29 BOSWALL PARKWAY', 'EDINBURGH', 'EH5 2BR']
['Proprietor', 'SCOTTISH MIDLAND CO-OP SOCIETY LTD.']
['Tenant', 'PROPRIETOR']
['Occupier']
['Net Annual Value', '£1,750']
['Marker']
['Rateable Value', '£1,750']
['Effective Date', '01-APR-10']
['Other Appeal', 'NO']
['Reval Appeal', 'NO']
get_text() is a very powerful tool: you can strip the whitespace and put a separator in the text. You can use this method to get clean data and filter it.
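For instance, a tiny self-contained illustration of those two options, using a row shaped like the markup from the question:

from bs4 import BeautifulSoup

# minimal demo of get_text(strip=..., separator=...); the sample markup
# mirrors the header/data divs shown in the question
html = '<div class="table-row"><div> Ref No. </div><div> 110B60329 </div></div>'
row = BeautifulSoup(html, 'lxml').find(class_='table-row')
print(row.get_text(strip=True, separator='|').split('|'))  # ['Ref No.', '110B60329']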
I have been writing a piece of code that retrieves a list of items and their corresponding prices from the Steam Marketplace (for the game Unturned). I am using the BeautifulSoup (bs4) and requests libraries. This is my code so far:
import requests
from bs4 import BeautifulSoup

items = []
prices = []
for page_num in range(1, 10):
    website = 'http://steamcommunity.com/market/search?appid=304930#p' + str(page_num) + '_popular_desc'
    r = requests.get(website)
    doc = r.text.split('\n')
    soup = BeautifulSoup(''.join(doc), "html.parser")
    names = soup.findAll("span", {"class": "market_listing_item_name"})
    for item in range(len(names)):
        items.append(names[item].contents[0])
    costs = soup.findAll("span", {"class": "normal_price"})
    for cost in range(len(costs)):
        prices.append(costs[cost].contents[0])
Expected Output:
Festive Gift Present : $0.32 USD
Halloween Gift Present : $0.26 USD
Carbon Fiber Mystery Box : $0.47 USD
Festive Hat : $1.67 USD
Nuclear Matamorez : $0.39 USD
... and so on
The problem with this code is that it only gets the names from the first page. If I type the URL manually with different numbers in place of page_num, the page changes and the HTML document changes with it. However, the code doesn't get the results from the second page onwards: requests fetches the correct URL each time, but the HTML doc it returns is the same?
Pages 2, 3, etc. are requested via ajax (or similar), so their items aren't present in the source when you first load the page; the #p2_popular_desc part of the URL is a fragment, which is handled client-side and never sent to the server. To bypass this we can sniff the ajax URL and parse the response directly, which in this case is JSON-encoded, i.e.:
import json
from bs4 import BeautifulSoup
from urllib.request import urlopen

output = ""
items = []
prices = []

for page_num in range(0, 100, 10):
    start = page_num
    count = page_num + 10
    url = urlopen("http://steamcommunity.com/market/search/render/?query=&start={}&count={}&search_descriptions=0&sort_column=popular&sort_dir=desc&appid=304930".format(start, count))
    jsonCode = json.loads(url.read())
    output += jsonCode['results_html']

soup = BeautifulSoup(output, "html.parser")
names = soup.findAll("span", {"class": "market_listing_item_name"})
for item in range(len(names)):
    items.append(names[item].contents[0])
costs = soup.findAll("span", {"class": "normal_price"})
for cost in range(len(costs)):
    if "Starting at" not in costs[cost].contents[0]:  # skip "Starting at" entries so we keep one price per item
        prices.append(costs[cost].contents[0])
print(items)
['Festive Gift Present', 'Halloween Gift Present', 'Hypertech Timberwolf', 'Holiday Scarf', 'Chill Honeybadger', etc...]
print(prices)
['$0.34 USD', '$0.28 USD', '$1.77 USD', '$0.31 USD', '$0.65 USD', etc...]
PS: Steam will temporarily ban your IP after ~50 requests.
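If you need many pages, it may help to throttle the loop; a minimal sketch, assuming a fixed delay is enough to stay under the limit:

import json
import time
from urllib.request import urlopen

def fetch_page(start, count=10, delay=3.0):
    # Fetch one page of results, then pause. The 3-second delay is an
    # assumption; tune it to whatever keeps you under Steam's
    # (undocumented) request threshold.
    url = ("http://steamcommunity.com/market/search/render/?query=&start={}"
           "&count={}&search_descriptions=0&sort_column=popular"
           "&sort_dir=desc&appid=304930").format(start, count)
    data = json.loads(urlopen(url).read())
    time.sleep(delay)
    return data['results_html']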