I'm new to the world of web scraping; so far I've only really been using BeautifulSoup to scrape text and images off websites. I thought I'd try to scrape some data points off a graph to test my understanding, but I got a bit stuck on this particular graph.
After inspecting the element of the piece of data I wanted to extract, I saw this:
<span id="TSMAIN">: 100.7490637</span>
The problem is, my original idea for scraping the data points was to iterate through some sort of list of ids, one for each data point (if that makes sense?).
Instead, it seems that all the data points are contained within this same element, and the value depends on where your cursor is on the graph.
My problem is, if I use BeautifulSoup's find function with that specific element and the attribute id="TSMAIN", I get None back, presumably because nothing shows up there unless the cursor is actually on the graph.
Code:
from bs4 import BeautifulSoup
import requests
headers={"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36"}
url = "https://www.morningstar.co.uk/uk/funds/snapshot/snapshot.aspx?id=F0GBR050AQ&tab=13"
source=requests.get(url,headers=headers)
soup = BeautifulSoup(source.content,'lxml')
data = soup.find("span",attrs={"id":"TSMAIN"})
print(data)
Output
None
How can I extract all the data points of this graph?
It seems the data can be pulled from an API. The only catch is that the values it returns are relative to the start date entered in the payload: the output for the start date is set to 0, and the numbers after that are relative to that date.
import requests
import pandas as pd
from datetime import datetime
from dateutil import relativedelta
userInput = input('Choose:\n\t1. 3 Month\n\t2. 6 Month\n\t3. 1 Year\n\t4. 3 Year\n\t5. 5 Year\n\t6. 10 Year\n\n -->: ')
userDict = {'1':3,'2':6,'3':12,'4':36,'5':60,'6':120}
n = datetime.now()
n = n - relativedelta.relativedelta(days=1)
n = n - relativedelta.relativedelta(months=userDict[userInput])
dateStr = n.strftime('%Y-%m-%d')
url = 'https://tools.morningstar.co.uk/api/rest.svc/timeseries_cumulativereturn/t92wz0sj7c'
data = []
idDict = {
    'Schroder Managed Balanced Instl Acc':'F0GBR050AQ]2]0]FOGBR$$ALL',
    'GBP Moderately Adventurous Allocation':'EUCA000916]8]0]CAALL$$ALL',
    'Mixed Investment 40-85% Shares':'LC00000012]8]0]CAALL$$ALL',
    '':'F00000ZOR1]7]0]IXALL$$ALL',}

for k, v in idDict.items():
    payload = {
        'encyId': 'GBP',
        'idtype': 'Morningstar',
        'frequency': 'daily',
        'startDate': dateStr,
        'performanceType': '',
        'outputType': 'COMPACTJSON',
        'id': v,
        'decPlaces': '8',
        'applyTrackRecordExtension': 'false'}

    temp_data = requests.get(url, params=payload).json()
    df = pd.DataFrame(temp_data)
    df['timestamp'] = pd.to_datetime(df[0], unit='ms')
    df['date'] = df['timestamp'].dt.date
    df = df[['date',1]]
    df.columns = ['date', k]
    data.append(df)

final_df = pd.concat(
    (iDF.set_index('date') for iDF in data),
    axis=1, join='inner'
).reset_index()
final_df.plot(x="date", y=list(idDict.keys()), kind="line")
Output:
print (final_df.head(5).to_string())
date Schroder Managed Balanced Instl Acc GBP Moderately Adventurous Allocation Mixed Investment 40-85% Shares
0 2019-12-22 0.000000 0.000000 0.000000 0.000000
1 2019-12-23 0.357143 0.406784 0.431372 0.694508
2 2019-12-24 0.714286 0.616217 0.632422 0.667586
3 2019-12-25 0.714286 0.616217 0.632422 0.655917
4 2019-12-26 0.714286 0.612474 0.629152 0.664124
....
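If you would rather have index levels rebased to 100 (like the 100.749... value shown on the Morningstar page), you can convert the values. This is just a minimal sketch, assuming the API values really are cumulative percentage returns measured from the start date, as described above:
# rebase cumulative % returns to an index that starts at 100 on the start date
rebased_df = 100 * (1 + final_df.set_index('date') / 100)
print(rebased_df.head())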
To get those ids, it took a little investigating of the network requests. Searching through them, I was able to find the corresponding id values and, with a bit of trial and error, work out which values meant what.
Those "alternate" ids are the ones the line graphs use to fetch their data: in those four requests, look at the Preview pane and you'll see the data in there.
Here's the final output/graph:
I am attempting to scrape individual fund holdings from the SEC's N-PORT-P/A form using Beautiful Soup and XML. A typical submission, outlined below and [linked here][1], looks like:
<edgarSubmission xmlns="http://www.sec.gov/edgar/nport" xmlns:com="http://www.sec.gov/edgar/common" xmlns:ncom="http://www.sec.gov/edgar/nportcommon" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <headerData>
    <submissionType>NPORT-P/A</submissionType>
    <isConfidential>false</isConfidential>
    <accessionNumber>0001145549-23-004025</accessionNumber>
    <filerInfo>
      <filer>
        <issuerCredentials>
          <cik>0001618627</cik>
          <ccc>XXXXXXXX</ccc>
        </issuerCredentials>
      </filer>
      <seriesClassInfo>
        <seriesId>S000048029</seriesId>
        <classId>C000151492</classId>
      </seriesClassInfo>
    </filerInfo>
  </headerData>
  <formData>
    <genInfo>
    ...
    </genInfo>
    <fundInfo>
    ...
    </fundInfo>
    <invstOrSecs>
      <invstOrSec>
        <name>ARROW BIDCO LLC</name>
        <lei>549300YHZN08M0H3O128</lei>
        <title>Arrow Bidco LLC</title>
        <cusip>042728AA3</cusip>
        <identifiers>
          <isin value="US042728AA35"/>
        </identifiers>
        <balance>115000.000000000000</balance>
        <units>PA</units>
        <curCd>USD</curCd>
        <valUSD>114754.170000000000</valUSD>
        <pctVal>0.3967552449</pctVal>
        <payoffProfile>Long</payoffProfile>
        <assetCat>DBT</assetCat>
        <issuerCat>CORP</issuerCat>
        <invCountry>US</invCountry>
        <isRestrictedSec>N</isRestrictedSec>
        <fairValLevel>2</fairValLevel>
        <debtSec>
          <maturityDt>2024-03-15</maturityDt>
          <couponKind>Fixed</couponKind>
          <annualizedRt>9.500000000000</annualizedRt>
          <isDefault>N</isDefault>
          <areIntrstPmntsInArrs>N</areIntrstPmntsInArrs>
          <isPaidKind>N</isPaidKind>
        </debtSec>
        <securityLending>
          <isCashCollateral>N</isCashCollateral>
          <isNonCashCollateral>N</isNonCashCollateral>
          <isLoanByFund>N</isLoanByFund>
        </securityLending>
      </invstOrSec>
Arrow Bidco LLC is a bond within the portfolio, and some of its characteristics are included within the filing (CUSIP, CIK, balance, maturity date, etc.). I am looking for the best way to iterate through each individual security (invstOrSec) and collect the characteristics of each one into a dataframe.
The code I am currently using is:
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
header = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36", "X-Requested-With": "XMLHttpRequest"}
n_port_file = requests.get("https://www.sec.gov/Archives/edgar/data/1618627/000114554923004968/primary_doc.xml", headers=header, verify=False)
n_port_file_xml = n_port_file.content
soup = BeautifulSoup(n_port_file_xml,'xml')
names = soup.find_all('name')
lei = soup.find_all('lei')
title = soup.find_all('title')
cusip = soup.find_all('cusip')
....
maturityDt = soup.find_all('maturityDt')
couponKind = soup.find_all('couponKind')
annualizedRt = soup.find_all('annualizedRt')
Then iterating through each list to create a dataframe based on the values in each row.
fixed_income_data = []
for i in range(0, len(names)):
    rows = [names[i].get_text(), lei[i].get_text(),
            title[i].get_text(), cusip[i].get_text(),
            balance[i].get_text(), units[i].get_text(),
            pctVal[i].get_text(), payoffProfile[i].get_text(),
            assetCat[i].get_text(), issuerCat[i].get_text(),
            invCountry[i].get_text(), couponKind[i].get_text()
            ]
    fixed_income_data.append(rows)

fixed_income_df = pd.DataFrame(fixed_income_data, columns=['name',
                                                           'lei',
                                                           'title',
                                                           'cusip',
                                                           'balance',
                                                           'units',
                                                           'pctVal',
                                                           'payoffProfile',
                                                           'assetCat',
                                                           'issuerCat',
                                                           'invCountry',
                                                           'maturityDt',
                                                           'couponKind',
                                                           'annualizedRt'
                                                           ], dtype=float)
This works fine when all pieces of information are included, but often there is one variable that is not accounted for. A piece of the form might be blank, or an issuer category might not have been filled out correctly, leading to an IndexError. This portfolio has 127 securities that I was able to parse, but it might be missing an annualized return for a single security, throwing off the ability to neatly create a dataframe.
Additionally, for portfolios that hold both fixed income and equity securities, the equity securities do not return information for the debtSecs child. Is there a way to iterate through this data while simultaneously cleaning it in the easiest way possible? Even adding "NaN" for the debtSec children that equity securities don't reference would be a valid response. Any help would be much appreciated!
[1]: https://www.sec.gov/Archives/edgar/data/1618627/000114554923004968/primary_doc.xml
Here is the best way, in my opinion, to handle the problem. Generally speaking, EDGAR filings are notoriously difficult to parse, so the following may or may not work on other filings, even from the same filer.
To make it easier on yourself, since this is an XML file, you should use an xml parser and xpath. Given that you're looking to create a dataframe, the most appropriate tool would be the pandas read_xml() method.
Because the XML is nested, you will need to create two different dataframes and concatenate them (maybe others will have a better idea on how to approach it). And finally, although read_xml() can read directly from a URL, EDGAR requires a user-agent header, so you need to use the requests library as well.
So, all together:
#import required libraries
import pandas as pd
import requests
url = 'https://www.sec.gov/Archives/edgar/data/1618627/000114554923004968/primary_doc.xml'
#set headers with a user-agent
headers = {"User-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"}
req = requests.get(url, headers=headers)
#define the columns you want to drop (based on the data in your question)
to_drop = ['identifiers', 'curCd','valUSD','isRestrictedSec','fairValLevel','debtSec','securityLending']
#the filing uses namespaces (too complicated to get into here), so you need to define that as well
namespaces = {"nport": "http://www.sec.gov/edgar/nport"}
#create the first df, for the securities which are debt instruments
invest = pd.read_xml(req.text,xpath="//nport:invstOrSec[.//nport:debtSec]",namespaces=namespaces).drop(to_drop, axis=1)
#create the 2nd df, for the debt details:
debt = pd.read_xml(req.text,xpath="//nport:debtSec",namespaces=namespaces).iloc[:,0:3]
#finally, concatenate the two into one df:
pd.concat([invest, debt], axis=1)
This should output your 126 debt securities (pardon the formatting):
name lei title cusip balance units pctVal payoffProfile assetCat issuerCat invCountry maturityDt couponKind annualizedRt
0 ARROW BIDCO LLC 549300YHZN08M0H3O128 Arrow Bidco LLC 042728AA3 115000.00 PA 0.396755 Long DBT CORP US 2024-03-15 Fixed 9.50000
1 CD&R SMOKEY BUYER INC NaN CD&R Smokey Buyer Inc 12510CAA9 165000.00 PA 0.505585 Long DBT CORP US 2025-07-15 Fixed 6.75000
You can then play with the final df, add or drop columns, etc.
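For example (just a sketch; the column names are taken from the output above), you could keep a handful of columns or rename them:
final = pd.concat([invest, debt], axis=1)
# keep only a few columns (names as shown in the output above)
final = final[['name', 'cusip', 'pctVal', 'maturityDt', 'annualizedRt']]
# rename for readability
final = final.rename(columns={'pctVal': 'pct_of_portfolio', 'annualizedRt': 'annualized_rate'})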
I've cobbled together the following code that scrapes a website table using Beautiful Soup.
The script is working as intended except for the first two entries.
Q1: The first entry consists of two empty brackets... how do I omit them?
Q2: The second entry has a hidden tab creating whitespace in the second element that I can't get rid of. How do I remove it?
Code:
import requests
import pandas as pd
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
testlink = "https://www.crutchfield.com/p_13692194/JL-Audio-12TW3-D8.html?tp=64077"
r = requests.get(testlink, headers=headers)
soup = BeautifulSoup(r.content, 'lxml')
table = soup.find('table', class_='table table-striped')
df = pd.DataFrame(columns=['col1', 'col2'])
rows = []
for i, row in enumerate(table.find_all('tr')):
    rows.append([el.text.strip() for el in row.find_all('td')])

for row in rows:
    print(row)
Results:
[]
['Size', '12 -inch']
['Impedance (Ohms)', '4, 16']
['Cone Material', 'Mica-Filled IMPP']
['Surround Material', 'Rubber']
['Ideal Sealed Box Volume (cubic feet)', '1']
['Ideal Ported Box Volume (cubic feet)', '1.3']
['Port diameter (inches)', 'N/A']
['Port length (inches)', 'N/A']
['Free-Air', 'No']
['Dual Voice Coil', 'Yes']
['Sensitivity', '84.23 dB at 1 watt']
['Frequency Response', '24 - 200 Hz']
['Max RMS Power Handling', '400']
['Peak Power Handling (Watts)', '800']
['Top Mount Depth (inches)', '3 1/2']
['Bottom Mount Depth (inches)', 'N/A']
['Cutout Diameter or Length (inches)', '11 5/8']
['Vas (liters)', '34.12']
['Fs (Hz)', '32.66']
['Qts', '0.668']
['Xmax (millimeters)', '15.2']
['Parts Warranty', '1 Year']
['Labor Warranty', '1 Year']
Let's simplify, shall we?
import pandas as pd
df = pd.read_html('https://www.crutchfield.com/S-f7IbEJ40aHd/p_13692194/JL-Audio-12TW3-D8.html?tp=64077')[0]
df.columns = ['Property', 'Value', 'Not Needed']
print(df[['Property', 'Value']])
Result in terminal:
Property Value
0 Size 12 -inch
1 Impedance (Ohms) 4, 16
2 Cone Material Mica-Filled IMPP
3 Surround Material Rubber
4 Ideal Sealed Box Volume (cubic feet) 1
5 Ideal Ported Box Volume (cubic feet) 1.3
6 Port diameter (inches) NaN
7 Port length (inches) NaN
8 Free-Air No
9 Dual Voice Coil Yes
10 Sensitivity 84.23 dB at 1 watt
11 Frequency Response 24 - 200 Hz
12 Max RMS Power Handling 400
13 Peak Power Handling (Watts) 800
14 Top Mount Depth (inches) 3 1/2
15 Bottom Mount Depth (inches) NaN
16 Cutout Diameter or Length (inches) 11 5/8
17 Vas (liters) 34.12
18 Fs (Hz) 32.66
19 Qts 0.668
20 Xmax (millimeters) 15.2
21 Parts Warranty 1 Year
22 Labor Warranty 1 Year
Pandas documentation can be found here.
You can clean the results like this if you want.
rows = []
for i, row in enumerate(table.find_all('tr')):
    cells = [
        el.text.strip().replace("\t", "")  ## remove tabs
        for el
        in row.find_all('td')
    ]
    ## don't add a row with no tds
    if cells:
        rows.append(cells)
I think you can further simplify this with a walrus :=
rows = [
    [cell.text.strip().replace("\t", "") for cell in cells]
    for row in table.find_all('tr')
    if (cells := row.find_all('td'))
]
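If you then want the two-column DataFrame from your original snippet, you can build it straight from rows, assuming every remaining row has exactly two cells as in the results above:
df = pd.DataFrame(rows, columns=['col1', 'col2'])
print(df.head())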
Let pandas do it all; there's no need for anything else, since pandas can read tables inside HTML.
url='https://www.crutchfield.com/p_13692194/JL-Audio-12TW3-D8.html?tp=64077'
df=pd.read_html(url,attrs={'class':'table table-striped'})[0]
df.columns=['Features','Specs','Blank']
df.drop('Blank',axis=1,inplace=True) # get rid of the hidden column
That's it. The result looks fine to me, with no stray spaces. If you still feel there are spaces left in some column:
df['Features']=df['Features'].apply(lambda x:x.strip()) #Not Needed
If you need to pass headers in the request, you can pass the requests response content to pd.read_html (P.S.: it works without headers for the given URL):
df = pd.read_html(requests.get(url, headers=headers).content,
                  attrs={'class': 'table table-striped'})[0]
I want to download data from the USDA site with custom queries, so instead of manually selecting queries on the website, I am wondering how to do this more conveniently in Python. So far I have used requests to access the URL and read the content, but it is not intuitive to me how I should pass the queries, make a selection, and download the data as CSV. Does anyone know how to do this easily in Python? Is there any workaround to download the data from the URL with specific queries? Any ideas?
This is my current attempt; the URL below is the one I am trying to select data from with custom queries.
import io
import requests
import pandas as pd
url="https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1"
s=requests.get(url).content
df=pd.read_csv(io.StringIO(s.decode('utf-8')))
So before reading the requested data into pandas, I need to pass the following queries for correct data selection:
Category = "Retail"
Report Type = "Item"
Species = "Beef"
Region(s) = "National"
Start Dates = "2020-01-01"
End Date = "2021-02-08"
It is not intuitive to me how I should pass these queries with the request and then download the filtered data as CSV. Is there an efficient way of doing this in Python? Any thoughts? Thanks.
A few details:
The simplest format is text rather than HTML; I got the URL for the text download from the HTML page.
requests.get() takes params= as a dict, so I built the dict up and passed it, with no need to build the complete URL string myself.
The text is clearly space-delimited, with a minimum of two spaces between columns.
import io
import requests
import pandas as pd
url="https://www.marketnews.usda.gov/mnp/ls-report-retail"
p = {"repType":"summary","species":"BEEF","portal":"ls","category":"Retail","format":"text"}
r = requests.get(url, params=p)
df = pd.read_csv(io.StringIO(r.text), sep=r"\s\s+", engine="python")
         Date         Region Feature Rate Outlets Special Rate Activity Index
0  02/05/2021       NATIONAL       69.40%  29,200       20.10%         81,650
1  02/05/2021      NORTHEAST       75.00%   5,500        3.80%         17,520
2  02/05/2021      SOUTHEAST       70.10%   7,400       28.00%         23,980
3  02/05/2021        MIDWEST       75.10%   6,100       19.90%         17,430
4  02/05/2021  SOUTH CENTRAL       57.90%   4,900       26.40%          9,720
5  02/05/2021      NORTHWEST       77.50%   1,300        2.50%          3,150
6  02/05/2021      SOUTHWEST       63.20%   3,800       27.50%          9,360
7  02/05/2021         ALASKA       87.00%     200         .00%            290
8  02/05/2021         HAWAII       46.70%     100         .00%            230
Just format the query data in the url - it's actually a REST API:
To add more query data, as #mullinscr said, you can change the values on the left and press submit, then see the query name in the URL (for example, start date is called repDate).
If you hover over the Download as XML link, you will also discover that you can specify the download format using format=<format_name>. Parsing the tabular data in XML with pandas might be easier, so I would append format=xml at the end as well.
category = "Retail"
report_type = "Item"
species = "BEEF"
regions = "NATIONAL"
start_date = "01-01-2019"
end_date = "01-01-2021"
# the website changes "-" to "%2F"
start_date = start_date.replace("-", "%2F")
end_date = end_date.replace("-", "%2F")
url = f"https://www.marketnews.usda.gov/mnp/ls-report-retail?runReport=true&portal=ls&startIndex=1&category={category}&repType={report_type}&species={species}&region={regions}&repDate={start_date}&endDate={end_date}&compareLy=No&format=xml"
# parse with pandas, etc...
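As a rough sketch of that last step, assuming the returned XML is a flat list of records that pandas can read directly (you may need an xpath argument for the real structure, and the CSV file name here is just an example):
import io
import requests
import pandas as pd

r = requests.get(url)  # url built with format=xml as above
df = pd.read_xml(io.StringIO(r.text))
df.to_csv("usda_beef_retail.csv", index=False)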
I am looking to scrape Division 3 College Basketball stats from the following NCAA stats page:
https://stats.ncaa.org/rankings/change_sport_year_div
To get to the page I am on, after clicking the link, Select Sport = Men's Basketball, Year = 2019-2020, and Div = III
Upon clicking the link, there is a dropdown above the top left corner table. It is labeled "Additional Stats". For each stat there is a table which you can get an excel file of, but I want to be more efficient. I was thinking there could be a way to iterate through the dropdown bar using BeautifulSoup (or perhaps even pd.read_html) to get a dataframe for every stat listed. Is there a way to do this? Going through each stat manually, downloading the excel file, and reading the excel file into pandas would be a pain. Thank you.
Here is my suggestion: use a combination of requests, BeautifulSoup, and a great HTML table parser from Scott Rome (I modified the parse_html_table function a bit to remove \n and strip whitespace).
First, when you inspect the source code of the page, you can see that it takes the form "https://stats.ncaa.org/rankings/national_ranking?academic_year=2020.0&division=3.0&ranking_period=110.0&sport_code=MBB&stat_seq=145.0", for instance for stat 145, i.e. "Scoring Offense".
You can therefore use the following code on each of these urls by replacing the 145.0 with values corresponding to the different stats, which you can see when you inspect the source code of the page.
# <option value="625">3-pt Field Goal Attempts</option>
# <option value="474">Assist Turnover Ratio</option>
# <option value="216">Assists Per Game</option>
# ...
For a specific stat, here for instance scoring offense, you can use the following code to extract the table as a pandas DataFrame:
import pandas as pd
from bs4 import BeautifulSoup
import requests
el = "https://stats.ncaa.org/rankings/national_ranking?academic_year=2020.0&division=3.0&ranking_period=110.0&sport_code=MBB&stat_seq=145.0"
page = requests.get(el).content.decode('utf-8')
soup = BeautifulSoup(page, "html.parser")
ta = soup.find_all('table', {"id": "rankings_table"})
# Scott Rome function tweaked a bit
def parse_html_table(table):
    n_columns = 0
    n_rows = 0
    column_names = []

    # Find number of rows and columns
    # we also find the column titles if we can
    for row in table.find_all('tr'):

        # Determine the number of rows in the table
        td_tags = row.find_all('td')
        if len(td_tags) > 0:
            n_rows += 1
            if n_columns == 0:
                # Set the number of columns for our table
                n_columns = len(td_tags)

        # Handle column names if we find them
        th_tags = row.find_all('th')
        if len(th_tags) > 0 and len(column_names) == 0:
            for th in th_tags:
                column_names.append(th.get_text())

    # Safeguard on Column Titles
    if len(column_names) > 0 and len(column_names) != n_columns:
        raise Exception("Column titles do not match the number of columns")

    columns = column_names if len(column_names) > 0 else range(0, n_columns)
    df = pd.DataFrame(columns=columns,
                      index=range(0, n_rows))

    row_marker = 0
    for row in table.find_all('tr'):
        column_marker = 0
        columns = row.find_all('td')
        for column in columns:
            df.iat[row_marker, column_marker] = column.get_text()
            column_marker += 1
        if len(columns) > 0:
            row_marker += 1

    # remove \n
    for col in df:
        try:
            df[col] = df[col].str.replace("\n", "")
            df[col] = df[col].str.strip()
        except ValueError:
            pass

    # Convert to float if possible
    for col in df:
        try:
            df[col] = df[col].astype(float)
        except ValueError:
            pass

    return df
example = parse_html_table(ta[0])
The result is
Rank Team GM W-L PTS PPG
0 1 Greenville (SLIAC) 27.0 14-13 3,580 132.6
1 2 Grinnell (Midwest Conference) 25.0 13-12 2,717 108.7
2 3 Pacific (OR) (NWC) 25.0 7-18 2,384 95.4
3 4 Whitman (NWC) 28.0 20-8 2,646 94.5
4 5 Valley Forge (ACAA) 22.0 12-11 2,047 93.0
...
Now, what you have to do is apply this to all stat values mentioned above.
You can make a function of the code above and apply it in a for loop to the url "https://stats.ncaa.org/rankings/national_ranking?academic_year=2020.0&division=3.0&ranking_period=110.0&sport_code=MBB&stat_seq={}".format(stat), where stat is in the list of all possible values, as sketched below.
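A minimal sketch of that loop, reusing parse_html_table and the imports from above (only a few stat values are shown, taken from the dropdown options listed earlier; the ".0" suffix mirrors the example URL):
# stat_seq values from the dropdown (value -> stat name)
stats = {
    '145.0': 'Scoring Offense',
    '625.0': '3-pt Field Goal Attempts',
    '474.0': 'Assist Turnover Ratio',
}

all_stats = {}
for stat, stat_name in stats.items():
    stat_url = ("https://stats.ncaa.org/rankings/national_ranking?"
                "academic_year=2020.0&division=3.0&ranking_period=110.0"
                "&sport_code=MBB&stat_seq={}".format(stat))
    stat_page = requests.get(stat_url).content.decode('utf-8')
    stat_soup = BeautifulSoup(stat_page, "html.parser")
    tables = stat_soup.find_all('table', {"id": "rankings_table"})
    if tables:
        all_stats[stat_name] = parse_html_table(tables[0])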
Hope it helps.
Maybe a more concise way to do this:
import requests as rq
from bs4 import BeautifulSoup as bs
import pandas as pd
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:76.0) Gecko/20100101 Firefox/76.0"}
params = {"sport_code": "MBB", "stat_seq": "518", "academic_year": "2020.0", "division":"3.0", "ranking_period":"110.0"}
url = "https://stats.ncaa.org/rankings/national_ranking"
resp = rq.post(url, headers=headers, params=params)
soup = bs(resp.content)
colnames = [th.text.strip() for th in soup.find_all("thead")[0].find_all("th")]
data = [[td.text.strip() for td in tr.find_all('td')] for tr in soup.find_all('tbody')[0].find_all("tr")]
df = pd.DataFrame(data, columns=colnames)
df.astype({"GM": 'int32'}).dtypes # convert column in type u want
You have to look at the XHR requests [on Mozilla: F12 -> Network -> XHR].
When you select an item from the dropdown list, this makes a POST request to the following URL: https://stats.ncaa.org/rankings/national_ranking.
Some params are required to make this POST request; one of them is "stat_seq". Its value corresponds to the "value" attribute of the dropdown options.
The inspector gives you the list of value-to-stat-name correspondences:
<option value="625" selected="selected">3-pt Field Goal Attempts</option>
<option value="474">Assist Turnover Ratio</option>
<option value="216">Assists Per Game</option>
<option value="214">Blocked Shots Per Game</option>
<option value="859">Defensive Rebounds per Game</option>
<option value="642">Fewest Fouls</option>
...
...
...
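Putting that together, here is a small sketch of the POST request and table parsing (same endpoint and parameters as in the previous answer; swap stat_seq for whichever option value you want, and letting pandas read the returned table is just one convenient option):
import requests
import pandas as pd

url = "https://stats.ncaa.org/rankings/national_ranking"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:76.0) Gecko/20100101 Firefox/76.0"}
params = {"sport_code": "MBB", "academic_year": "2020.0", "division": "3.0",
          "ranking_period": "110.0", "stat_seq": "474"}  # 474 = Assist Turnover Ratio
resp = requests.post(url, headers=headers, params=params)

# pandas can parse the ranking table straight from the returned HTML
df = pd.read_html(resp.text, attrs={"id": "rankings_table"})[0]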
I'm trying to extract data through the json response of this link : https://www.bienici.com/recherche/achat/france?page=2
I have 2 problems:
- First, I want to scrape a house's parameters (price, area, city, zip code), but I don't know how.
- Second, I want to make a loop that goes through all the pages up to page 100.
This is the program:
import requests
from pandas.io.json import json_normalize
import csv
payload = {'filters': '{"size":24,"from":0,"filterType":"buy","newProperty":false,"page":2,"resultsPerPage":24,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}'}
url = 'https://www.bienici.com/realEstateAds.json'
response = requests.get(url, params = payload).json()
with open("selog.csv", "w", newline="") as f:
writer = csv.writer(f)
for prop in response['realEstateAds']:
title = prop['title']
city = prop['city']
desc = prop['description']
price = prop['price']
df = json_normalize(response['realEstateAds'])
df.to_csv('selog.csv', index=False)
writer.writerow([price,title,city,desc])
Hi, the first thing I notice is that you're writing the CSV twice: once with writer and once with .to_csv(). Depending on what you are trying to do you don't need both, but ultimately either would work; it just depends on how you iterate through the data.
Personally, I like working with pandas. I’ve had people tell me it’s a little overkill to store temp dataframes and append to a “final” dataframe, but it’s just what I’m comfortable doing and haven’t had issues with it, so I just used that.
To get the other data parts, you'll need to investigate what's all there and work your way through the JSON response to pull them out (if you're going the route of using the csv writer).
The pages are part of the payload parameters. To go through pages, just iterate that. The odd thing is, when I tried it, you not only have to iterate through pages but also the from parameter: since I have it fetching 60 per page, page 1 is from 0, page 2 is from 60, page 3 is from 120, etc., so I had it iterate through those multiples of 60. Sometimes it's possible to see how many pages you'll iterate through, but I couldn't find it, so I simply wrapped it in a try/except so that when it reaches the end, it breaks the loop. The only downside is that an unexpected error could occur earlier and stop it prematurely; I didn't look too much into that, just a side note.
So it would look something like this (it might take a while to go through all the pages, so I just did pages 1-10).
You can also, before saving to CSV, manipulate the dataframe to keep only the columns you want:
import requests
import pandas as pd
from pandas.io.json import json_normalize
tot_pages = 10
url = 'https://www.bienici.com/realEstateAds.json'
results_df = pd.DataFrame()
for page in range(1, tot_pages+1):
    try:
        payload = {'filters': '{"size":60,"from":%s,"filterType":"buy","newProperty":false,"page":%s,"resultsPerPage":60,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}' %((60 * (page-1)), page)}
        response = requests.get(url, params=payload).json()
        print('Processing Page: %s' % page)
        temp_df = json_normalize(response['realEstateAds'])
        results_df = results_df.append(temp_df).reset_index(drop=True)
    except:
        print('No more pages.')
        break

# To filter out to certain columns, un-comment below
#results_df = results_df[['city','district.name','postalCode','price','propertyType','surfaceArea','bedroomsQuantity','bathroomsQuantity']]

results_df.to_csv('selog.csv', index=False)
Output:
print(results_df.head(5).to_string())
city district.name postalCode price propertyType surfaceArea bedroomsQuantity bathroomsQuantity
0 Colombes Colombes - Fossés Jean Bouvier 92700 469000 flat 92.00 3.0 1.0
1 Nice Nice - Parc Impérial - Le Piol 06000 215000 flat 49.05 1.0 NaN
2 Nice Nice - Gambetta 06000 145000 flat 21.57 0.0 NaN
3 Cagnes-sur-Mer Cagnes-sur-Mer - Les Bréguières 06800 770000 house 117.00 3.0 3.0
4 Pau Pau - Le Hameau 64000 310000 house 110.00 3.0 2.0