I am trying to scrape historical data from a table on CoinMarketCap. However, the code I run gives back "No Data". I thought it would be fairly easy, but I'm not sure what I am missing.
url = "https://coinmarketcap.com/currencies/bitcoin/historical-data/"
data = requests.get(url)
bs=BeautifulSoup(data.text, "lxml")
table_body=bs.find('tbody')
rows = table_body.find_all('tr')
for row in rows:
cols=row.find_all('td')
cols=[x.text.strip() for x in cols]
print(cols)
Output:
C:\Users\Ejer\anaconda3\envs\pythonProject\python.exe C:/Users/Ejer/PycharmProjects/pythonProject/CloudSQL_test.py
['No Data']
Process finished with exit code 0
You don't need to scrape the data; you can request it directly from the API:
import time
import requests


def get_timestamp(datetime: str):
    return int(time.mktime(time.strptime(datetime, '%Y-%m-%d %H:%M:%S')))


def get_btc_quotes(start_date: str, end_date: str):
    start = get_timestamp(start_date)
    end = get_timestamp(end_date)
    url = f'https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical?id=1&convert=USD&time_start={start}&time_end={end}'
    return requests.get(url).json()


data = get_btc_quotes(start_date='2020-12-01 00:00:00',
                      end_date='2020-12-10 00:00:00')

import pandas as pd

# making A LOT of assumptions here, hopefully the keys don't change in the future
data_flat = [quote['quote']['USD'] for quote in data['data']['quotes']]
df = pd.DataFrame(data_flat)
print(df)
Output:
open high low close volume market_cap timestamp
0 18801.743593 19308.330663 18347.717838 19201.091157 3.738770e+10 3.563810e+11 2020-12-02T23:59:59.999Z
1 19205.925404 19566.191884 18925.784434 19445.398480 3.193032e+10 3.609339e+11 2020-12-03T23:59:59.999Z
2 19446.966422 19511.404714 18697.192914 18699.765613 3.387239e+10 3.471114e+11 2020-12-04T23:59:59.999Z
3 18698.385279 19160.449265 18590.193675 19154.231131 2.724246e+10 3.555639e+11 2020-12-05T23:59:59.999Z
4 19154.180593 19390.499895 18897.894072 19345.120959 2.529378e+10 3.591235e+11 2020-12-06T23:59:59.999Z
5 19343.128798 19411.827676 18931.142919 19191.631287 2.689636e+10 3.562932e+11 2020-12-07T23:59:59.999Z
6 19191.529463 19283.478339 18269.945444 18321.144916 3.169229e+10 3.401488e+11 2020-12-08T23:59:59.999Z
7 18320.884784 18626.292652 17935.547820 18553.915377 3.442037e+10 3.444865e+11 2020-12-09T23:59:59.999Z
8 18553.299728 18553.299728 17957.065213 18264.992107 2.554713e+10 3.391369e+11 2020-12-10T23:59:59.999Z
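As a small follow-up to the code above (just a usage note, assuming the timestamp key keeps this ISO format), you can turn that column into a proper DatetimeIndex:

# convert the ISO timestamp strings into real datetimes and use them as the index
df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.set_index('timestamp')
print(df.head())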
Your problem, basically, is that you're trying to scrape a table that is created dynamically by JavaScript, so you would need a JS interpreter to render it. However, you can simply check the network monitor in your browser and look at the requests the page makes; one of them will probably return the full raw data as JSON or XML, so you don't need to scrape at all. I did this and got the following request:
https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical?id=1&convert=USD&time_start=1604016000&time_end=1609286400
Check it out, I hope it helps!
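As a minimal sketch of using that endpoint (assuming its parameters stay as in the URL above: id=1 is Bitcoin and the timestamps are Unix seconds), you can fetch and decode it directly:

import requests

# the endpoint discovered in the network monitor; adjust the time range as needed
url = 'https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical'
params = {'id': 1, 'convert': 'USD', 'time_start': 1604016000, 'time_end': 1609286400}
data = requests.get(url, params=params).json()
print(data['data']['quotes'][0])  # first OHLCV record, assuming this key layout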
Related
I am trying to extract stock symbols (3rd column) from the table in the screener below:
https://chartink.com/screener/2-short-trend
and pass them on to a dataframe.
Due to my limited knowledge, I have hit a wall and cannot move past it.
My code is:
from requests_html import HTMLSession

session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()

for result in response.html.xpath('//*[@id="DataTables_Table_0"]/tbody/tr/td/a[1]'):
    print(f'{result.text}\n')
Output:
Mahindra & Mahindra Limited
M&M
P&F
Apollo Tyres Limited
APOLLOTYRE
P&F
....
I just need the stock symbols (M&M, APOLLOTYRE, etc.) passed into a dataframe.
Can someone please guide me?
Bit of a quick fix, but you could use a counter assuming that the relevant output is the second result for every company. Something like the below:
from requests_html import HTMLSession
import pandas as pd

session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()

i = 1
symbols = []
for result in response.html.xpath('//*[@id="DataTables_Table_0"]/tbody/tr/td/a[1]'):
    print(f'{result.text}\n')
    if i == 2:
        symbols.append(result.text)
        i -= 2
    else:
        i += 1

df = pd.DataFrame({"Symbol": symbols})
I structured i so that it triggers appending the result to the symbols list only at the position where the symbol itself is iterated over, and a dataframe is then created from that list. Running this code gave me a dataframe with the 5 symbols from your link.
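A slightly tidier alternative (a sketch, assuming the links always come in repeating groups of three: company name, symbol, "P&F") is to collect all the link texts and slice out every third item, starting from the second:

from requests_html import HTMLSession
import pandas as pd

session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()

links = response.html.xpath('//*[@id="DataTables_Table_0"]/tbody/tr/td/a[1]')
# every third link text, starting from the second one, is the symbol
symbols = [link.text for link in links][1::3]
df = pd.DataFrame({"Symbol": symbols})
print(df)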
I'm trying to build a cryptocurrency price tracker in Python (see code below). I'm working with Python 3.10.1 in Visual Studio Code.
import pandas_datareader.data as web
import datetime as dt

currency = 'EUR'
metric = 'Close'
crypto = ['BTC', 'ETH']
colnames = []
first = True

start = dt.datetime(2020, 1, 1)
end = dt.datetime.now()

for ticker in crypto:
    data = web.DataReader(f'{crypto}-{currency}', 'yahoo', start, end)
    if first:
        combined = data[[metric]].copy()
        colnames.append(ticker)
        combined.columns = colnames
        first = False
    else:
        combined = combined.join(data[metric])
        colnames.append(ticker)
        combined.columns = colnames
When I execute this code, I get the following error notification:
RemoteDataError: No data fetched for symbol ['BTC', 'ETH']-EUR using YahooDailyReader
When I change the variable crypto to only pull the prices for BTC the code works, but the output looks like this:
                      B            T            C
Date
2020-01-01  6417.781738  6417.781738  6417.781738
2020-01-02  6252.938477  6252.938477  6252.938477
2020-01-03  6581.735840  6581.735840  6581.735840
In the scenario of only pulling BTC, the variable colnames looks like this: colnames = ['B', 'T', 'C']. I suspect there's something wrong with that variable, and that it's potentially the reason why my code fails when I try to pull the data for multiple cryptocurrencies, but I can't quite figure it out and solve my problem.
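For what it's worth, here is a minimal sketch of a likely fix; this is an assumption based on the error message: the DataReader call formats the whole crypto list into the symbol instead of the loop variable ticker, which is also why iterating over the string 'BTC' produces colnames = ['B', 'T', 'C'].

import pandas_datareader.data as web
import datetime as dt
import pandas as pd

currency = 'EUR'
metric = 'Close'
crypto = ['BTC', 'ETH']

start = dt.datetime(2020, 1, 1)
end = dt.datetime.now()

frames = []
for ticker in crypto:
    # use the loop variable, not the whole list, to build the Yahoo symbol
    data = web.DataReader(f'{ticker}-{currency}', 'yahoo', start, end)
    frames.append(data[metric].rename(ticker))

combined = pd.concat(frames, axis=1)  # one column per ticker: BTC, ETH
print(combined.head())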
I'm looking for help with two main things: (1) scraping a web page and (2) turning the scraped data into a pandas dataframe (mostly so I can output as .csv, but just creating a pandas df is enough for now). Here is what I have done so far for both:
(1) Scraping the web site:
I am trying to scrape this page: https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015&id=1284178.015&id=1283809.015&id=1283549.015&id=1282631.015. My end goal is to create a dataframe that would ideally contain only the information I am looking for (i.e. I'd be able to select only the parts of the site that I am interested in for my df); it's OK if I have to pull in all the data for now.
As you can see from the URL as well as the ID hyperlinks underneath "Quick Link Reference" at the top of the page, there are five distinct records on this page. I would like each of these IDs/records to be treated as an individual row in my pandas df.
EDIT: Thanks to a helpful comment, I'm including an example of what I would ultimately want in the table below. The first row represents column headers/names and the second row represents the first inspection.
inspection_id open_date inspection_type close_conference close_case violations_serious_initial
1285328.015 12/28/2017 referral 12/28/2017 06/21/2018 2
Mostly relying on BeautifulSoup4, I've tried a few different options to get at the page elements I'm interested in:
# This is meant to give you the first instance of Case Status, which in the case of this page is "CLOSED".
case_status_template = html_soup.head.find('div', {"id": "maincontain"},
                                           class_="container").div.find('table', class_="table-bordered").find('strong').text
# I wasn't able to get the remaining Case Statuses with find_next_sibling or find_all, so I used a different method:
for table in html_soup.find_all('table', class_="table-bordered"):
    print(table.text)

# This gave me the output I needed (i.e. the Case Status for all five records on the page),
# but didn't give me the structure I wanted and didn't really allow me to connect to the other data on the page.
# I was also able to get to the same place with another page element, Inspection Details.
# This is the information reflected on the page after "Inspection: ", directly below Case Status.
insp_details_template = html_soup.head.find('div', {"id": "maincontain"},
                                            class_="container").div.find('table', class_="table-unbordered")

for div in html_soup.find_all('table', class_="table-unbordered"):
    print(div.text)
# Unfortunately, although I could get these two pieces of information to print,
# I realized I would have a hard time getting the rest of the information for each record.
# I also knew that it would be hard to connect/roll all of these up at the record level.
So, I tried a slightly different approach. By focusing instead on a version of that page with a single inspection record, I thought maybe I could just hack it by using this bit of code:
from requests import get
from bs4 import BeautifulSoup

url = 'https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015'
response = get(url)
html_soup = BeautifulSoup(response.text, 'html.parser')
first_table = html_soup.find('table', class_="table-bordered")
first_table_rows = first_table.find_all('tr')

for tr in first_table_rows:
    td = tr.find_all('td')
    row = [i.text for i in td]
    print(row)
# Then, actually using pandas to get the data into a df and out as a .csv.
import os
import pandas as pd

dfs_osha = pd.read_html('https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015', header=1)
for df in dfs_osha:
    print(df)

path = r'~\foo'
dfs_osha = pd.read_html('https://www.osha.gov/pls/imis/establishment.inspection_detail?id=1285328.015', header=1)
for df in dfs_osha:
    df.to_csv(os.path.join(path, r'osha_output_table1_012320.csv'))
# This worked better, but didn't actually give me all of the data on the page,
# and wouldn't be replicable for the other four inspection records I'm interested in.
So, finally, I found a pretty handy example here: https://levelup.gitconnected.com/quick-web-scraping-with-python-beautiful-soup-4dde18468f1f. I was trying to work through it, and had gotten as far as coming up with this code:
for elem in all_content_raw_lxml:
    wrappers = elem.find_all('div', class_="row-fluid")
    for x in wrappers:
        case_status = x.find('div', class_="text-center")
        print(case_status)
        insp_details = x.find('div', class_="table-responsive")
        for tr in insp_details:
            td = tr.find_all('td')
            td_row = [i.text for i in td]
            print(td_row)
        violation_items = insp_details.find_next_sibling('div', class_="table-responsive")
        for tr in violation_items:
            tr = tr.find_all('tr')
            tr_row = [i.text for i in tr]
            print(tr_row)
        print('---------------')
Unfortunately, I ran into too many bugs with this to be able to use it so I was forced to abandon the project until I got some further guidance. Hopefully the code I've shared so far at least shows the effort I've put in, even if it doesn't do much to get to the final output! Thanks.
For this type of page you don't really need beautifulsoup; pandas is enough.
url = 'your url above'
import pandas as pd

# use pandas to read the tables on the page; there are lots of them...
tables = pd.read_html(url)

# Select from this list of tables only those tables you need:
incident = []  # initialize a list of inspections

for i, table in enumerate(tables):  # we need the index position of this table in the list; more below
    if table.shape[1] == 5:  # all relevant tables have this shape
        case = []  # initialize a list of the inspection items you are interested in
        case.append(table.iat[1, 0])  # this is the location in the table of this particular item
        case.append(table.iat[1, 2].split(' ')[2])  # the string in the cell needs to be cleaned up a bit...
        case.append(table.iat[9, 1])
        case.append(table.iat[12, 3])
        case.append(table.iat[13, 3])
        # this particular item is in the table 2 positions down from the current one;
        # this is where the index position of the current table comes in handy
        case.append(tables[i + 2].iat[0, 1])
        incident.append(case)

columns = ["inspection_id", "open_date", "inspection_type", "close_conference", "close_case", "violations_serious_initial"]
df2 = pd.DataFrame(incident, columns=columns)
df2
Output (pardon the formatting):
inspection_id open_date inspection_type close_conference close_case violations_serious_initial
0 Nr: 1285328.015 12/28/2017 Referral 12/28/2017 06/21/2018 2
1 Nr: 1283809.015 12/18/2017 Complaint 12/18/2017 05/24/2018 5
2 Nr: 1284178.015 12/18/2017 Accident 05/17/2018 09/17/2018 1
3 Nr: 1283549.015 12/13/2017 Referral 12/13/2017 05/22/2018 3
4 Nr: 1282631.015 12/12/2017 Fat/Cat 12/12/2017 11/16/2018 1
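Since the stated end goal was a .csv file, the resulting dataframe can then simply be written out (the file name below is just an example):

# write the assembled dataframe to disk; pick whatever file name suits you
df2.to_csv('osha_inspections.csv', index=False)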
I'm trying to extract data through the json response of this link: https://www.bienici.com/recherche/achat/france?page=2
I have 2 problems:
- First, I want to scrape a house's parameters (price, area, city, zip code), but I don't know how.
- Secondly, I want to make a loop that goes through all the pages up to page 100.
This is the program:
import requests
from pandas.io.json import json_normalize
import csv

payload = {'filters': '{"size":24,"from":0,"filterType":"buy","newProperty":false,"page":2,"resultsPerPage":24,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}'}
url = 'https://www.bienici.com/realEstateAds.json'
response = requests.get(url, params=payload).json()

with open("selog.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for prop in response['realEstateAds']:
        title = prop['title']
        city = prop['city']
        desc = prop['description']
        price = prop['price']
        df = json_normalize(response['realEstateAds'])
        df.to_csv('selog.csv', index=False)
        writer.writerow([price, title, city, desc])
Hi, the first thing I notice is that you're writing the csv twice: once with writer and once with .to_csv(). Depending on what you're trying to do you don't need both, but ultimately either would work; it just changes how you iterate through the data.
Personally, I like working with pandas. I've had people tell me it's a little overkill to store temp dataframes and append them to a "final" dataframe, but it's what I'm comfortable doing and I haven't had issues with it, so that's what I used here.
To get the other data parts, you'll need to investigate what's available and work your way through the json structure to pull it out of the response (if you go the route of using the csv writer).
The pages are part of the payload parameters. To go through pages, just iterate over them. The odd thing is that you have to iterate not only the page parameter but also the from parameter: since I fetch 60 results per page, page 1 starts from 0, page 2 from 60, page 3 from 120, and so on, so the loop steps through those multiples of 60. Sometimes you can see in advance how many pages there are, but I couldn't find it here, so I simply wrapped the request in a try/except that breaks the loop when it reaches the end. The only downside is that an unexpected error earlier on would also stop it prematurely; I didn't look too much into that, just a side note.
So it would look something like this (it might take a while to go through all the pages, so I just did pages 1-10).
You can also, before saving to csv, manipulate the dataframe to keep only the columns you want:
import requests
import pandas as pd
from pandas.io.json import json_normalize

tot_pages = 10
url = 'https://www.bienici.com/realEstateAds.json'

results_df = pd.DataFrame()

for page in range(1, tot_pages + 1):
    try:
        payload = {'filters': '{"size":60,"from":%s,"filterType":"buy","newProperty":false,"page":%s,"resultsPerPage":60,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}' % ((60 * (page - 1)), page)}
        response = requests.get(url, params=payload).json()
        print('Processing Page: %s' % page)

        temp_df = json_normalize(response['realEstateAds'])
        results_df = results_df.append(temp_df).reset_index(drop=True)
    except:
        print('No more pages.')
        break

# To filter down to certain columns, un-comment below
#results_df = results_df[['city','district.name','postalCode','price','propertyType','surfaceArea','bedroomsQuantity','bathroomsQuantity']]

results_df.to_csv('selog.csv', index=False)
Output:
print(results_df.head(5).to_string())
city district.name postalCode price propertyType surfaceArea bedroomsQuantity bathroomsQuantity
0 Colombes Colombes - Fossés Jean Bouvier 92700 469000 flat 92.00 3.0 1.0
1 Nice Nice - Parc Impérial - Le Piol 06000 215000 flat 49.05 1.0 NaN
2 Nice Nice - Gambetta 06000 145000 flat 21.57 0.0 NaN
3 Cagnes-sur-Mer Cagnes-sur-Mer - Les Bréguières 06800 770000 house 117.00 3.0 3.0
4 Pau Pau - Le Hameau 64000 310000 house 110.00 3.0 2.0
I am trying to figure out how to print all tr elements from a table, but I can't quite get it working right.
Here is the link I am working with.
https://en.wikipedia.org/wiki/List_of_current_members_of_the_United_States_Senate
Here is my code.
import requests
from bs4 import BeautifulSoup

link = "https://en.wikipedia.org/wiki/List_of_current_members_of_the_United_States_Senate"
html = requests.get(link).text
# If you do not want to use requests, you can fetch the page with urllib instead.
# It should not cause any issue.

soup = BeautifulSoup(html, "lxml")
res = soup.findAll("span", {"class": "fn"})
for r in res:
    print("Name: " + r.find('a').text)

table_body = soup.find('senators')
rows = table_body.find_all('tr')
for row in rows:
    cols = row.find_all('td')
    cols = [x.text.strip() for x in cols]
    print(cols)
I am trying to print all tr elements from the table named 'senators'. Also, I am wondering if there is a way to click on links of senators, like 'Richard Shelby' which takes me to this:
https://en.wikipedia.org/wiki/Richard_Shelby
From each link, I want to grab the data under 'Assumed office'. In this case the value is: 'January 3, 2018'. So, ultimately, I want to end up with this:
Richard Shelby May 6, 1934 (age 84) Lawyer U.S. House
Alabama Senate January 3, 1987 2022
Assumed office: January 3, 2018
All I can get now is the name of each senator printed out.
In order to locate the "Senators" table, you can first find the corresponding "Senators" label and then get the first following table element:
soup.find(id='Senators').find_next("table")
Now, in order to get the data row by row, you would have to account for the cells with a "rowspan" which stretch across multiple rows. You can either follow the approaches suggested at What should I do when <tr> has rowspan, or the implementation I provide below (not ideal but works in your case).
import copy

import requests
from bs4 import BeautifulSoup

link = "https://en.wikipedia.org/wiki/List_of_current_members_of_the_United_States_Senate"

with requests.Session() as session:
    html = session.get(link).text

    soup = BeautifulSoup(html, "lxml")
    senators_table = soup.find(id='Senators').find_next("table")

    headers = [td.get_text(strip=True) for td in senators_table.tr('th')]
    rows = senators_table.find_all('tr')

    # pre-process table to account for rowspan, TODO: extract into a function
    for row_index, tr in enumerate(rows):
        for cell_index, td in enumerate(tr('td')):
            if 'rowspan' in td.attrs:
                rowspan = int(td['rowspan'])
                del td.attrs['rowspan']

                # insert same td into subsequent rows
                for index in range(row_index + 1, row_index + rowspan):
                    try:
                        rows[index]('td')[cell_index].insert_after(copy.copy(td))
                    except IndexError:
                        continue

    # extracting the desired data
    rows = senators_table.find_all('tr')[1:]
    for row in rows:
        cells = [td.get_text(strip=True) for td in row('td')]
        print(dict(zip(headers, cells)))
If you want to, then, follow the links to senator "profile" pages, you would first need to extract the link out of the appropriate cell in a row and then use session.get() to "navigate" to it, something along these lines:
senator_link = row.find_all('td')[3].a['href']
senator_link = urljoin(link, senator_link)
response = session.get(senator_link)
soup = BeautifulSoup(response.content, "lxml")
# TODO: parse
where urljoin is imported as:
from urllib.parse import urljoin
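And, to pull the "Assumed office" value the question asks about, a minimal sketch of that parsing step (assuming the standard officeholder infobox markup, where the date sits in the same cell as the label; the exact structure may vary per senator) could be:

# a sketch of the "# TODO: parse" step above
infobox = soup.find("table", class_="infobox")
if infobox is not None:
    label = infobox.find(string=lambda s: s and "Assumed office" in s)
    if label is not None:
        # the date usually follows the label inside the same cell
        cell_text = label.find_parent(["th", "td"]).get_text(" ", strip=True)
        assumed_office = cell_text.replace("Assumed office", "").strip()
        print("Assumed office:", assumed_office)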
Also, FYI, one of the reasons to use requests.Session() here is to optimize making requests to the same host:
The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use urllib3’s connection pooling. So if you’re making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase
There is also another way to get the tabular data parsed: .read_html() from pandas. You could do:
import pandas as pd
df = pd.read_html(str(senators_table))[0]
print(df.head())
to get the desired table as a dataframe.