Extract single data point from multiple, web scraping - Python

I am trying to extract stock symbols (the 3rd column) from the table in the screener below:
https://chartink.com/screener/2-short-trend
and pass them into a dataframe.
Due to my limited knowledge, I have hit a wall and cannot move past it.
My code is:
from requests_html import HTMLSession
session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()
for result in response.html.xpath('//*[@id="DataTables_Table_0"]/tbody/tr/td/a[1]'):
    print(f'{result.text}\n')
Output:
Mahindra & Mahindra Limited
M&M
P&F
Apollo Tyres Limited
APOLLOTYRE
P&F
....
I just need the stock symbols (M&M, APOLLOTYRE, etc.) passed into a dataframe.
Can someone please guide me?

Bit of a quick fix, but you could use a counter assuming that the relevant output is the second result for every company. Something like the below:
from requests_html import HTMLSession
import pandas as pd
session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()
i = 1
symbols = []
for result in response.html.xpath('//*[@id="DataTables_Table_0"]/tbody/tr/td/a[1]'):
    print(f'{result.text}\n')
    if i == 2:
        symbols.append(result.text)
        i -= 2
    else:
        i += 1
df = pd.DataFrame({"Symbol": symbols})
I structured i to trigger appending the result to the symbols list at the position where the symbol is iterated over, and then a dataframe is created from that output. Using that code gave me a dataframe with the 5 symbols from your link.
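As an alternative that avoids the counter entirely, here is a minimal sketch that hands the rendered page to pandas.read_html and keeps only the symbol column (it assumes the table header labels that column "Symbol"; adjust the name if it differs):

from requests_html import HTMLSession
import pandas as pd

session = HTMLSession()
response = session.get('https://chartink.com/screener/2-short-trend')
response.html.render()

# parse the rendered screener table straight into a dataframe
tables = pd.read_html(response.html.html, attrs={'id': 'DataTables_Table_0'})
df = tables[0][['Symbol']]
print(df)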


Dealing with missing values when scraping with bs4

This is my script to scrape odds from a particular website (it should also work outside my country; I don't think there are restrictions yet):
from selenium import webdriver
from time import sleep
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
odds=[]
home=[]
away=[]
url = "https://www.efbet.it/scommesse/calcio/serie-c_1_31_-418"
driver = webdriver.Chrome(r"C:\chromedriver.exe")
driver.get(url)
sleep(5)
#driver.maximize_window()
#driver.find_element_by_id('onetrust-accept-btn-handler').click()
soup = BeautifulSoup(driver.page_source, "html.parser")
id = soup.find(class_="contenitore-table-grande")
for a in id.select("p[class*='tipoQuotazione_1']"):
    odds.append(a.text)
for a in id.select("p[class*='font-weight-bold m-0 text-right']"):
    home.append(a.text)
for a in id.select("p[class*='font-weight-bold m-0 text-left']"):
    away.append(a.text)
a=np.asarray(odds)
newa= a.reshape(42,10)
df = pd.DataFrame(newa)
df1 = pd.DataFrame(home)
df2 = pd.DataFrame(away)
dftot = pd.concat([df1, df2, df], axis=1)
Now it works fine (I'm aware it could be written in a better and cleaner way), but there's an issue: when new odds are published by the website, sometimes some kinds of them are missing (i.e. under/over or double chance 1X 12 X2). So I would need to put a zero or null value where they are missing; otherwise my array would not correspond in length, and the odds would not line up with their respective matches.
With inspection I see that when a value is missing there is simply no text in the tipoQuotazione class:
<p class="tipoQuotazione_1">1.75</p> with value
<p class="tipoQuotazione_1"></p> when missing
Is there a way to handle this?
Thanks!
... when new odds are published by the website, sometimes some kinds of
them are missing ...
As a better design suggestion: this is not the only problem you might end up with. What if the website changes a class name? That would break your code as well.
... sometimes some kinds of them are missing (i.e. under/over or double
chance 1X 12 X2). So I would need to put a zero or null value where
they are missing ...
for a in id.select("p[class*='tipoQuotazione_1']"):
    # if a.text == "" default to 0.0
    odds.append(a.text or 0.0)
Or you can do it with an if statement:
if not a.text:
    odds.append(0.0)
else:
    odds.append(a.text)
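Building on the selector from the question, a small sketch that also converts the odds to numbers, defaulting to 0.0 whenever a tipoQuotazione_1 paragraph is empty (it assumes the dot-decimal format shown in your example):

odds = []
for a in id.select("p[class*='tipoQuotazione_1']"):
    text = a.text.strip()
    # an empty <p> means the odd is missing -> store 0.0 so the array keeps its shape
    odds.append(float(text) if text else 0.0)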

Any way to download data with custom queries from a URL in Python?

I want to download data from the USDA site with custom queries, so instead of manually selecting the queries on the website, I am thinking about how to do this more handily in Python. I used requests to access the URL and read the content, but it is not intuitive to me how I should pass the queries, make a selection, and download the data as CSV. Does anyone know an easy way of doing this in Python? Is there any workaround to download the data from the URL with specific queries? Any idea?
This is my current attempt.
Here is the URL from which I am going to select data with custom queries:
import io
import requests
import pandas as pd
url="https://www.marketnews.usda.gov/mnp/ls-report-retail?&repType=summary&portal=ls&category=Retail&species=BEEF&startIndex=1"
s=requests.get(url).content
df=pd.read_csv(io.StringIO(s.decode('utf-8')))
So before reading the requested data into pandas, I need to pass the following queries for the correct data selection:
Category = "Retail"
Report Type = "Item"
Species = "Beef"
Region(s) = "National"
Start Dates = "2020-01-01"
End Date = "2021-02-08"
It is not intuitive to me how I should pass these queries with the request and then download the filtered data as CSV. Is there an efficient way of doing this in Python? Any thoughts? Thanks
A few details:
- The simplest format is text rather than HTML. I got the URL from the HTML page's text-download link.
- requests takes params= as a dict. Build it up and pass it in; there is no need to build the complete URL string yourself.
- The text is clearly space-delimited, with a minimum of a double space between columns.
import io
import requests
import pandas as pd

url = "https://www.marketnews.usda.gov/mnp/ls-report-retail"
p = {"repType": "summary", "species": "BEEF", "portal": "ls", "category": "Retail", "format": "text"}
r = requests.get(url, params=p)
df = pd.read_csv(io.StringIO(r.text), sep=r"\s\s+", engine="python")
Output:
   Date        Region         Feature Rate  Outlets  Special Rate  Activity Index
0  02/05/2021  NATIONAL       69.40%        29,200   20.10%        81,650
1  02/05/2021  NORTHEAST      75.00%         5,500    3.80%        17,520
2  02/05/2021  SOUTHEAST      70.10%         7,400   28.00%        23,980
3  02/05/2021  MIDWEST        75.10%         6,100   19.90%        17,430
4  02/05/2021  SOUTH CENTRAL  57.90%         4,900   26.40%         9,720
5  02/05/2021  NORTHWEST      77.50%         1,300    2.50%         3,150
6  02/05/2021  SOUTHWEST      63.20%         3,800   27.50%         9,360
7  02/05/2021  ALASKA         87.00%           200     .00%           290
8  02/05/2021  HAWAII         46.70%           100     .00%           230
Just format the query data in the URL - it's actually a REST API.
To add more query data, as @mullinscr said, you can change the values on the left and press submit, then see the query name in the URL (for example, the start date is called repDate).
If you hover over the Download as XML link, you will also discover that you can specify the download format using format=<format_name>. Parsing the tabular data as XML with pandas might be easier, so I would append format=xml at the end as well.
category = "Retail"
report_type = "Item"
species = "BEEF"
regions = "NATIONAL"
start_date = "01-01-2019"
end_date = "01-01-2021"
# the site's URL encodes the date separator as "%2F" (a URL-encoded "/"), so swap the "-" for it
start_date = start_date.replace("-", "%2F")
end_date = end_date.replace("-", "%2F")
url = f"https://www.marketnews.usda.gov/mnp/ls-report-retail?runReport=true&portal=ls&startIndex=1&category={category}&repType={report_type}&species={species}&region={regions}&repDate={start_date}&endDate={end_date}&compareLy=No&format=xml"
# parse with pandas, etc...
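If you prefer not to build the URL string by hand, here is a sketch of the same request using a params dict, so requests does the URL encoding (including the "/" in the dates) for you; it reuses the text format and parsing shown in the answer above:

import io
import requests
import pandas as pd

url = "https://www.marketnews.usda.gov/mnp/ls-report-retail"
params = {
    "portal": "ls",
    "category": "Retail",
    "repType": "Item",        # report type, as in the URL above
    "species": "BEEF",
    "region": "NATIONAL",
    "repDate": "01/01/2019",  # requests encodes the "/" as %2F automatically
    "endDate": "01/01/2021",
    "format": "text",         # text parses easily with read_csv, as shown earlier
}
r = requests.get(url, params=params)
df = pd.read_csv(io.StringIO(r.text), sep=r"\s\s+", engine="python")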

Getting no data when scraping a table

I am trying to scrape historical data from a table on coinmarketcap. However, the code that I run gives back "No Data". I thought it would be fairly easy, but I'm not sure what I am missing.
url = "https://coinmarketcap.com/currencies/bitcoin/historical-data/"
data = requests.get(url)
bs=BeautifulSoup(data.text, "lxml")
table_body=bs.find('tbody')
rows = table_body.find_all('tr')
for row in rows:
cols=row.find_all('td')
cols=[x.text.strip() for x in cols]
print(cols)
Output:
C:\Users\Ejer\anaconda3\envs\pythonProject\python.exe C:/Users/Ejer/PycharmProjects/pythonProject/CloudSQL_test.py
['No Data']
Process finished with exit code 0
You don't need to scrape the page; you can request the data directly:
import time
import requests

def get_timestamp(datetime: str):
    return int(time.mktime(time.strptime(datetime, '%Y-%m-%d %H:%M:%S')))

def get_btc_quotes(start_date: str, end_date: str):
    start = get_timestamp(start_date)
    end = get_timestamp(end_date)
    url = f'https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical?id=1&convert=USD&time_start={start}&time_end={end}'
    return requests.get(url).json()

data = get_btc_quotes(start_date='2020-12-01 00:00:00',
                      end_date='2020-12-10 00:00:00')

import pandas as pd
# making A LOT of assumptions here, hopefully the keys don't change in the future
data_flat = [quote['quote']['USD'] for quote in data['data']['quotes']]
df = pd.DataFrame(data_flat)
print(df)
Output:
open high low close volume market_cap timestamp
0 18801.743593 19308.330663 18347.717838 19201.091157 3.738770e+10 3.563810e+11 2020-12-02T23:59:59.999Z
1 19205.925404 19566.191884 18925.784434 19445.398480 3.193032e+10 3.609339e+11 2020-12-03T23:59:59.999Z
2 19446.966422 19511.404714 18697.192914 18699.765613 3.387239e+10 3.471114e+11 2020-12-04T23:59:59.999Z
3 18698.385279 19160.449265 18590.193675 19154.231131 2.724246e+10 3.555639e+11 2020-12-05T23:59:59.999Z
4 19154.180593 19390.499895 18897.894072 19345.120959 2.529378e+10 3.591235e+11 2020-12-06T23:59:59.999Z
5 19343.128798 19411.827676 18931.142919 19191.631287 2.689636e+10 3.562932e+11 2020-12-07T23:59:59.999Z
6 19191.529463 19283.478339 18269.945444 18321.144916 3.169229e+10 3.401488e+11 2020-12-08T23:59:59.999Z
7 18320.884784 18626.292652 17935.547820 18553.915377 3.442037e+10 3.444865e+11 2020-12-09T23:59:59.999Z
8 18553.299728 18553.299728 17957.065213 18264.992107 2.554713e+10 3.391369e+11 2020-12-10T23:59:59.999Z
Your problem is basically that you're trying to get a table that is dynamically created by JS, so in that case you would need a JS interpreter to render it. However, you can just check the network monitor in your browser, look at the requests the page makes, and you will probably find one that returns the full raw data as JSON or XML, so you don't need to scrape at all. I did that and got this request:
https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical?id=1&convert=USD&time_start=1604016000&time_end=1609286400
Check it out, I hope it helps!
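For completeness, a short sketch that loads that request straight into a dataframe; it reuses the endpoint above and the response layout shown in the first answer (data -> quotes -> quote -> USD), which may of course change:

import requests
import pandas as pd

url = ('https://web-api.coinmarketcap.com/v1/cryptocurrency/ohlcv/historical'
       '?id=1&convert=USD&time_start=1604016000&time_end=1609286400')
data = requests.get(url).json()

# flatten each daily quote's USD block into one row
df = pd.DataFrame([q['quote']['USD'] for q in data['data']['quotes']])
print(df.head())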

I can't get wanted parameters through the json response (web-scraping)

I'm trying to extract data through the JSON response of this link: https://www.bienici.com/recherche/achat/france?page=2
I have 2 problems:
- first, I want to scrape a house's parameters (price, area, city, zip code), but I don't know how;
- secondly, I want to make a loop that goes through all the pages up to page 100.
This is the program:
import requests
from pandas.io.json import json_normalize
import csv
payload = {'filters': '{"size":24,"from":0,"filterType":"buy","newProperty":false,"page":2,"resultsPerPage":24,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}'}
url = 'https://www.bienici.com/realEstateAds.json'
response = requests.get(url, params = payload).json()
with open("selog.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for prop in response['realEstateAds']:
        title = prop['title']
        city = prop['city']
        desc = prop['description']
        price = prop['price']
        df = json_normalize(response['realEstateAds'])
        df.to_csv('selog.csv', index=False)
        writer.writerow([price, title, city, desc])
Hi, the first thing I notice is that you're writing the CSV twice: once with writer and once with .to_csv(). Depending on what you are trying to do, you don't need both, but ultimately either would work; it just depends on how you then iterate through the data.
Personally, I like working with pandas. I've had people tell me it's a little overkill to store temp dataframes and append them to a "final" dataframe, but it's just what I'm comfortable doing, I haven't had issues with it, so that's what I used.
To get the other data parts, you'll need to investigate what's all there and work your way through the JSON format to pull it out of the response (if you're going the route of using the csv writer).
The pages are part of the payload parameters. To go through pages, just iterate that. The weird thing is, when I tried that, you not only have to iterate through pages but also through the from parameter. I.e., since I have it fetching 60 per page, page 1 is from 0, page 2 is from 60, page 3 is from 120, etc. So I had it iterate through those multiples of 60 (it seems to handle it). Sometimes it's possible to see how many pages you'll iterate through, but I couldn't find it, so I simply left it as a try/except, so that when it reaches the end, it breaks the loop. The only downside is that an unexpected error earlier on would also stop it prematurely; I didn't look too much into that, but just as a side note.
So it would look something like this (it might take a while to go through all the pages, so I just did pages 1-10).
You can also manipulate the dataframe before saving to CSV to keep only the columns you want:
import requests
import pandas as pd
from pandas.io.json import json_normalize

tot_pages = 10
url = 'https://www.bienici.com/realEstateAds.json'
results_df = pd.DataFrame()
for page in range(1, tot_pages+1):
    try:
        payload = {'filters': '{"size":60,"from":%s,"filterType":"buy","newProperty":false,"page":%s,"resultsPerPage":60,"maxAuthorizedResults":2400,"sortBy":"relevance","sortOrder":"desc","onTheMarket":[true],"limit":"ih{eIzjhZ?q}qrAzaf}AlrD?rvfrA","showAllModels":false,"blurInfoType":["disk","exact"]}' %((60 * (page-1)), page)}
        response = requests.get(url, params = payload).json()
        print ('Processing Page: %s' %page)
        temp_df = json_normalize(response['realEstateAds'])
        results_df = results_df.append(temp_df).reset_index(drop=True)
    except:
        print ('No more pages.')
        break

# To filter down to certain columns, un-comment below
#results_df = results_df[['city','district.name','postalCode','price','propertyType','surfaceArea','bedroomsQuantity','bathroomsQuantity']]
results_df.to_csv('selog.csv', index=False)
Output:
print(results_df.head(5).to_string())
city district.name postalCode price propertyType surfaceArea bedroomsQuantity bathroomsQuantity
0 Colombes Colombes - Fossés Jean Bouvier 92700 469000 flat 92.00 3.0 1.0
1 Nice Nice - Parc Impérial - Le Piol 06000 215000 flat 49.05 1.0 NaN
2 Nice Nice - Gambetta 06000 145000 flat 21.57 0.0 NaN
3 Cagnes-sur-Mer Cagnes-sur-Mer - Les Bréguières 06800 770000 house 117.00 3.0 3.0
4 Pau Pau - Le Hameau 64000 310000 house 110.00 3.0 2.0
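If you only need a handful of fields per ad (the price, area, city and zip code from the question), here is a small sketch that pulls them out of a single page's response['realEstateAds'] with .get(), so a missing key doesn't raise an error; the field names other than title/city/price are taken from the commented column list above, and surfaceArea is assumed to be the area:

import pandas as pd

# `response` is the parsed JSON of one page, as in the question's code
rows = []
for ad in response['realEstateAds']:
    rows.append({
        'title': ad.get('title'),
        'city': ad.get('city'),
        'postalCode': ad.get('postalCode'),
        'price': ad.get('price'),
        'surfaceArea': ad.get('surfaceArea'),  # assumed to be the area in m2
    })
df = pd.DataFrame(rows)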

More efficient way of looping over a name list in Python

I"m playing around with Bittrex's API to get the current price of a coin. (E.g: btc-ltc). So in this case, the API will read:
import requests
import pandas

r = requests.get('https://bittrex.com/api/v1.1/public/getticker?market=BTC-LTC').json()
pd = pandas.DataFrame(r)
print(pd)
If I want to get the current price of maybe 50 or 200 different coins, I wrote a loop to replace BTC-LTC with that particular market's coin name (part of another API on Bittrex):
for i in marketnames:
    r = requests.get('https://bittrex.com/api/v1.1/public/getticker?market={names}'.format(names=i)).json()
    pd = pandas.DataFrame(r)
    print(pd)
The problem with this loop is that it goes through the list of coin names one by one, making 200 requests to get the prices.
Is there a more efficient way of doing this?
Was there a typo in your code? If you iterate through the marketnames list, then you should use i in your code, as below:
for i in marketnames:
    r = requests.get('https://bittrex.com/api/v1.1/public/getticker?market={names}'.format(names=i)).json()
    pd = pandas.DataFrame(r)
    print(pd)
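As for the efficiency question itself: a sketch (not from either post) that reuses one HTTP session and fires the requests concurrently with a thread pool, which usually helps far more than changing the loop body; the marketnames list below is just a placeholder:

import requests
import pandas
from concurrent.futures import ThreadPoolExecutor

marketnames = ['BTC-LTC', 'BTC-ETH', 'BTC-DOGE']  # placeholder list

session = requests.Session()  # reuse one connection pool instead of reconnecting each time

def get_ticker(name):
    url = 'https://bittrex.com/api/v1.1/public/getticker?market={names}'.format(names=name)
    return session.get(url).json()

# fetch several tickers in parallel instead of one after another
with ThreadPoolExecutor(max_workers=10) as pool:
    responses = list(pool.map(get_ticker, marketnames))

df = pandas.DataFrame(responses, index=marketnames)
print(df)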
