I am currently creating a program where I want to fetch stock data from Yahoo Finance using the yahoo_finance module. However, I want to fetch data for 4 stocks using what I assume would be a loop. Here's the basic structure I thought of using:
from yahoo_finance import Share
ticker_symbols = ["YHOO", "GOOG", "AAPL"]
i = 0
while i < 4:
    company = Share(str(i))
    print(company.get_open())
    i += 1
The main problem I need assistance with is how to construct a loop that iterates over all the ticker_symbols. As you can tell from my attempt above, I am completely clueless, since I am new to Python. A secondary problem I have is how to fetch data from 30 days ago up to the current date using the module. Maybe I should have resorted to web scraping, but that seems so much more difficult.
To loop over a list you can just do:

for symbol in ticker_symbols:
    company = Share(symbol)

That's basic Python! I would advise you to follow a short tutorial to learn the Python basics.
You can get historical daily data using Share(symbol).get_historical(start_date, end_date), where the dates are 'YYYY-MM-DD' strings. Here you can find all the available methods for the package: https://pypi.python.org/pypi/yahoo-finance
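For the 30-days-back part of the question, here is a minimal sketch, assuming get_historical takes a start and an end date as 'YYYY-MM-DD' strings (which is what the PyPI page shows):

from datetime import date, timedelta
from yahoo_finance import Share

end = date.today()
start = end - timedelta(days=30)  # 30 days before today

for symbol in ["YHOO", "GOOG", "AAPL"]:
    company = Share(symbol)
    # returns a list of dicts, one per trading day in the range
    history = company.get_historical(start.isoformat(), end.isoformat())
    print(symbol, len(history), "days of data")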
good luck with that
You need to iterate over the ticker_symbols list and simply ditch the while loop:
from yahoo_finance import Share
ticker_symbols = ["YHOO", "GOOG", "AAPL"]
for i in ticker_symbols:
    company = Share(i)
    print(company.get_open())
I've been trying to learn Python over the weekend by doing a project, and I got stuck. I am making an API call to https://api.weather.gov/gridpoints/LOT/74,75/forecast . This is a 7- or 10-day forecast. What I am trying to accomplish is passing a date parameter as a string and returning the 'shortForecast' for that date. I have been able to use parsing to find what I'm looking for, but I can't seem to find something that will let me search and just return what I need. It also doesn't help that the dates in the API have a time appended to them, so I would have to search for only the first ten digits (2022-08-21). I'm not sure if that matters or not. I've looked into manipulating data dictionaries, but all my queries failed.
import requests
import json

local_weather_api = requests.get('https://api.weather.gov/gridpoints/LOT/74,75/forecast')
local_weather_data = local_weather_api.text
parse_json = json.loads(local_weather_data)

# current weather
# weatherWanted = parse_json['properties']['periods'][0]['detailedForecast']

# shortForecast
currentShortForecast = parse_json['properties']['periods'][0]['shortForecast']
currentShortForecast1 = parse_json['properties']['periods'][1]['shortForecast']
currentShortForecast2 = parse_json['properties']['periods'][3]['shortForecast']
currentShortForecast3 = parse_json['properties']['periods'][5]['shortForecast']
currentShortForecast4 = parse_json['properties']['periods'][7]['shortForecast']
currentShortForecast5 = parse_json['properties']['periods'][9]['shortForecast']
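The direction I was imagining, as an untested sketch (matching on the first ten characters of 'startTime' is just my guess at how to ignore the time part):

def short_forecast_for(parse_json, date_str):
    # date_str looks like "2022-08-21"; compare it to the date part of each period's startTime
    for period in parse_json['properties']['periods']:
        if period['startTime'][:10] == date_str:
            return period['shortForecast']
    return None

print(short_forecast_for(parse_json, "2022-08-21"))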
Any help or a point in the right direction would be appreciated. Thanks
I'm trying to fetch my Binance account's order history with the python-binance module. There is an option to get all orders for one symbol (see documentation):
orders = client.get_all_orders(symbol='BNBBTC', limit=10)
But the problem is I can't pass more than one coin in the symbol parameter.
How can I pass a list for the symbol parameter? I want to fetch the order history for more than one coin in a single function call, as I'm trying to build a portfolio for my Binance account. Or is there another method to do so?
Currently it is not possible to get all historical orders or trades in one call without specifying the symbol, even without the python-binance module.
There is an ongoing discussion on the Binance forum, requesting this feature.
As a workaround:
If you know your ordered symbols: call get_all_orders() once for each symbol in a loop (see the sketch after this list).
If you don't know your ordered symbols: you could send a GET request for each symbol available on Binance (as mentioned in the discussion linked above). But be careful with the rate limits.
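A minimal sketch of the first workaround (the symbol list and the sleep interval are placeholders, and the client assumes your API keys are already defined):

from binance.client import Client
import time

client = Client(api_key, api_secret)  # api_key / api_secret defined elsewhere

known_symbols = ['BNBBTC', 'ETHBTC']  # the symbols you know you traded
orders_by_symbol = {}
for sym in known_symbols:
    orders_by_symbol[sym] = client.get_all_orders(symbol=sym)
    time.sleep(0.2)  # stay well under the request rate limits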
I was asking myself the same thing. Well, a workaround would be to iterate over all the tickers available on Binance, looking for the ones we traded in the past.
If you are working with the free plan of the API, the best approach would be to set up a storage file or database and store all the results. Then you only have to keep up with the changes from there.
Yeah, that is exactly how I am going to deal with this.
Edit:
A sleep call will be needed to avoid exceeding 1200 queries per minute.
Example:
import time

# methods of a helper class that holds self.client = binance.client.Client(...)

def getAllTickers(self):
    # Get all available exchange tickers
    exchangeInfo = self.client.get_exchange_info()
    # Extract the tickers' general info
    exchangeSymbols = []
    for i in exchangeInfo['symbols']:
        exchangeSymbols.append(i)
    return exchangeSymbols

def getMyTrades(self, strSymbol):
    return self.client.get_my_trades(symbol=strSymbol)

def getMyTradedTickers(self):
    tickers = self.getAllTickers()
    # Keep every ticker where a trade happened
    traded = []
    for i in tickers:
        tickerTransactions = self.getMyTrades(i["symbol"])
        if tickerTransactions:
            traded.append(tickerTransactions)
            print(i["symbol"], " transactions available")
        else:
            print(i["symbol"], " has no transactions")
        time.sleep(0.1)  # throttle to respect the rate limits
    return traded
Sorry for the code quality. Python is not my main coding language and I'm still getting used to it.
I have a collection of check-in data from Foursquare, but it doesn't have information on the country where the check-in was made, only the coordinates. I need to know the country for the model I'm helping develop, so I wrote this script, which copies each document (since I don't want to mess with the original data) to another collection with the added country field.
The problem is, this runs extremely slowly. I estimated that, on my personal computer, it would take around 48 days to finish. I won't be running it on my personal computer, but I'd still rather not have it take too long. If that makes any difference, the computer I intend to run this on is running MongoDB version 3.4.7. If necessary, I can update it, but I would rather not.
Is there any way to do this more efficiently, while also making sure I don't have to start from the beginning in case the program dies midway?
from pymongo import MongoClient, ReplaceOne, errors
from geopy.geocoders import Nominatim

_client = MongoClient(port=27017)
_collection = _client.large_foursquare2014.checkins.find()
documents = list()
geolocator = Nominatim(user_agent="omitting the name i actually used on purpose")

i = 0
j = 0
for document in _collection:
    i += 1
    if i == 1000:
        # flush the accumulated replacements in batches of 1000
        print(j)
        j += 1
        _client.large_foursquare2014.teste.bulk_write(documents)
        documents.clear()
        i = 0
    # reverse-geocode the coordinates to get the country code
    address = geolocator.reverse((document['latitude'], document['longitude'])).raw['address']
    document['country2'] = address['country_code'].upper()
    documents.append(ReplaceOne({'_id': document['_id']}, document, upsert=True))

# write whatever is left over after the loop
_client.large_foursquare2014.teste.bulk_write(documents)
The bottleneck really was the geolocator calls. Performance increased by approximately five times after switching to the reverse_geocoder package.
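For reference, a minimal sketch of that replacement lookup, based on the reverse_geocoder README (the coordinates are just an example):

import reverse_geocoder as rg

# rg.search accepts a single (lat, lon) tuple or a list of tuples,
# so documents can be looked up in batches with no network round trips
result = rg.search((40.7128, -74.0060))
country_code = result[0]['cc']  # e.g. 'US'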
We are currently working on a project where we need to access the 'NP_' accession number from ClinVar. However, when we use the Entrez.efetch() function, this information appears to be missing from the result. Here is a link to the website page where the NP_ number is listed:
https://www.ncbi.nlm.nih.gov/clinvar/variation/558834/
And here is the Python sample script code that fetches the XML result:
handle = Entrez.efetch(db="clinvar", id=558834, rettype='variation', retmode="text")
print(handle.read())
Interestingly enough, this used to return the NP number in the results; however, it seems the website formatting/style changed since we last developed our Python script, and we cannot figure out how to retrieve the NP number now.
Any help would be greatly appreciated! Thank you for your time and input!
You need to format it as the new style of query, not the old one:
handle = Entrez.efetch(db="clinvar", id=558834, rettype='vcv', is_varationid="true", from_esearch="true")
print(handle.read())
See also: https://www.ncbi.nlm.nih.gov/clinvar/docs/maintenance_use/
I use Python mainly for data analysis, so I'm pretty used to pandas. But apart from basic HTML, I've little experience with web development.
For work I want to make a very simple webpage that, based on the address/query, populates a template page with info from an SQL database (even if it has to come from a dataframe or CSV first, that's fine for now). I've done searches, but I just don't know the keywords to ask for (hence sorry if this is a duplicate or the title isn't as clear as it could be).
What I'm imagining (simplest possible example, excuse my lack of knowledge here!). Example dataframe:
import pandas as pd
df = pd.DataFrame(index=[1,2,3], columns=["Header","Body"], data=[["a","b"],["c","d"],["e","f"]])
Out[1]:
Header Body
1 a b
2 c d
3 e f
The user requests a page, referencing index 2:
"example.com/database.html?id=2" # Or whatever the syntax is.
Output page (since id=2, it takes the row data from index 2, so "c" and "d"):
<html><body>
Header<br>
c<p>
Body<br>
d<p>
</body></html>
It should be pretty simple, right? But where do I start? Which Python library? I hear about Django and Flask, but are they overkill for this? Is there an example I could follow? And lastly, how does the syntax work for the webpage address?
Cheers!
PS: I realise I should probably just query the SQL database directly and cut out the pandas middle-man; it's just that I'm more familiar with pandas, hence the example above.
Edit: I a word.
You can start with Flask. It is easy to set up and there are lots of good resources online.
Start with the minimal web app in the quickstart: http://flask.pocoo.org/docs/1.0/quickstart/
Example snippet
from flask import Flask, request
import pandas as pd

app = Flask(__name__)

@app.route('/database')
def database():
    # returns None if 'id' is missing from the query string
    id = request.args.get('id', type=int)
    df = pd.DataFrame(index=[1, 2, 3], columns=["Header", "Body"],
                      data=[["a", "b"], ["c", "d"], ["e", "f"]])
    header = df.loc[id, "Header"]
    body = df.loc[id, "Body"]
    return '''<html><body>Header<br>{}<p>Body<br>{}<p></body></html>'''.format(header, body)
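To try it locally, assuming the snippet above is saved as app.py (the port below is just the Flask default):

if __name__ == '__main__':
    # open http://127.0.0.1:5000/database?id=2 in a browser; everything after the "?"
    # is the query string, and request.args.get('id') reads the id value from it
    app.run(debug=True)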
For a more detailed webpage, add a template.
Good luck