I am quite new to coding, so pardon me if this is basic stuff. I did not even know what multithreading was until I hit this error and started to look around.
When I run the following code, I get RuntimeError: can't start new thread
stock_prices_daily = yf.download(ticker_daily, group_by='Ticker', start = '1990-01-01')
ticker_daily is a list of more than 5467 tickers. I have already checked that all of these tickers have data on Yahoo Finance (I was able to get price data for all of them).
I use the Spyder console with Anaconda.
As a workaround I use the loop below, but I think it is slowing down the code, and I would like to understand if there is a better approach to managing multithreading with yfinance.
for i in range(len(ticker_daily)):
    temp_price = yf.download(ticker_daily[i], start='1990-01-01')
    temp_price['Ticker'] = ticker_daily[i]
    if i == 0:
        stock_prices = temp_price
    else:
        stock_prices = stock_prices.append(temp_price)
    print(i)
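For what it's worth, a batched variant of this workaround might look like the sketch below (a sketch only: batch_size is an arbitrary choice, and threads=False disables yfinance's internal thread pool):
import pandas as pd
import yfinance as yf

batch_size = 100  # arbitrary; keeps the number of simultaneous downloads small
frames = []
for start in range(0, len(ticker_daily), batch_size):
    batch = ticker_daily[start:start + batch_size]
    # threads=False tells yfinance to download the batch sequentially
    frames.append(yf.download(batch, group_by='Ticker', start='1990-01-01', threads=False))
stock_prices_daily = pd.concat(frames, axis=1)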
I am having a go at using the sentinelsat python API to download satellite imagery. However, I am receiving error messages when I try to convert to a pandas dataframe. This code works and downloads my requested sentinel satellite images:
from sentinelsat import SentinelAPI, read_geojson, geojson_to_wkt
from datetime import date
api = SentinelAPI('*****', '*****', 'https://scihub.copernicus.eu/dhus')
footprint = geojson_to_wkt(read_geojson('testAPIpoly.geojson'))
products = api.query(footprint, cloudcoverpercentage = (0,10))
#this works
api.download_all(products)
However, if I instead attempt to convert to a pandas dataframe first:
#api.download_all(products)
#this does not work
products_df = api.to_dataframe(products)
api.download_all(products_df)
I receive an extensive error message that includes
"sentinelsat.sentinel.SentinelAPIError: HTTP status 500 Internal Server Error: InvalidKeyException : Invalid key (processed) to access Products
"
(where processed is variously replaced with title, platformname, processingbaseline, etc.). I've tried a few different ways to convert to a dataframe and to filter/sort the results, and I have received an error message every time (note: I have pandas/geopandas installed). How can I convert to a dataframe and filter/sort with the sentinelsat API?
Instead of
api.download_all(products_df)
try
api.download_all(products_df.index)
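This works because the dataframe that to_dataframe returns is indexed by product ID, which is the iterable download_all expects. It also means you can filter or sort the frame before downloading; for example (a sketch, assuming the cloudcoverpercentage column is present in your results):
products_df = api.to_dataframe(products)
# keep only the five least cloudy products (illustrative filter)
least_cloudy = products_df.sort_values('cloudcoverpercentage').head(5)
api.download_all(least_cloudy.index)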
Hi everyone,
I'm trying to wrap my head around Microsoft SQL Server 2017 and Python scripts.
In general, I'm trying to scrape a table from a website (using bs4),
store it in a pandas dataframe, and then simply put the results in a temp SQL table.
I entered the following code (I'm skipping parts of the code because the Python script
itself works in Python; keep in mind I'm calling the script from Microsoft SQL Server 2017):
CREATE PROC OTC
AS
BEGIN
    EXEC sp_execute_external_script
        @language = N'Python',
        @script = N'
import bs4 as bs
import pandas as pd
import requests
....
r = requests.get(url, verify = False)
html = r.text
soup = bs.BeautifulSoup(html, "html.parser")
data_date = str(soup.find(id="ctl00_SPWebPartManager1_g_4be2cf24_5a47_472d_a6ab_4248c8eb10eb_ctl00_lDate").contents)
t_tab1 = soup.find(id="ctl00_SPWebPartManager1_g_4be2cf24_5a47_472d_a6ab_4248c8eb10eb_ctl00_NiaROGrid1_DataGrid1")
df = parse_html_table(1,t_tab1)
print(df)
OutputDataSet=df
'
END
I tried the Microsoft tutorials and simply couldn't understand how to
handle the inputs/outputs to get the result as a SQL table.
Furthermore, I get the error
"
import bs4 as bs
ImportError: No module named 'bs4'
"
I'm obviously missing a lot here.
What do I need to add to the SQL code?
Does SQL Server even support bs4, or only pandas?
If not, do I need to find another solution, like writing to a CSV?
Thanks for any help or advice you can offer
To use pip to install a Python package on SQL Server 2017:
On the server, open a command prompt as administrator.
Then cd to {instance directory}\PYTHON_SERVICES\Scripts
(for example: C:\Program Files\Microsoft SQL Server\MSSQL14.SQL2017\PYTHON_SERVICES\Scripts).
Then execute pip install {package name}.
Once you have the necessary package(s) installed and the script executes successfully, simply setting the variable OutputDataSet to a pandas data frame will result in the contents of that data frame being returned as a result set from the stored procedure.
If you want to capture that result set in a table (perhaps a temporary table), you can use INSERT...EXEC (e.g. INSERT MyTable(Col1, Col2) EXEC sp_execute_external_script ...).
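For example, a minimal sketch of that pattern (the temp table name and columns are illustrative; they must match the shape of the dataframe your script returns):
-- illustrative temp table; columns must match the returned dataframe
CREATE TABLE #MyTable (Col1 int, Col2 nvarchar(100));

INSERT #MyTable (Col1, Col2)
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pandas as pd
OutputDataSet = pd.DataFrame({"Col1": [1, 2], "Col2": ["a", "b"]})
';

SELECT * FROM #MyTable;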
This is a script using python-firebase:
from firebase import firebase
firebase = firebase.FirebaseApplication('https://<my-firebase-id>.firebaseio.com', None)
result = firebase.get('/status/time', None)
print result
Everything works as intended (it displays the word "time"), except that it prints it on 6 different lines, like so:
time
time
time
time
time
time
[Finished in 3.3s]
Why does this occur?
I ran it with Python 3.4 instead of the Python 2.7 I was previously using, which solved my problem.
I recently started using Python so I could interact with the Bloomberg API, and I'm having some trouble storing the data into a Pandas dataframe (or a panel). I can get the output in the command prompt just fine, so that's not an issue.
A very similar question was asked here:
Pandas wrapper for Bloomberg api?
The referenced code in the accepted answer for that question is for the old API, however, and it doesn't work for the new open API. Apparently the user who asked the question was able to easily modify that code to work with the new API, but I'm used to having my hand held in R, and this is my first endeavor with Python.
Could some benevolent user show me how to get this data into pandas? There is an example in the Python API (available here: http://www.openbloomberg.com/open-api/) called SimpleHistoryExample.py that I've been working with and have included below. I believe I'll mostly need to modify the 'while(True)' loop toward the end of the 'main()' function, but everything I've tried so far has had issues.
Thanks in advance, and I hope this can be of help to anyone using Pandas for finance.
# SimpleHistoryExample.py
import blpapi
from optparse import OptionParser


def parseCmdLine():
    parser = OptionParser(description="Retrieve reference data.")
    parser.add_option("-a",
                      "--ip",
                      dest="host",
                      help="server name or IP (default: %default)",
                      metavar="ipAddress",
                      default="localhost")
    parser.add_option("-p",
                      dest="port",
                      type="int",
                      help="server port (default: %default)",
                      metavar="tcpPort",
                      default=8194)

    (options, args) = parser.parse_args()
    return options


def main():
    options = parseCmdLine()

    # Fill SessionOptions
    sessionOptions = blpapi.SessionOptions()
    sessionOptions.setServerHost(options.host)
    sessionOptions.setServerPort(options.port)

    print "Connecting to %s:%s" % (options.host, options.port)

    # Create a Session
    session = blpapi.Session(sessionOptions)

    # Start a Session
    if not session.start():
        print "Failed to start session."
        return

    try:
        # Open service to get historical data from
        if not session.openService("//blp/refdata"):
            print "Failed to open //blp/refdata"
            return

        # Obtain previously opened service
        refDataService = session.getService("//blp/refdata")

        # Create and fill the request for the historical data
        request = refDataService.createRequest("HistoricalDataRequest")
        request.getElement("securities").appendValue("IBM US Equity")
        request.getElement("securities").appendValue("MSFT US Equity")
        request.getElement("fields").appendValue("PX_LAST")
        request.getElement("fields").appendValue("OPEN")
        request.set("periodicityAdjustment", "ACTUAL")
        request.set("periodicitySelection", "DAILY")
        request.set("startDate", "20061227")
        request.set("endDate", "20061231")
        request.set("maxDataPoints", 100)

        print "Sending Request:", request

        # Send the request
        session.sendRequest(request)

        # Process received events
        while(True):
            # We provide timeout to give the chance for Ctrl+C handling:
            ev = session.nextEvent(500)
            for msg in ev:
                print msg
            if ev.eventType() == blpapi.Event.RESPONSE:
                # Response completely received, so we could exit
                break
    finally:
        # Stop the session
        session.stop()

if __name__ == "__main__":
    print "SimpleHistoryExample"
    try:
        main()
    except KeyboardInterrupt:
        print "Ctrl+C pressed. Stopping..."
I use tia (https://github.com/bpsmith/tia/blob/master/examples/datamgr.ipynb).
It already downloads data as a pandas dataframe from Bloomberg.
You can download history for multiple tickers in one single call and even download some Bloomberg reference data (central bank meeting dates, holidays for a certain country, etc.).
And you just install it with pip.
This link is full of examples, but downloading historical data is as easy as:
import pandas as pd
import tia.bbg.datamgr as dm
mgr = dm.BbgDataManager()
sids = mgr['MSFT US EQUITY', 'IBM US EQUITY', 'CSCO US EQUITY']
df = sids.get_historical('PX_LAST', '1/1/2014', '11/12/2014')
and df is a pandas dataframe.
Hope it helps
You can also use pdblp for this (disclaimer: I'm the author). There is a tutorial showing similar functionality available here: https://matthewgilbert.github.io/pdblp/tutorial.html. The functionality could be achieved using something like:
import pdblp
con = pdblp.BCon()
con.start()
con.bdh(['IBM US Equity', 'MSFT US Equity'], ['PX_LAST', 'OPEN'],
        '20061227', '20061231', elms=[("periodicityAdjustment", "ACTUAL")])
I've just published this, which might help:
http://github.com/alex314159/blpapiwrapper
It's basically not very intuitive to unpack the message, but this is what works for me, wrapped up as a small function (the name is illustrative): msg is a single response message, and strData is a list of Bloomberg fields, for instance ['PX_LAST','PX_OPEN']:
import pandas

def message_to_frame(msg, strData):
    # msg is one HistoricalDataResponse message; strData lists the requested fields
    fieldDataArray = msg.getElement('securityData').getElement('fieldData')
    size = fieldDataArray.numValues()
    fieldDataList = [fieldDataArray.getValueAsElement(i) for i in range(0, size)]
    outDates = [x.getElementAsDatetime('date') for x in fieldDataList]
    output = pandas.DataFrame(index=outDates, columns=strData)
    for strD in strData:
        outData = [x.getElementAsFloat(strD) for x in fieldDataList]
        output[strD] = outData
    output.replace('#N/A History', pandas.np.nan, inplace=True)
    output.index = output.index.to_datetime()
    return output
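In the event loop from the question you would then collect one frame per message, along these lines (again a sketch, reusing the helper above):
frames = []
while True:
    ev = session.nextEvent(500)
    for msg in ev:
        # HistoricalDataRequest returns one securityData block per security
        if msg.hasElement('securityData'):
            frames.append(message_to_frame(msg, ['PX_LAST', 'OPEN']))
    if ev.eventType() == blpapi.Event.RESPONSE:
        break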
I've been using pybbg to do this sort of stuff. You can get it here:
https://github.com/bpsmith/pybbg
Import the package and you can then do (this is in the source code, bbg.py file):
banner('ReferenceDataRequest: single security, single field, frame response')
req = ReferenceDataRequest('msft us equity', 'px_last', response_type='frame')
print req.execute().response
The advantages:
Easy to use; minimal boilerplate, and parses indices and dates for you.
It's blocking. Since you mention R, I assume you are using this in some type of an interactive environment, like IPython. So this is what you want, rather than having to mess around with callbacks.
It can also do historical (i.e. price series), intraday and bulk data request (no tick data yet).
Disadvantages:
Only works on Windows, as far as I know (you must have the BB workstation installed and running).
Following on from the above, it depends on the 32-bit OLE API for Python. It only works with the 32-bit version, so you will need 32-bit Python and 32-bit OLE bindings.
There are some bugs. In my experience, when retrieving data for a number of instruments, it tends to hang IPython. Not sure what causes this.
Based on the last point, I would suggest that if you are getting large amounts of data, you retrieve and store them in an Excel workbook (one instrument per sheet), and then import from that. read_excel isn't efficient for doing this; you need to use the ExcelFile object and then iterate over the sheets. Otherwise, read_excel will reopen the file each time you read a sheet, which can take ages.
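For instance, a sketch of the ExcelFile approach (the file name and the one-instrument-per-sheet layout are assumptions):
import pandas as pd

# open the workbook once, then parse each sheet, instead of calling
# read_excel per sheet (which reopens the file every time)
xls = pd.ExcelFile('bloomberg_dump.xlsx')  # hypothetical file
frames = {sheet: xls.parse(sheet) for sheet in xls.sheet_names}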
Tia https://github.com/bpsmith/tia is the best I've found, and I've tried them all... It allows you to do:
import pandas as pd
import datetime
import tia.bbg.datamgr as dm
mgr = dm.BbgDataManager()
sids = mgr['BAC US EQUITY', 'JPM US EQUITY']
df = sids.get_historical(['BEST_PX_BPS_RATIO','BEST_ROE'],
                         datetime.date(2013,1,1),
                         datetime.date(2013,2,1),
                         BEST_FPERIOD_OVERRIDE="1GY",
                         non_trading_day_fill_option="ALL_CALENDAR_DAYS",
                         non_trading_day_fill_method="PREVIOUS_VALUE")
print df
#and you'll probably want to carry on with something like this
df1=df.unstack(level=0).reset_index()
df1.columns = ('ticker','field','date','value')
df1.pivot_table(index=['date','ticker'],values='value',columns='field')
df1.pivot_table(index=['date','field'],values='value',columns='ticker')
The caching is nice too.
Both https://github.com/alex314159/blpapiwrapper and https://github.com/kyuni22/pybbg do the basic job (thanks, guys!) but have trouble with multiple securities/fields, as well as with overrides, which you will inevitably need.
The one thing this https://github.com/kyuni22/pybbg has that tia doesn't have is bds(security, field).
A proper Bloomberg API for Python now exists which does not use COM. It has all of the hooks to allow you to replicate the functionality of the Excel add-in, with the obvious advantage of a proper programming-language endpoint. The request and response objects are fairly poorly documented and quite obtuse. Still, the examples in the API are good, and some playing around using the inspect module and printing of response messages should get you up to speed. Sadly, the standard terminal licence only works on Windows. For *nix you will need a server licence (even more expensive). I have used it quite extensively.
https://www.bloomberg.com/professional/support/api-library/