Get Dividend and Split from blpapi via Python

I would like to get dividend and split data through the Python module for the Bloomberg API (blpapi) for some US companies (I am using a screening to extract these companies). Here is my code using blpapi:
import blpapi

# Connect to the Bloomberg platform
sessionOptions = blpapi.SessionOptions()
sessionOptions.setServerHost(bloomberg_host)
sessionOptions.setServerPort(bloomberg_port)
session = blpapi.Session(sessionOptions)
session.start()
session.openService("//blp/refdata")
# Get the dividend and split
refDataService = session.getService("//blp/refdata")
request = refDataService.createRequest("HistoricalDataRequest")
request.getElement("securities").appendValue("AAPL US Equity")
request.getElement("fields").appendValue("DVD_HIST_ALL")
request.set("periodicityAdjustment", "ACTUAL")
request.set("periodicitySelection", "DAILY")
request.set("startDate", "20140101")
request.set("endDate", "20141231")
request.set("maxDataPoints", 1)
session.sendRequest(request)
But I get the following answer:
HistoricalDataResponse = {
    securityData = {
        security = "AAPL US Equity"
        eidData[] = {
        }
        sequenceNumber = 0
        fieldExceptions[] = {
            fieldExceptions = {
                fieldId = "DVD_HIST_ALL"
                errorInfo = {
                    source = "951::bbdbh5"
                    code = 1
                    category = "BAD_FLD"
                    message = "Not valid historical field"
                    subcategory = "NOT_APPLICABLE_TO_HIST_DATA"
                }
            }
        }
        fieldData[] = {
        }
    }
}
Looking at the documentation (blpapi-developers-guide), I see multiple request possibilities (Reference Data Service, Market Data Service, API Field Information Service), but none of them explains how to get the dividend/split, so I don't know which service and which request to use.
In the terminal, dividends and splits are registered under the tag CACT if you use a screening, and under DVD if you look at a currently loaded stock (in the worst case I can loop over the companies I want in my code).
If someone knows how to do it, you will illuminate my day!
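For what it's worth, the error above hints at the cause: DVD_HIST_ALL is a bulk reference field, not a historical field, so it has to be requested through a ReferenceDataRequest on the same //blp/refdata service rather than a HistoricalDataRequest. A minimal sketch (untested here; it assumes a desktop API session and reuses the same host/port settings as the question):

```python
def get_dividend_history(ticker="AAPL US Equity", host="localhost", port=8194):
    """Request the DVD_HIST_ALL bulk field via a ReferenceDataRequest."""
    import blpapi  # imported lazily so the sketch can be read without the SDK installed

    options = blpapi.SessionOptions()
    options.setServerHost(host)
    options.setServerPort(port)
    session = blpapi.Session(options)
    if not session.start():
        raise RuntimeError("Failed to start Bloomberg session")
    if not session.openService("//blp/refdata"):
        raise RuntimeError("Failed to open //blp/refdata")

    service = session.getService("//blp/refdata")
    request = service.createRequest("ReferenceDataRequest")
    request.getElement("securities").appendValue(ticker)
    request.getElement("fields").appendValue("DVD_HIST_ALL")
    session.sendRequest(request)

    # Drain events until the final RESPONSE arrives; each bulk row carries
    # the declared/ex/record/payable dates, dividend amount and type.
    while True:
        event = session.nextEvent(500)
        for msg in event:
            print(msg)
        if event.eventType() == blpapi.Event.RESPONSE:
            break
    session.stop()
```

Splits show up in the same bulk data (dividend type "Stock Split"); alternatively the EQS-style screening results can be looped over ticker by ticker as you suggest.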


Get live data of nse stocks of all symbols in python/Django

I'm working on a stock prediction project. This is what I want:
to show all the stocks available in Nifty50, Nifty100 and so on, and then let the user select a stock to predict its high and low price for the next day only.
I'm using Django.
What I have done so far:
I'm able to display a list of stocks.
import io

import pandas as pd
import requests
from django.shortcuts import render

def index(request):
    api_key = 'myAPI_Key'
    url50 = 'https://archives.nseindia.com/content/indices/ind_nifty50list.csv'
    url100 = 'https://archives.nseindia.com/content/indices/ind_nifty100list.csv'
    url200 = 'https://archives.nseindia.com/content/indices/ind_nifty200list.csv'
    sfifty = requests.get(url50).content
    shundred = requests.get(url100).content
    stwohundred = requests.get(url200).content
    nifty50 = pd.read_csv(io.StringIO(sfifty.decode('utf-8')))
    nifty100 = pd.read_csv(io.StringIO(shundred.decode('utf-8')))
    nifty200 = pd.read_csv(io.StringIO(stwohundred.decode('utf-8')))
    nifty50 = nifty50['Symbol']
    nifty100 = nifty100['Symbol']
    nifty200 = nifty200['Symbol']
    context = {
        'fifty': nifty50,
        'hundred': nifty100,
        'twohundred': nifty200
    }
    return render(request, 'StockPrediction/index.html', context)
What I want:
I want to get the live data of all stocks (open, high, LTP, change, volume). By live data I mean that it should change as the stock values change.
Please help!
You must use Ajax/jQuery code like the snippet below to periodically fetch data and update values in the DOM:
(function getStocks() {
    $.ajax({
        type: "GET",
        url: "url to your view",
        success: function (data) {
            // here you can take the data from the backend and make changes,
            // e.g. update colors based on the values coming from your view
        }
    }).then(function () { // on completion, restart
        setTimeout(getStocks, 30000); // the function refers to itself
    });
})();
But be careful about making too many requests; choose a proper interval in the line setTimeout(getStocks, 30000).
And in your view you should put the query results into JSON format, something like this:
return JsonResponse({'stocks': stocks})
where stocks must be JSON-serializable.

Querying Mongodb by unix timestamp range

I'm trying to perform a custom query in Python via an Ajax call.
The frontend sends the start time and end time in Unix time, e.g. 1548417600000.
I then convert this to (ISO) time (I think?) in Python, as that is what MongoDB prefers AFAIK.
Document example:
{
"_id" : ObjectId("5c125a185dea1b0252c5352"),
"time" : ISODate("2018-12-13T15:09:42.536Z"),
}
PyMongo doesn't return anything, however, despite my knowing that there should be thousands of results.
#login_required(login_url='/login')
def querytimerange(request):
    print("Expecto Patronum!!")
    if request.method == 'POST':
        querydata = lambda x: request.POST.get(x)
        colname = querydata('colname')
        startdate = querydata('start')
        enddate = querydata('end')
        startint = int(startdate)
        endint = int(enddate)
        dtstart = datetime.utcfromtimestamp(startint / 1000.0)
        iso_start = str(dtstart.isoformat())
        print(iso_start)
        dtend = datetime.utcfromtimestamp(endint / 1000.0)
        iso_end = str(dtend.isoformat())
        print(iso_end)
        collection = db[colname]
        data = collection.find({"time": {"$gt": iso_start, "$lt": iso_end}})
        for a in data:
            print(a)
        return JsonResponse({"ok": "ok"})
    else:
        return JsonResponse({"ok": "no"})
So yeah, I think I'm struggling to get the format of the dates right.
After converting from Unix time, the dates are strings like 2019-01-20T04:00:00 and 2019-01-25T12:00:00.
Not sure if that's correct, but that should be isoformat AFAIK?
The main goal is to use them in an aggregation pipeline.
{
    "$match": {
        "time": {
            "date": {
                "$gt": startdate,
                "$lt": enddate
            }
        }
    }
},
I'm using PyMongo Driver on my Django app.
Thanks!

Pardot Salesforce Python API

I have a requirement to get data out of Pardot Salesforce objects using the Python API.
Can someone please share any available snippets to get data from all the Pardot objects (tables) using Python?
I am working on a Pardot sync solution using pypardot4 (kudos to Matt for https://github.com/mneedham91/PyPardot4), which retrieves data through the API (v4).
Here are some snippets for the Visitors API, but you can use the same approach for almost any Pardot API (except Visit...):
from pypardot.client import PardotAPI

# ... some code here to read API config ...
email = config['pardot_email']
password = config['pardot_password']
user_key = config['pardot_user_key']
client = PardotAPI(email, password, user_key)
client.authenticate()
# plain query
data = client.visitors.query(sort_by='id')
total = data['total_results']
# beware - max 200 results are returned; you need to implement pagination using the offset query parameter
# filtered query
data = client.visitors.query(sort_by='id', id_greater_than=last_id)
Also, I have used some introspection to iterate through the API config data I have set up, like this:
apiList = config['apiList']
# loop through the APIs and call their query method
for api in apiList:
    api_name = api['apiName']
    api_object_name = api['clientObject']
    api_object = getattr(client, api_object_name)
    method = 'query'
    if api.get('noSortBy', False) == False:
        data = getattr(api_object, method)(created_after=latest_sync, sort_by='created_at')
    else:
        # the API is not consistent; the sort_by criterion is not supported by all resources
        data = getattr(api_object, method)(created_after=latest_sync)
And a snippet from the apiList config JSON:
"apiList": [
    {
        "apiName": "campaign",
        "clientObject": "campaigns"
    },
    {
        "apiName": "customField",
        "clientObject": "customfields"
    },
    {
        "apiName": "customRedirect",
        "clientObject": "customredirects"
    },
    {
        "apiName": "emailClick",
        "clientObject": "emailclicks",
        "noSortBy": true
    },
    ...
Notice the noSortBy field and how it's handled in the code.
Hope this helps!

Python: Cannot read returned values from functions

I am working on an Fall Detection System. I wrote the Arduino Code and connected to Firebase. So now I have two variables that get 1 or 0 status, and I created a mobile application to receive an automatic push notification whenever the system detects a fall through Firebase+Pusher. I wrote this Python code with PyCharm and I used the stream function to read live data from Firebase and send automatic notifications. The code was working for the variable "Fall_Detection_Status" and I was able to receive push notifications normally with every fall detection. But I tried to modify the code to read data from another variable "Fall_Detection_Status1" and I want my code now to send the notification if both variables are giving 1's. I came up with this code but it seems that the last if statement is not working because I am not able to receive notifications and also print(response['publishId']) at the end of the if statement is not showing any result.
So what is wrong?
import pyrebase
from pusher_push_notifications import PushNotifications

config = {
    'apiKey': "***********************************",
    'authDomain': "arfduinopushnotification.firebaseapp.com",
    'databaseURL': "https://arduinopushnotification.firebaseio.com",
    'projectId': "arduinopushnotification",
    'storageBucket': "arduinopushnotification.appspot.com",
    'messagingSenderId': "************"
}
firebase = pyrebase.initialize_app(config)
db = firebase.database()

pn_client = PushNotifications(
    instance_id='*****************************',
    secret_key='**************************',
)

value = 0
value1 = 0

def stream_handler(message):
    global value
    print(message)
    if message['data'] is 1:
        value = message['data']
        return value

def stream_handler1(message):
    global value1
    print(message)
    if message['data'] is 1:
        value1 = message['data']
        return value1

if value == 1 & value1 == 1:
    response = pn_client.publish(
        interests=['hello'],
        publish_body={
            'apns': {
                'aps': {
                    'alert': 'Hello!',
                },
            },
            'fcm': {
                'notification': {
                    'title': 'Notification',
                    'body': 'Fall Detected !!',
                },
            },
        },
    )
    print(response['publishId'])

my_stream = db.child("Fall_Detection_Status").stream(stream_handler)
my_stream1 = db.child("Fall_Detection_Status1").stream(stream_handler1)
You are using the wrong operator, &, to combine the results of the two tests. In Python, & is the bitwise AND operator (and it binds more tightly than ==, so value == 1 & value1 == 1 does not group the way you expect); you want the logical and instead.
Secondly, assuming the stream_handler/stream_handler1 callbacks are triggered by your last two statements, those two statements come AFTER the place where you test the values in the if statement, so the test runs exactly once at startup, before any data has arrived. The check needs to run whenever a handler receives new data, not just once at module level.

Facebook Marketing API: retrieving metadata for many Ads via Python

I hope someone has stumbled over the same issue and can guide me towards a simple solution to my problem.
I want to regularly retrieve some data regarding my Ads on Facebook. Basically, I just want to store some metadata in one of my databases for further reporting purposes, so I want to get the AD-ID, AD-name and corresponding ADSET-ID for all my Ads.
I have written this small function in Python:
def get_ad_stats(ad_account):
    """ Pull basic stats for all ads
    Args: 'ad_account' is the Facebook AdAccount object
    Returns: 'fb_ads', a list with basic values
    """
    fb_ads = []
    fb_fields = [
        Ad.Field.id,
        Ad.Field.name,
        Ad.Field.adset_id,
        Ad.Field.created_time,
    ]
    fb_params = {
        'date_preset': 'last_14_days',
    }
    for ad in ad_account.get_ads(fields=fb_fields, params=fb_params):
        fb_ads.append({
            'id': ad[Ad.Field.id],
            'name': ad[Ad.Field.name],
            'adset_id': ad[Ad.Field.adset_id],
            'created_time': datetime.datetime.strptime(ad[Ad.Field.created_time], "%Y-%m-%dT%H:%M:%S+0000"),
        })
    return fb_ads
Similar functions for Campaign and AdSet data work fine, but for Ads I always hit a user request limit: "(#17) User request limit reached".
I have an API access level of "BASIC" and we're talking about 12,000 Ads here.
And, unfortunately, async calls seem to work only for the Insights edge.
Is there a way to avoid the user request limit, e.g. by limiting the API request to only those Ads which have been changed or newly created after a specific date?
OK, by sacrificing the 'created_time' field, I realized I could use the Insights edge for this.
Here is a revised version of the same function, now using async calls and a delay between polls:
def get_ad_stats(ad_account):
    """ Pull basic stats for all ads
    Args: 'ad_account' is the Facebook AdAccount object
    Returns: 'fb_ads', a list with basic values
    """
    fb_ads = []
    fb_params = {
        'date_preset': 'last_14_days',
        'level': 'ad',
    }
    fb_fields = [
        'ad_id',
        'ad_name',
        'adset_id',
    ]
    # note: newer SDK versions renamed this keyword to 'is_async'
    # ('async' became a reserved word in Python 3.7)
    async_job = ad_account.get_insights(fields=fb_fields, params=fb_params, async=True)
    async_job.remote_read()
    while async_job['async_percent_completion'] < 100:
        time.sleep(1)
        async_job.remote_read()
    for ad in async_job.get_result():
        fb_ads.append({
            'id': ad['ad_id'],
            'name': ad['ad_name'],
            'adset_id': ad['adset_id'],
        })
    return fb_ads
