I am relatively new to Python and am working my way through the zipline-trader library. I came across a data structure that I am unfamiliar with and was wondering if you could help me access a certain element out of it.
I ran a backtest on zipline-trader and have the results DataFrame, which has a column "positions" containing the portfolio positions for a given day.
Here is an example of the content of that column:
[{'sid': Equity(1576 [JPM]), 'amount': 39, 'cost_basis': 25.95397, 'last_sale_price': 25.94}, {'sid': Equity(2942 [UNH]), 'amount': 11, 'cost_basis': 86.62428999999999, 'last_sale_price': 86.58}]
The syntax I am unfamiliar with is the "Equity(1576 [JPM])" part - can anybody explain to me what this is? Also, can you please let me know how to access the "[JPM]" part of it? Ultimately, what I am trying to do is access that cell of the DataFrame using .loc and produce the result "{JPM: 1576, UNH: 2942}".
Thank you!
That is (likely to be) an object of type Equity. If the structure you showed us were stored in a variable data, then the object could be fetched using
eq = data[0]['sid']
The text you see when it's printed comes from the __str__ method defined on the Equity class, so it doesn't really tell us anything about how to access its parts. You would have to look up the documentation.
If you are able to access the object in an interactive session then you could run the help command against it and that might contain something useful. Again, if the structure you showed us was stored in a variable data then you could do:
help(data[0]['sid'])
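As a sketch of what the asker wants: zipline's Equity objects generally expose .sid and .symbol attributes (verify against your version's docs). A minimal stand-in class is used below so the snippet runs without zipline installed:

```python
# Stand-in for zipline's Equity; assumes the real class exposes
# .sid and .symbol, which is how recent zipline versions behave.
class Equity:
    def __init__(self, sid, symbol):
        self.sid = sid
        self.symbol = symbol

    def __repr__(self):
        return "Equity(%d [%s])" % (self.sid, self.symbol)

# Shape of one cell of the "positions" column from the question
positions = [
    {'sid': Equity(1576, 'JPM'), 'amount': 39},
    {'sid': Equity(2942, 'UNH'), 'amount': 11},
]

# Build the {symbol: sid} mapping the question asks for
result = {p['sid'].symbol: p['sid'].sid for p in positions}
print(result)  # {'JPM': 1576, 'UNH': 2942}
```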
I created an ontology using Protégé and now want to insert data using RDFLib in Python. Because I have to write the update SPARQL statements as strings, and my data comes in various types including float64, integer, string, and datetime, I had to do some parsing, and not all of it is working. Below is a snippet of my code:
df = df.tail(2000)
for ind in df.index:
    pData = df['Product'][ind]
    lData = df['Lifecycle left in minutes'][ind]
    XLPerc = df['Percent of Lifecycle left'][ind]
    q = """
    INSERT DATA
    {
        myontology:XP myontology:LifecycleData XL.
        myontology:XP myontology:UseCycleData XU.
        #myontology:XP myontology:LifecyclePer XLPerc.
        myontology:XP myontology:Temperature XTemperature.
        #myontology:XP myontology:LifecyclePer XLPerc
    }
    """.replace('XU', str(uData)).replace('XL', str(lData)).replace('XP', str(pData))
    g.update(
        q,
        initNs={
            "myontology": Namespace("https://js......../myontology.owl#")
        }
    )
So I am looping over my DataFrame (df) and inserting the rows into the ontology. Some inserts work and some do not, despite using the same method. I am getting a ParseException as follows:
ParseException: Expected end of text, found 'I' (at char 5), (line:2, col:5)
There is a longer traceback, but this is the last line. I can provide more information if needed.
I do not know what the issue is - can somebody help me?
Thank you.
I have been able to rectify the problem myself.
The replace() calls were not substituting correctly because the placeholder names were too similar.
For instance, myontology:XP myontology:LifecyclePer XLPerc and myontology:XP myontology:LifecycleData XL. both contained XL - once as part of XLPerc and once on its own.
So, while substituting, the XL inside XLPerc was replaced as well, producing a value such as 68.23433Perc instead of the expected 68.23433, along with many other similar errors.
I solved this by making my placeholder names as unique as possible, and now it is evaluating just fine.
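The prefix collision described above can be reproduced in a couple of lines. Using str.format with delimited placeholders (an alternative approach, not from the original post) avoids it entirely, no matter how similar the names are:

```python
template = "myontology:XP myontology:LifecyclePer XLPerc ."

# 'XL' is a prefix of 'XLPerc', so a plain replace corrupts the placeholder:
bad = template.replace('XL', '68.23433')
print(bad)  # myontology:XP myontology:LifecyclePer 68.23433Perc .

# Braced placeholders are unambiguous and cannot collide:
template2 = "myontology:{prod} myontology:LifecyclePer {perc} ."
good = template2.format(prod='P1', perc='68.23433')
print(good)  # myontology:P1 myontology:LifecyclePer 68.23433 .
```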
Thank you everyone for your help.
I am working with Python and Riot APIs, and I have a problem.
When I get match data with a matchId, I get JSON as the result. Then inside participants, I get spell data like this:
"spell1Id": 14,
"spell2Id": 4,
...
But I can't find a list or dictionary of spell ids. It is not even in here.
Am I missing something simple? Does anybody know where to find these spell ids with their numbers?
You are actually missing something simple. It is in the link you pasted - it's just called "key".
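For context (an assumption worth verifying against Riot's current docs): the spell list referred to here is Data Dragon's summoner.json, where each spell's numeric id is stored as a string under "key". A trimmed inline sample shows how to build an id-to-name lookup:

```python
import json

# Trimmed sample mimicking Data Dragon's summoner.json layout
# (the real file's structure and field names are assumptions here).
sample = json.loads("""
{"data": {
  "SummonerFlash":  {"name": "Flash",  "key": "4"},
  "SummonerIgnite": {"name": "Ignite", "key": "14"}
}}
""")

# Map numeric spell id -> spell name
spell_by_id = {int(s["key"]): s["name"] for s in sample["data"].values()}
print(spell_by_id[14])  # Ignite
```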
I'm currently trying to create a small python program using SolrClient to index some files.
My need is that I want to index some file content and then add some attributes to enrich the document.
I used the post command-line tool to index the files. Then I use a Python program to try to enrich the documents, something like this:
doc = solr.get('collection', id)
doc['new_attribute'] = 'value'
solr.index_json('collection',json.dumps([doc]))
solr.commit(openSearcher=True)
The problem is that I have the feeling the file-content index gets lost. If I run a query with a word present in the doc's attributes, I find it.
If I run a query with a word that only appears in the file content, it does not work (it does work when I only index the file with post, without my update attempt).
I'm not sure I understand how to update the doc while keeping the index created by the post command.
I hope I'm clear enough - maybe I misunderstood the way it works...
thanks a lot
If I understand correctly, you want to modify an existing record with an atomic update. You should be able to do something like this without using solr.get:
doc = [{'id': 'value', 'new_attribute': {'set': 'value'}}]
solr.index_json('collection', json.dumps(doc))
Note that doc is already a list here, so it is passed to json.dumps directly rather than wrapped in another list.
See also:
https://cwiki.apache.org/confluence/display/solr/Updating+Parts+of+Documents
Note that atomic updates rebuild the document from its stored fields, so any field that is not stored (and has no docValues) is lost on update - which would explain losing the file content here.
It has worked for me in this way; it may be useful for someone:
import json
from SolrClient import SolrClient

solrConect = SolrClient("http://xx.xx.xxx.xxx:8983/solr/")
doc = [{'id': 'my_id', 'count_related_like': {'set': 10}}]
solrConect.index_json("my_collection", json.dumps(doc))
solrConect.commit("my_collection", softCommit=True)
Trying with curl did not change anything, so I took a different approach and now it works. Instead of adding the file with the post command and trying to modify it afterwards, I read the file into a string and index it in a "content" field. That means every document is added in one shot.
The content field is defined as not stored, so I just index it.
It works fine and suits my needs. It's also simpler, since it removes many attributes set by the post command that I don't need.
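A minimal sketch of that one-shot approach, assuming field names like 'content' and the index_json call from the snippets above (the build_doc helper itself is hypothetical):

```python
import json

def build_doc(doc_id, path, extra):
    """Read a file into a 'content' field and merge in enrichment attributes."""
    with open(path, encoding='utf-8') as f:
        content = f.read()
    doc = {'id': doc_id, 'content': content}
    doc.update(extra)
    return doc

# The documents would then be indexed in one shot, e.g.:
# solr.index_json('collection',
#                 json.dumps([build_doc('doc1', 'file1.txt',
#                                       {'new_attribute': 'value'})]))
```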
If I find some time, I'll try again the partial update and update the post.
Thanks
Rémi
I'm having trouble with a JSON object passed to me by one of our product APIs. I'm using Python 2.7 to create a function that lets our customer service team see details about jobs posted on our website. The JSON payload is an array of objects, each containing an array ("bids") and an object ("job"). I need to read the array associated with one of the objects inside the main object; however, they're not nested - the array of applicants is not nested inside the Job object - so my usual response[0][0]['applicantName'] won't work here.
The data below has been updated to represent what the API is actually giving me. My apologies - before, I had edited it to protect the data. I've still done the same, but this is the actual result.
What I'd like to do is let a user input the jobId and give them a list of all the applicants related to that jobId. Since jobIds are sometimes non-sequential, I can't use an index number; it must be the jobId.
Can someone help?
Heres the JSON structure I get:
[{u'bids': [{u'applicantId': 221,
u'comment': 'I have applied to the job'},
{u'applicantId': 221,
u'comment': 'I have applied to the job'}],
u'job': {u'jobId': 1}},
{u'bids': [{u'applicantId': 221,
u'comment': 'I have applied to the job'},
{u'applicantId': 221,
u'comment': 'I have applied to the job'}],
u'job': {u'jobId': 1}}]
As I said, I'm working in python 2.7 using the "requests" library to call the API and .json() to read it.
Thanks in advance!
That content doesn't seem to be valid JSON, which means you won't be able to parse it with the usual json.loads function.
You may also have trouble with ast.literal_eval if it isn't a valid Python expression.
I'm not sure this is a good idea... but assuming you're getting that content as a string, I'd try to write a small parser for that type of server object, or look for an external library able to parse them.
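A hedged aside: if requests' response.json() already returned this structure (the u'...' prefixes look like Python 2 unicode reprs of an already-parsed object rather than raw text), no extra parsing is needed, and the lookup the asker describes is a comprehension. Sample data adapted from the question:

```python
# Already-parsed shape of the API response from the question
response = [
    {'bids': [{'applicantId': 221, 'comment': 'I have applied to the job'}],
     'job': {'jobId': 1}},
    {'bids': [{'applicantId': 305, 'comment': 'Interested'}],
     'job': {'jobId': 2}},
]

def bids_for_job(data, job_id):
    """Collect every bid whose enclosing entry matches the given jobId."""
    return [bid for entry in data
            if entry['job']['jobId'] == job_id
            for bid in entry['bids']]

print(bids_for_job(response, 1))
```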
Newbie question. I'm expecting to be able to do something like the following:
import boto
from boto.iam.connection import IAMConnection

cfn = IAMConnection()
userList = cfn.get_all_users()
for user in userList:
    print user.user_id
The connection works fine, but the last line returns the error "'unicode' object has no attribute 'user_id'".
I did a type(userList) and it reported the type as <class 'boto.jsonresponse.Element'>, which doesn't appear (?) to be documented. Using normal JSON parsing doesn't appear to work either.
From another source, it looks as if the intent is the results of an operation like this are supposed to be "pythonized."
Anyway, I'm pretty new to boto, so I assume there's some simple way to do this that I just haven't stumbled across.
Thx!
For some of the older AWS services, boto takes the XML responses from the service and turns them into nice Python objects. For others, it takes the XML response and transliterates it directly into native Python data structures. The boto.iam module is of the latter form. So, the actual data returned by get_all_users() looks like this:
{u'list_users_response':
    {u'response_metadata': {u'request_id': u'8d694cbd-93ec-11e3-8276-993b3edf6dba'},
     u'list_users_result': {u'users':
        [{u'path': u'/', u'create_date': u'2014-01-21T17:19:45Z',
          u'user_id': u'<userid>', u'arn': u'<arn>', u'user_name': u'foo'},
         {...next user...}
        ]
     }
    }
}
So, all of the data you want is there; it's just a bit difficult to find. The boto.jsonresponse.Element object returned does give you a little help. You can actually do something like this:
data = cfn.get_all_users()
for user in data.user:
print(user['user_name'])
but for the most part you just have to dig around in the data returned to find what you are looking for.
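To make the digging concrete, here is a sketch using plain dicts in place of boto.jsonresponse.Element (which behaves much like nested dicts and lists for read access; keys trimmed from the structure shown above):

```python
# Trimmed stand-in for the get_all_users() response structure
data = {'list_users_response':
            {'list_users_result':
                {'users': [{'user_name': 'foo', 'user_id': 'AID1'},
                           {'user_name': 'bar', 'user_id': 'AID2'}]}}}

# Walk down the nesting to the list of user records
users = data['list_users_response']['list_users_result']['users']
names = [u['user_name'] for u in users]
print(names)  # ['foo', 'bar']
```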