I am trying to execute the following code to push data to Salesforce using the simple_salesforce Python library:
from simple_salesforce import Salesforce
staging_df = hive.execute("select * from hdmni")
staging_df = staging_df.toPandas()
# # staging_df['birth_date']= staging_df['birth_date'].dt.date
staging_df['birth_date'] = staging_df['birth_date'].astype(str)
staging_df['encounter_start_date'] = staging_df['encounter_start_date'].astype(str)
staging_df['encounter_end_date'] = staging_df['encounter_end_date'].astype(str)
bulk_data = []
for row in staging_df.itertuples():
    d = row._asdict()
    del d['Index']
    bulk_data.append(d)
sf = Salesforce(password='', username='', security_token='')
sf.bulk.Delivery_Detail__c.insert(bulk_data)
I am getting this error while trying to send the dictionary to Salesforce:
SalesforceMalformedRequest: Malformed request
https://subhotutorial-dev-ed.my.salesforce.com/services/async/38.0/job/7500o00000HtWP6AAN/batch/7510o00000Q15TnAAJ/result.
Response content: {'exceptionCode': 'InvalidBatch',
'exceptionMessage': 'Records not processed'}
There's something about your request that is not correct. While I don't know your use case, by reading this line you can tell that you are attempting to insert into a custom object/entity in Salesforce:
sf.bulk.Delivery_Detail__c.insert(bulk_data)
The reason you can tell is because of the __c suffix, which gets appended onto custom objects and fields (that's two underscores, by the way).
Since you're inserting into a custom object, your fields would have to be custom, too. And note, you've not appended that suffix onto them.
Note: Every custom object/entity in Salesforce does come with a few standard fields to support system features, such as the record key (Id), the record name (Name), and audit fields (CreatedById, CreatedDate, etc.). These wouldn't have a suffix. But none of the fields you reference are any of these standard system fields, so the __c suffix would be expected.
I suspect that what Salesforce is expecting in your insert operation are field names like this:
Birth_Date__c
Encounter_Start_Date__c
Encounter_End_Date__c
These are referred to as the API name for both objects and fields, and anytime code interacts with them (whether via integration, or on code that executes directly on the Salesforce platform) you need to make certain you're using this API name.
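For example, assuming those are the actual API names in your org (a sketch only, so verify them first), you could rename the DataFrame columns before building bulk_data:

staging_df = staging_df.rename(columns={
    'birth_date': 'Birth_Date__c',
    'encounter_start_date': 'Encounter_Start_Date__c',
    'encounter_end_date': 'Encounter_End_Date__c',
})
# itertuples()/_asdict() will now produce the Salesforce API names as the dict keys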
Incidentally, you can retrieve this API name in a number of ways. Probably the easiest is to log into your Salesforce org; under Setup > Object Manager > [some object] > Fields and Relationships you can view the details of each field, including the API name.
You can also use the SObject describe APIs, either in native Apex code, or via integration over the REST or SOAP APIs. For the same object as my example above, the describe REST endpoint is https://[domain]/services/data/v47.0/sobjects/Expense__c/describe.
Looking at the docs for the simple-salesforce python library you're using, they've surfaced the describe API. You can find some info under Other Options. You would invoke it as sf.SObject.describe where "SObject" is the actual object you want to find the information about. For instance, in your case you would use:
sf.Delivery_Detail__c.describe()
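For instance, a minimal sketch (assuming your credentials and object name are correct) that lists the field API names from that describe call:

desc = sf.Delivery_Detail__c.describe()
field_names = [field['name'] for field in desc['fields']]
print(field_names)  # compare these against the keys you send in bulk_data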
As a good first troubleshooting step when interacting with a Salesforce object, I'd always recommend double-checking that you're referencing the API names correctly. I can't tell you how many times I've bumped into little things like an added or missing underscore, especially with the __c suffix.
I'm trying to use Tweepy (version 4.4.0) to get a user's description but it's seemingly not working:
u = api.get_user(username='XXXX', user_fields=['description'])
but the output of this is simply:
Response(data=<User id=123 name=XXX username=XXX>, includes={}, errors=[], meta={})
So it's getting the name and id fine, but it's returning nothing for any of the user fields.
Note I've also tried with user_auth=True, but I get 'Unauthorized: 401'. From what I've seen around, I don't think user authentication is the problem here... but maybe it is?
Any advice would be great!
api seems to be an instance of tweepy.Client here.
From the relevant FAQ section in Tweepy's documentation:
Why am I not getting expansions or fields data with API v2 using Client?
If you are simply printing the objects and looking at that output, the string representations of API v2 models/objects only include the default attributes that are guaranteed to exist.
The objects themselves still include the relevant data, which you can access as attributes or by key, like a dictionary.
The user object being returned in the response should have a description field with the user's description.
You can access the description with: u.data.description
Keep in mind that the description may sometimes be blank; try with created_at to be sure it works.
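For example, a minimal sketch (the bearer token and username are placeholders) that requests the extra fields and then reads them off the returned object:

import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")
u = client.get_user(username="XXXX", user_fields=["description", "created_at"])
# The repr hides these, but the data is there:
print(u.data.description)
print(u.data.created_at)
print(u.data["description"])  # dictionary-style access also works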
Sample code for extracting the description of any Twitter user is as follows:
import os
import tweepy

# App-only bearer-token auth against the v1.1 API
auth = tweepy.OAuth2BearerHandler(os.environ.get("TWITTER_API_KEY"))
api = tweepy.API(auth)
user = api.get_user(screen_name="TechWiser", include_entities=False)
description = user._json['description']
user._json contains many other key/value pairs that you can explore
I am completely new to this module and Python in general, but I wanted to start some sort of fun project in my spare time.
I have a specific question concerning the GooglePlaces module for Python: how do I retrieve the reviews of a place knowing only its Place ID?
So far I have done...
from googleplaces import GooglePlaces, types, lang
google_places = GooglePlaces('API KEY')
query_result = google_places.get_place(place_id="ChIJB8wSOI11nkcRI3C2IODoBU0")
print(query_result) #<Place name="Starbucks", lat=48.14308250000001, lng=11.5782337>
print(query_result.get_details()) # Prints None
print(query_result.rating) # Prints the rating of 4.3
I am completely lost here, because I cannot get access to the object's details. Maybe I am missing something, but I would be very thankful for any guidance.
If you are completely lost, just read the docs :)
Example from https://github.com/slimkrazy/python-google-places:
for place in query_result.places:
    # Returned places from a query are place summaries.
    # The following method has to make a further API call.
    place.get_details()
    # Referencing any of the attributes below, prior to making a call to
    # get_details(), will raise a googleplaces.GooglePlacesAttributeError.
    print place.details  # A dict matching the JSON response from Google.
See the problem with your code now?
print(query_result.get_details()) # Prints None
should be
query_result.get_details() # Fetch details
print(query_result.details) # Prints details dict
Regarding the results, the Google docs state:
reviews[]: a JSON array of up to five reviews. If a language parameter was specified in the Place Details request, the Places Service will bias the results to prefer reviews written in that language. Each review consists of several components.
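So once get_details() has been called, something like this sketch (the field names assume the standard Place Details JSON) should get you at the reviews:

query_result.get_details()
for review in query_result.details.get('reviews', []):
    print(review.get('author_name'), review.get('rating'))
    print(review.get('text'))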
I am trying to access the worklogs in Python by using the jira Python library. I am doing the following:
issues = jira.search_issues("key=MYTICKET-1")
print(issues[0].fields.worklogs)
issue = jira.search_issues("MYTICKET-1")
print(issue.fields.worklogs)
as described in the documentation, chapter 2.1.4. However, I get the following error (for both cases):
AttributeError: type object 'PropertyHolder' has no attribute 'worklogs'
Is there something I am doing wrong? Is the documentation outdated? How do I access worklogs (or other fields, like comments, etc.)? And what is a PropertyHolder? How do I access it (it's not described in the documentation!)?
This is because jira.JIRA.search_issues doesn't seem to fetch all "builtin" fields, like worklog, by default (the documentation only uses the vague wording "fields - [...] Default is to include all fields", but "all" out of what?).
You either have to use jira.JIRA.issue:
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
or explicitly list fields which you want to fetch in jira.JIRA.search_issues:
client = jira.JIRA(...)
issue = client.search_issues("key=MYTICKET-1", fields=[..., 'worklog'])[0]
Also be aware that this way you will get at most 20 worklog items attached to your JIRA issue instance. If you need all of them you should use jira.JIRA.worklogs:
client = jira.JIRA(...)
issue = client.issue("MYTICKET-1")
worklog = issue.fields.worklog
all_worklogs = client.worklogs(issue) if worklog.total > 20 else worklog.worklogs
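From there, each worklog entry exposes its fields as attributes; a small sketch (the attribute names assume the standard Jira worklog schema):

for wl in all_worklogs:
    print(wl.author.displayName, wl.timeSpent, wl.started)
    print(getattr(wl, 'comment', '(no comment)'))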
This question is similar to yours and someone has posted a workaround.
There is also a similar question on GitHub in relation to attachments (not worklogs). The last answer in the comments has a workaround that might assist.
I have a database with a bunch of regular documents that look something like this (example from wiki):
{
"_id":"some_doc_id",
"_rev":"D1C946B7",
"Subject":"I like Plankton",
"Author":"Rusty",
"PostedDate":"2006-08-15T17:30:12-04:00",
"Tags":["plankton", "baseball", "decisions"],
"Body":"I decided today that I don't like baseball. I like plankton."
}
I'm working in Python with couchdb-python and I want to know if it's possible to add a field to each document. For example, if I wanted to have a "Location" field or something like that.
Thanks!
Regarding IDs
Every document in CouchDB has an id, whether you set it or not. Once the document is stored you can access it through the doc._id field.
If you want to set your own ids you'll have to assign the id value to doc._id. If you don't set it, then CouchDB will assign a uuid.
If you want to update a document, then you need to make sure you have the same id and a valid revision. If, say, you are working from a blog post and the user adds the Location, then the URL of the post may be a good id to use. You'd be able to access the document instantly in this case.
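As a small sketch with couchdb-python (the URL used as the id here is purely illustrative), setting your own id is just a matter of including _id before saving:

doc = {
    '_id': 'http://example.com/posts/i-like-plankton',  # hypothetical URL used as the id
    'Subject': 'I like Plankton',
    'Author': 'Rusty',
}
db.save(doc)  # stored under the id you chose; no uuid is generated
same_doc = db.get('http://example.com/posts/i-like-plankton')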
So what's a revision
In your code snippet above you have the doc._rev element. This is the identifier of the revision. If you save a document with an id that already exists, CouchDB requires you to prove that you have the current revision and are not trying to overwrite someone else's changes.
So how do I update a document
If you have the id of your document, you can just access each document by using the db.get(id) function. You can then update the document like this:
doc = db.get(id)
doc['Location'] = "On a couch"
db.save(doc)
I have an example where I store weather forecast data. I update the forecasts approximately every 2 hours. A separate process enriches the same documents with data I get from a different provider, based on characteristics of tweets from that day.
This looks something like this.
doc = db.get(id)
doc_with_loc = GetLocationInformationFromOtherProvider(doc) # takes about 40 seconds.
doc_with_loc["_rev"] = doc["_rev"]
db.save(doc_with_loc) # This will fail if the weather update has also updated the document.
If you have concurrent processes, the _rev will become invalid, so you need a failsafe, e.g. something like this could do:
from couchdb.http import ResourceConflict

doc = db.get(id)
doc_with_loc = GetLocationInformationFromAltProvider(doc)
update_outstanding = True
while update_outstanding:
    doc = db.get(id)  # re-retrieve this to get the latest _rev
    doc_with_loc['_rev'] = doc['_rev']
    try:
        db.save(doc_with_loc)
        update_outstanding = False
    except ResourceConflict:
        pass  # another process saved first; loop and retry with the new _rev
So how do I get the Ids?
One option suggested above is that you actively set the id, so you can retrieve it. I.e. if a user sets a given location that is attached to a URL, use the URL. But you may not know which document you want to update, or you may want a process that finds all the documents that don't have a location and assigns one.
You'll most likely be using a view for this. Views have a mapper and a reducer. You'll use the first one, forget about the last one. A view with a mapper does the following:
It returns a simplified/transformed way of looking at your data. You can return multiple values per document or skip some. It gives the data you emit a key, and if you use the include_docs option it will give you the document (with _id and _rev alongside).
The simplest view is the default view db.view('_all_docs'); this will return all documents, and you may not want to update all of them. Views, for example, will be stored as documents as well when you define them.
The next simple way is to have a view that only returns items that are of the type of the document. I tend to have a _type="article" field in my database. Think of this as marking that a document belongs to a certain table, if you had stored them in a relational database.
Finally you can filter for elements that don't have a location yet, so you'd have a view where you can iterate over all those docs that still need a location and handle this in a separate process. The best documentation on writing views can be found here.
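As a sketch, a throwaway map function run via db.query (the _type and Location field names are just the conventions from above) that finds docs still missing a location and fills it in:

map_fun = '''function(doc) {
  if (doc._type === "article" && !doc.Location) {
    emit(doc._id, null);
  }
}'''
for row in db.query(map_fun, include_docs=True):
    doc = row.doc
    doc['Location'] = 'On a couch'
    db.save(doc)

For anything beyond a one-off you'd store that map function in a design document and query it through db.view() instead.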
I have a simple Firebase that I mostly interact with via JavaScript, which works really well. However, I also have a Python program that needs to get data from existing children and put/update data on existing children. I tried python-firebasin, which would do what I want, but it is unreliable (hangs, fails, etc.).
So I'm looking at the python-firebase REST wrapper. This seems efficient, and works well. However, every time I try to post() data, I get not just the data I'm posting, but some kind of unique string paired with it, all inserted as a child.
For example, via JavaScript, I might say:
db = new Firebase('https://myfirebase.firebaseio.com/testval/');
db.transaction(function(current) { return 1; });
This would then give me a Firebase that looked like:
|---testval: 1
But when I try to do something similar with the Python Firebase REST wrapper, such as:
db = firebase.FirebaseApplication('https://myfirebase.firebaseio.com/')
db.post('/testval/',1)
My Firebase looks something like this:
|---testval:
|---JI4BiBbICSEAnM9mDXf: 1
In other words, it inserts a new child, gives it a new string, and then appends the data. Is there any way to insert/modify data on my Firebase using the REST wrapper that would do it cleanly, like I'm doing with JavaScript? Without adding children, without adding these unique strings?
Try this instead:
db.put('/', 'testval', 1)
db.post() is the equivalent of .push() in the JavaScript API, so it creates a unique ID for you. db.put() is equivalent to .set() and will just set the data, which appears to be what you want.
Note that there is no equivalent for transactions in the REST API, but your example was just using a transaction to do a .set() so hopefully you don't actually need them.
Try this:
db = firebase.FirebaseApplication('https://myfirebase.firebaseio.com/')
db.put('', 'testval', 1)
put takes three arguments: the first is the URL or path, the second is the key (snapshot) name, and the third is the data (JSON).
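Putting the two answers together, a small sketch of the difference (the URL is the one from the question, and the comments echo the tree layouts above):

from firebase import firebase

db = firebase.FirebaseApplication('https://myfirebase.firebaseio.com/')

# put() overwrites the value at the given path/key, like set() in the JavaScript SDK
db.put('/', 'testval', 1)   # |---testval: 1

# post() appends under an auto-generated unique child key, like push() in the JavaScript SDK
db.post('/testval/', 2)     # |---testval: |---<generated-id>: 2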