I'm trying to upsert a record via the Salesforce Beatbox Python client. The upsert operation seems to work fine, but I can't quite work out how to specify an external ID as a foreign key:
Attempting to upsert with:
consolidatedToInsert = []
for id, ce in ConsolidatedEbills.items():
    consolidatedToInsert.append(
        {
            'type': 'consolidated_ebill__c',
            'Account__r': {'type': 'Account', 'ETL_Natural_Key__c': ce['CLASS_REFERENCE']},
            'ETL_Natural_Key__c': ce['ISSUE_UNIQUE_ID']
        }
    )
print consolidatedToInsert[0]
pc.login('USERNAME', 'TOTALLYREALPASSWORD')
ret = pc.upsert('ETL_Natural_Key__c', consolidatedToInsert[0])
print ret
gives the error:
'The external foreign key reference does not reference a valid entity: Account__r'
[{'isCreated': False, 'errors': [{'fields': [], 'message': 'The external foreign key reference does not reference a valid entity: Account__r', 'statusCode': 'INVALID_FIELD'}], 'id': '', 'success': False, 'created': False}]
The SOAP examples and the specificity of the error text seem to indicate that it's possible, but I can find little in the documentation about inserting with external IDs.
On closer look I'm not sure this is possible at all: a totally mangled key for Account__r seems to pass silently, as if it's not even being targeted for XML translation. I'd love to be wrong, though.
A quick change to pythonclient.py 422:0:
for k, v in field_dict.items():
    if v is None:
        fieldsToNull.append(k)
        field_dict[k] = []
    if k.endswith('__r') and isinstance(v, dict):
        pass
    elif hasattr(v, '__iter__'):
        if len(v) == 0:
            fieldsToNull.append(k)
        else:
            field_dict[k] = ";".join(v)
and another to __beatbox.py 375:0
for fn in sObjects.keys():
    if (fn != 'type'):
        if (isinstance(sObjects[fn], dict)):
            self.writeSObjects(s, sObjects[fn], fn)
        else:
            s.writeStringElement(_sobjectNs, fn, sObjects[fn])
and it works like some dark magic.
Currently Beatbox doesn't support serializing nested dictionaries like this, which is needed for the external ID resolution you're trying to do. (If you look at the generated request, you can see that the nested dictionary is just serialized as a string.)
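You can see the symptom without a round trip to Salesforce: once the writer falls through to the string-element path, the nested dict is simply coerced to its string form (an illustration of the effect, not Beatbox code; the key value is made up):

# What lands inside the XML element: the dict's repr as one literal string,
# not a nested sObject with its own type and external ID fields.
account_ref = {'type': 'Account', 'ETL_Natural_Key__c': 'ABC123'}
print(str(account_ref))
# {'type': 'Account', 'ETL_Natural_Key__c': 'ABC123'}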
Changes on our LDAP server have altered the case of the attributes returned from searches. For example, "mailroutingaddress" is now "mailRoutingAddress". The searches themselves are case insensitive, but the Python code processing the returned LDAP object is attempting to reference attributes in all lowercase and failing.
Is there a simple way to specify that the LDAP module should lowercase all returned attributes? Or, is there a straightforward way to change them to lowercase as soon as the results are returned?
We're trying to avoid extensive rewrites to handle this change in our LDAP software.
This code returns a "Key not found" error:
timestamp_filter = ldap.filter.filter_format("a filter that works as intended")
timestamp_result = search_ldap(timestamp_filter, ['modifytimestamp'])
if not timestamp_result:
    log_it('WARN', 'No timestamp returned for group "%s"' % group_cn)
    return False
else:
    mod = timestamp_result[0][1]['modifytimestamp'][0].decode('utf-8').split('Z')[0]
When the last line was changed to this, it worked:
mod = timestamp_result[0][1]['modifyTimestamp'][0].decode('utf-8').split('Z')[0]
I was hoping there was something I could do when the ldap object was first bound:
def bind_to_ldap(dn=auth_dn, pwd=auth_pwd):
    # create connection to ldap and pass bind object back to caller
    try:
        ldc = ldap.initialize(uri, bytes_mode=False)
        bind = ldc.simple_bind_s(dn, pwd)
    except ldap.LDAPError as lerr:
        log_it('EXCEPTION', "Exception raised when attempting to bind to LDAP server %s with dn %s" % (uri, dn))
        graceful_exit("%s" % lerr)
    return ldc
Or, I could iterate over all of the attributes passed back by the search function.
s_filter = ldap.filter.filter_format("a filter that works as intended")
s_results = search_ldap(s_filter)
groups = {}
# okay, lots of processing to do here....
for group in s_results:
    # etc. etc. etc.
    pass
You can change a dict's keys to lowercase pretty easily with a dict comprehension:
>>> timestamp_result = {'modifyTimestamp': 'foo'}
>>> timestamp_result = {k.lower(): v for k, v in timestamp_result.items()}
>>> timestamp_result
{'modifytimestamp': 'foo'}
A more robust solution would be to have the search_ldap function normalize the server output -- that would minimize the amount of code that you'd need to update when the server response changes.
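As a sketch of that approach (assuming search_ldap wraps a synchronous search_s call; ldc and base_dn stand in for your connection and search base):

import ldap

def search_ldap(s_filter, attrlist=None):
    # ldc and base_dn come from bind_to_ldap() and your config (placeholders here).
    results = ldc.search_s(base_dn, ldap.SCOPE_SUBTREE, s_filter, attrlist)
    # Lowercase every attribute name at the boundary so the rest of the code
    # can keep using all-lowercase keys, whatever case the server returns.
    return [(dn, {k.lower(): v for k, v in attrs.items()})
            for dn, attrs in results]

python-ldap also ships a case-insensitive dict, ldap.cidict.cidict, which may be another option if you'd rather not rewrite the keys at all.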
I'm trying to validate the headers of a Flask request and it's failing. I'm using the code below to reproduce the problem, and I can see that it fails to validate the headers properly even when I omit some of the mandatory headers.
The code below is expected to fail, but it's passing.
import validictory
from werkzeug.datastructures import EnvironHeaders
obj = EnvironHeaders(environ={})
validictory.validate(obj,{'type': 'object', 'properties': {'test':{'required': True, 'type': 'any'}}})
If I convert the EnvironHeaders to a dict, then validation happens properly.
import validictory
from werkzeug.datastructures import EnvironHeaders
obj = EnvironHeaders(environ={})
validictory.validate(dict(obj),{'type': 'object', 'properties': {'test':{'required': True, 'type': 'any'}}})
This properly raises the error below during validation. Any idea why the validation went wrong in the first case?
validictory.validator.RequiredFieldValidationError: Required field 'test' is missing
I was able to find the reason for this issue by going through the source code of validictory.
It passes the type validation because EnvironHeaders has both of the attributes 'keys' and 'items'.
def validate_type_object(self, val):
    return isinstance(val, Mapping) or (hasattr(val, 'keys') and hasattr(val, 'items'))
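You can verify this quickly (assuming werkzeug is installed; the checks mirror the ones above):

from werkzeug.datastructures import EnvironHeaders

obj = EnvironHeaders(environ={})
print(hasattr(obj, 'keys'), hasattr(obj, 'items'))  # True True -> passes the type check
print(isinstance(obj, dict))                        # False -> property validation is skipped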
Property validation only happens for dict types, and the validation passes because the code doesn't raise any error when the input type is not a dictionary.
def validate_properties(self, x, fieldname, schema, path, properties=None):
    ''' Validates properties of a JSON object by processing the object's schema recursively '''
    value = x.get(fieldname)
    if value is not None:
        if isinstance(value, dict):
            if isinstance(properties, dict):
                if self.disallow_unknown_properties or self.remove_unknown_properties:
                    self._validate_unknown_properties(properties, value, fieldname,
                                                      schema.get('patternProperties'))
                for property in properties:
                    self.__validate(property, value, properties.get(property),
                                    path + '.' + property)
            else:
                raise SchemaError("Properties definition of field '{0}' is not an object"
                                  .format(fieldname))
Note: validictory is no longer maintained, so I'm not going to raise an issue in its Git repo. I'll try the jsonschema package as suggested.
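For what it's worth, a rough jsonschema equivalent of the check above might look like this (a sketch; note that jsonschema expects 'required' as a list on the object, and the EnvironHeaders still needs converting to a plain dict first):

import jsonschema
from werkzeug.datastructures import EnvironHeaders

obj = EnvironHeaders(environ={})
schema = {'type': 'object', 'required': ['test']}
# Raises jsonschema.ValidationError: 'test' is a required property
jsonschema.validate(dict(obj), schema)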
I've been trying to update custom_fields per the latest version of Asana's API, very similarly to this post but with a later version of the API (e.g. I need to use the update_task method). I can update fields at the top level of a task, but the custom_fields object is proving much more challenging to update. For example, I have many custom fields and am trying to update a test field called "Update", just setting its text_value to "Hello"...
import asana

asanaPAT = 'myToken'
client = asana.Client.access_token(asanaPAT)
result = client.tasks.get_tasks({'project': 'myProjectID'}, opt_pretty=True)  # , iterator_type=None)
for index, result in enumerate(result):
    complete_task = client.tasks.find_by_id(result["gid"])
    task_name = complete_task['name']
    task_id = complete_task['gid']
    custom_fields = complete_task['custom_fields']
    # I can easily update top-level fields like 'name' and 'completed'...
    # result = client.tasks.update_task(task_id, {'name': task_name + '(new)'}, opt_pretty=True)
    # result = client.tasks.update_task(task_id, {'completed': False}, opt_pretty=True)
    for custom_fieldsRow in custom_fields:
        if custom_fieldsRow['name'] == "Updated":
            # custom_fieldsRow['text_value'] = 'Hello'
            pass
    # finished loop through individual custom fields, so update on the level of the task...
    # client.tasks.update_task(task_id, {custom_fields}, opt_pretty=True)
    manualCustomField = {'data': {'custom_fields': {'gid': 'theGIDOfCustomField', 'text_value': 'Hello'}}}
    resultFromUpdate = client.tasks.update_task(task_id, manualCustomField, opt_pretty=True)
As you can see above, I started off trying to loop through custom_fields and change the specific field before updating afterwards. Now I'm even trying to set the custom_field data manually (last line of my code), but it does nothing: no error, yet it doesn't change my task. I'm completely out of ideas for troubleshooting this, so I'd appreciate any feedback on where I'm going wrong.
Apologies, I figured out my mistake, I just needed my penultimate line to read...
manualCustomField = { 'custom_fields': {'theGIDOfCustomField':'Hello'} }
Kind of a strange way to do that in the API (not specifically stating which field you'll update or which ID you're using) if you ask me, but now it finally works.
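So, putting it together with the loop above, the working call is just (same placeholder task_id and field GID as before):

# The custom_fields payload maps the field's GID straight to its new value.
manualCustomField = {'custom_fields': {'theGIDOfCustomField': 'Hello'}}
resultFromUpdate = client.tasks.update_task(task_id, manualCustomField, opt_pretty=True)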
I'm using the Atlassian REST API and creating issues through it. For accessing the API I'm using the JIRA API wrapper, see: https://pypi.org/project/jira/.
In my application I'm uploading a bunch of tickets. For performance reasons I'm using concurrent.futures. The tickets are uploaded via the following code:
fields = [{'project': 'Project', 'summary': 'New summary', 'issuetype': {'name': 'Task'}}, ....]
with concurrent.futures.ThreadPoolExecutor() as executor:
    data = executor.map(jira.create_issue, fields)
My problem is that I'm not really sure how to find out when a ticket couldn't be uploaded for some reason. Every time a ticket can't be uploaded, the JIRA wrapper raises a JIRAError exception. So I somehow have to count whenever I get a JIRAError, but unfortunately I'm not sure how to count the errors.
I know that the result can be retrieved via:
counter = 0
for i in data:
    counter = counter + 1
    print(i)
But because data contains the JIRAErrors, the above code fails. That's why I tried the following:
try:
    for i in data:
        print(i)
except:
    print(fields[counter])
But when the exception occurs, the code just continues. I also tried solutions with a while loop, but they didn't produce the right result either.
Is there a way to get the tickets, which couldn't be uploaded?
I haven't used jira-python myself. I wrote my own python client that I've been using for years. I'll have to give this a try myself.
According to the documentation, this is how to create issues in bulk:
https://jira.readthedocs.io/en/latest/examples.html#issues
issue_list = [
    {
        'project': {'id': 123},
        'summary': 'First issue of many',
        'description': 'Look into this one',
        'issuetype': {'name': 'Bug'},
    },
    {
        'project': {'key': 'FOO'},
        'summary': 'Second issue',
        'description': 'Another one',
        'issuetype': {'name': 'Bug'},
    },
    {
        'project': {'name': 'Bar'},
        'summary': 'Last issue',
        'description': 'Final issue of batch.',
        'issuetype': {'name': 'Bug'},
    }]
issues = jira.create_issues(field_list=issue_list)
Additionally, there is a note about the failures you're interested in:
Using bulk create will not throw an exception for a failed issue creation. It will return a list of dicts that each contain a possible error signature if that issue had invalid fields. Successfully created issues will contain the issue object as a value of the issue key.
So to see the ones that failed, you would iterate through issues and look for error signatures.
As far as performance goes, you could look at jira.create_issues(data, prefetch=False):
https://jira.readthedocs.io/en/latest/api.html#jira.JIRA.create_issues
prefetch (bool) – whether to reload the created issue Resource for each created issue so that all of its data is present in the value returned from this method.
However, if you must use concurrent.futures, note that calling jira.create_issue directly will likely fail if the jira object has state that needs to be preserved between calls to create_issue when run asynchronously. Also note, from the concurrent.futures documentation:
If a func call raises an exception, then that exception will be raised when its value is retrieved from the iterator.
I would recommend using a separate jira object for each create_issue if you do not trust the create_issues() function.
import pdb
import concurrent.futures
from jira import JIRA, JIRAError

def create_issue(fields):
    print(fields)
    j_obj = JIRA(...)
    try:
        ret = j_obj.create_issue(fields)
    except JIRAError:
        # Do something here
        ret = False
    return ret

with concurrent.futures.ThreadPoolExecutor() as executor:
    data = executor.map(create_issue, issue_list)

items = [item for item in data]
print(items)
# Interact with result
pdb.set_trace()
When you break into the trace, any successful issues created will be an Issue type, any failures will show up as False. This is just an example, and you can decide what you want to return, in whatever format you need.
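If you'd rather trace each failure back to the fields that caused it instead of scanning for False, a variation with executor.submit and future.exception() keeps that mapping (a sketch with placeholder connection details; fields is the list from the question):

import concurrent.futures
from jira import JIRA, JIRAError

jira = JIRA(server='https://example.atlassian.net',
            basic_auth=('user', 'api-token'))  # placeholders

failed = []
with concurrent.futures.ThreadPoolExecutor() as executor:
    # Map each future back to the fields dict it was created from.
    future_to_fields = {executor.submit(jira.create_issue, f): f for f in fields}
    for future in concurrent.futures.as_completed(future_to_fields):
        if isinstance(future.exception(), JIRAError):
            failed.append(future_to_fields[future])

print('Tickets that could not be uploaded:', failed)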
Working with a response from a Websockets subscription.
The response reads like this:
{'jsonrpc': '2.0', 'method': 'subscription', 'params': {'channel': 'book.BTC-PERPETUAL.none.1.100ms', 'data': {'timestamp': 1588975154127, 'instrument_name': 'BTC-PERPETUAL', 'change_id': 19078703948, 'bids': [[10019.5, 8530.0]], 'asks': [[10020.0, 506290.0]]}}}
And I'm trying to reach the first and only values inside "bids" and "asks" arrays via json.loads()
Code looks like this:
async def __async__get_ticks(self):
    async with self.ws as echo:
        await echo.send(json.dumps(self.request))
        while True:
            response = await echo.receive()
            responseJson = json.loads(response)
            print(responseJson["params"]["data"])
And the error says:
print(responseJson["params"]["data"])
KeyError: 'params'
No matter what I try, it doesn't want to reach any of the JSON past "jsonrpc", for which it successfully returns 2.0. Anything beyond that always comes up with an error.
I tried using .get(), and it helps to go one level deeper, but still not more.
Any ideas on how to parse this properly and reach the bids and asks?
Thank you in advance.
I would suggest using the dict.get() method, but make sure you set it to return an empty dictionary when querying dictionaries that are expected to contain nested dicts.
By default (if you don't pass a second argument to dict.get()), it returns None, which explains why you were only able to go one level deep.
Here's an example:
empty_dict = {}
two_level_dict = {
    "one": {
        "level": "deeper!"
    }
}

# This returns None without raising: the first get falls back to the empty
# dict default, so the chained .get("level") call still succeeds.
first_get = empty_dict.get("one", {}).get("level")

# This returns 'deeper!'
second_get = two_level_dict.get("one", {}).get("level")
print(first_get)
print(second_get)
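Applied to the payload from the question, reaching the first bid and ask then looks like this:

import json

response = '{"jsonrpc": "2.0", "method": "subscription", "params": {"channel": "book.BTC-PERPETUAL.none.1.100ms", "data": {"timestamp": 1588975154127, "instrument_name": "BTC-PERPETUAL", "change_id": 19078703948, "bids": [[10019.5, 8530.0]], "asks": [[10020.0, 506290.0]]}}}'

responseJson = json.loads(response)
data = responseJson.get('params', {}).get('data', {})
best_bid = data.get('bids', [[None, None]])[0]   # [10019.5, 8530.0]
best_ask = data.get('asks', [[None, None]])[0]   # [10020.0, 506290.0]
print(best_bid, best_ask)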