I have written code to migrate data from SQL Server 2008 to PostgreSQL using OpenERPLib in Python for OpenERP. I want to set the value of the "categ_id" column, of type "Many2one", on the "crm.opportunity2phonecall" object. Below is my existing code.
SCHEDULECALL_MODEL = OECONN.get_model("crm.opportunity2phonecall")
scheduleCall = {
    'name': 'test',
    'action': ['schedule'],
    'phone': "123456",
    'user_id': 1,
    'categ_id': 10,
    'note': mail['body'],
}
SCHEDULECALL_MODEL.create(scheduleCall)
In the above code I have hard-coded the value 10 for the "categ_id" field, as per my requirement. When I execute the above code, it gives me this error:
TypeError: unhashable type: 'list'
Try assigning a list instead of an integer, as follows:
'categ_id': [10]
Anyway, as Atul said in his comment, update OpenERP over XML-RPC; it is safe and stable, and supports different versions of OpenERP.
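For reference, this is roughly what the XML-RPC route looks like, as a minimal sketch; the server URL, database name and credentials below are placeholders, not values from the question:

import xmlrpclib  # xmlrpc.client on Python 3

URL = 'http://localhost:8069'             # assumed OpenERP server location
DB, USER, PWD = 'mydb', 'admin', 'admin'  # placeholder credentials

# Authenticate and get the user id
common = xmlrpclib.ServerProxy('%s/xmlrpc/common' % URL)
uid = common.login(DB, USER, PWD)

# Call create() on the model through the object endpoint
models = xmlrpclib.ServerProxy('%s/xmlrpc/object' % URL)
call_id = models.execute(DB, uid, PWD, 'crm.opportunity2phonecall',
                         'create', {'name': 'test', 'categ_id': 10})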
Okay, I got the solution.
What I did was define a method in Python that returns the categ_id, and set its return value in the "scheduleCall" dict; surprisingly, it works. Here is my code.
SCHEDULECALL_MODEL = OECONN.get_model("crm.opportunity2phonecall")
scheduleCall = {
    'name': 'test',
    'action': ['schedule'],
    'phone': "123456",
    'user_id': 1,
    'categ_id': get_categid_by_name('Outbound'),
    'note': mail['body'],
}
SCHEDULECALL_MODEL.create(scheduleCall)
And here is the method that I defined.
CATEG_MODEL = OECONN.get_model("crm.case.categ")

def get_categid_by_name(name):
    """Return the category id for the given category name."""
    ids = CATEG_MODEL.search([('name', '=', name)])
    categ_id = ids[0]
    return categ_id
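If there is any chance the category name does not exist, a slightly more defensive variant (my own sketch, not part of the original answer) avoids an IndexError on an empty search result:

def get_categid_by_name(name):
    """Return the category id for the given name, or False if not found."""
    ids = CATEG_MODEL.search([('name', '=', name)])
    return ids[0] if ids else False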
Hope it helps others.
I have been working with Boto3 to describe all load balancers available in the account. I used the following snippet of code:
import boto3

elbv2 = boto3.client('elbv2', aws_access_key_id=access_key_id,
                     aws_secret_access_key=secret_key, region_name=region)
response = elbv2.describe_load_balancers()
print(response)
The response here stores the dict with all the information, like so:
{
    'LoadBalancers': [{
        'LoadBalancerArn': 'arn:aws:elasticloadbalancing:ap-south-1:407203256002:loadbalancer/net/aws-lb-02/9d4b15bfd6f579d3',
        'DNSName': 'aws-lb-02-9d4b15bfd6f579d3.elb.ap-south-1.amazonaws.com',
        'CanonicalHostedZoneId': 'ZVDDRBQ08TROA',
        'CreatedTime': datetime.datetime(2021, 3, 31, 11, 45, 6, 729000, tzinfo=tzutc()),
        'LoadBalancerName': 'aws-lb-02',
        'Scheme': 'internet-facing',
        'VpcId': 'vpc-0be01860',
        'State': {
            'Code': 'active'
        },
        'Type': 'network',
        'AvailabilityZones': [{
            'ZoneName': 'ap-south-1a',
            'SubnetId': 'subnet-ed5fb986',
            'LoadBalancerAddresses': []
        }, {
            'ZoneName': 'ap-south-1b',
            'SubnetId': 'subnet-89d285c5',
            'LoadBalancerAddresses': []
        }]
    }]
}
I want to access LoadBalancerAddresses, which I tried like this:
LoadBalancers = response['LoadBalancers']
for i in LoadBalancers:
    AvailabilityZones = i['AvailabilityZones']
    for j in AvailabilityZones:
        LoadBalancerAddresses = i['LoadBalancerAddresses']
However, it throws a KeyError for LoadBalancerAddresses, which I fail to understand.
Please help me figure out how I should access this value.
You can use a nested list comprehension here, like this:
addresses = [x['LoadBalancerAddresses'] for res in response['LoadBalancers'] for x in res['AvailabilityZones']]
or with ordinary nested loops:
addresses = []
for bal in response['LoadBalancers']:
    for zones in bal['AvailabilityZones']:
        addresses += zones['LoadBalancerAddresses']
You mistyped j['LoadBalancerAddresses'] as i['LoadBalancerAddresses']. Since the items of response['LoadBalancers'] have no key named LoadBalancerAddresses, your program throws a KeyError.
The fixed version:
LoadBalancers = response['LoadBalancers']
for i in LoadBalancers:
    AvailabilityZones = i['AvailabilityZones']
    for j in AvailabilityZones:
        LoadBalancerAddresses = j['LoadBalancerAddresses']
As a safety option, it's good practice to check if the key exists before you access it, such as:
for j in AvailabilityZones:
    if "LoadBalancerAddresses" in j:
        LoadBalancerAddresses = j['LoadBalancerAddresses']
    else:
        print("The key does not exist")
Hopefully a pretty easy question follows. When I get an item with Pyrebase's .get() method, like so:
for company_id in game[company_type]:
    pyre_company = db.child("companies/data").order_by_child("id").equal_to(company_id).limit_to_first(1).get()
    company = pyre_company.val()
    print(company)
    break  # Run only once for testing purposes
I get the following output, even though I use .val():
OrderedDict([('-LEw2zHYiJ6p15iBhKuZ', {'id': 427, 'name': 'Bugbear Entertainment', 'type': 'developer'})])
But I only want the JSON object:
{'id': 427, 'name': 'Bugbear Entertainment', 'type': 'developer'}
This is because
db.child("companies/data").order_by_child("id").equal_to(company_id).limit_to_first(1).get()
is a Query, since you call the orderByChild() method (as well as the equalTo() method, by the way) on a Reference.
As explained here in the JavaScript SDK doc:
Even when there is only a single match for the query, the snapshot is still a
list; it just contains a single item. To access the item,
you need to loop over the result:
ref.once('value', function(snapshot) {
  snapshot.forEach(function(childSnapshot) {
    var childKey = childSnapshot.key;
    var childData = childSnapshot.val();
    // ...
  });
});
With pyrebase you should use the each() method, as explained here, which "Returns a list of objects on each of which you can call val() and key()".
pyre_company = db.child("companies/data").order_by_child("id").equal_to(company_id).limit_to_first(1).get()
for company in pyre_company.each():
    print(company.val())  # {'id': 427, 'name': 'Bugbear Entertainment', 'type': 'developer'}
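If you are certain the query returns exactly one match, you can also unpack the OrderedDict returned by val() directly (my own sketch, not from the Pyrebase docs):

result = pyre_company.val()  # OrderedDict keyed by the Firebase push id
company = next(iter(result.values()))  # {'id': 427, 'name': 'Bugbear Entertainment', 'type': 'developer'}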
I want to change a field's type from dict to string for a particular user.
DOMAIN = {
    'item': {
        'schema': {
            'profile': {
                'type': 'dict'
            },
            'username': {
                'type': 'string'
            }
        }
    }
}
Suppose I get a request from user x: the type should not change. If I get a request from user y, the type should change from dict to string. How do I change this for a particular item resource without affecting others?
TIA.
Your best approach would probably be to set up two different API endpoints, one for users of type X, and another for users of type Y. Both endpoints would consume the same underlying datasource (same DB collection being updated). You achieve that by setting the datasource for your endpoint, like so:
itemx = {
    'url': 'endpoint_1',
    'datasource': {
        'source': 'people',              # actual DB collection consumed by the endpoint
        'filter': {'usertype': 'x'},     # optional
        'projection': {'username': 1}    # optional
    },
    'schema': {...}  # here you set the field's type to dict, or string
}
Rinse and repeat for the second endpoint. See the docs for more info.
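The second endpoint would look much the same; here is a minimal sketch, where the URL and the field whose type flips are illustrative, following the pattern above:

itemy = {
    'url': 'endpoint_2',                 # hypothetical URL for type-Y users
    'datasource': {
        'source': 'people',              # same underlying collection
        'filter': {'usertype': 'y'}
    },
    'schema': {
        'profile': {'type': 'string'},   # the type that differs for these users
        'username': {'type': 'string'}
    }
}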
I am using the jira-python library to create an issue in JIRA. However, I cannot get the syntax right for setting cascading-select values. The code below creates an issue and works for the first (parent) select in the cascading select, but not the second (child). Can anyone tell me what I'm missing?
from jira import JIRA

jira = JIRA(options, basic_auth=('auth_email', 'auth_pw'))
issue_dict = {
    'project': {'key': 'AT'},  # key for project
    'summary': 'Summary Message',
    'description': 'Not important',
    'issuetype': {'name': 'Bug'},
    'customfield_10207': {'value': 'test val2'},     # Updates first cascading select
    'customfield_10207+1': {'value': 'test test2'},  # Fails
}
new_issue = jira.create_issue(fields=issue_dict)
(customfield_10207 and customfield_10207+1 are the cascading select.) The problem is with customfield_10207+1, which I expected to correspond to the second select list.
Looking at some Atlassian forum docs, you need to do the following:
{
    "update": {
        "customfield_11272": [{"set": {"value": "External Customer (Worst)", "child": {"value": "Production"}}}]
    }
}
Apparently the + and : syntax doesn't work :(
Update:
Adding the actual solution:
issue_dict = {
    'project': {'key': 'AT'},
    'customfield_10207': {'value': 'test val2', 'child': {'value': 'test test2'}},
}
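The dict is then passed to jira.create_issue() exactly as in the question, for example (the printed key is only illustrative):

new_issue = jira.create_issue(fields=issue_dict)
print(new_issue.key)  # e.g. 'AT-123'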
In the Python Eve framework, is it possible to have a constraint that requires the combination of two fields to be unique?
For example, the definition below only restricts firstname and lastname to be individually unique for items in the resource.
people = {
    # 'title' tag used in item links.
    'item_title': 'person',
    'schema': {
        'firstname': {
            'type': 'string',
            'required': True,
            'unique': True
        },
        'lastname': {
            'type': 'string',
            'required': True,
            'unique': True
        }
    }
}
Instead, is there a way to restrict firstname and lastname combination to be unique?
Or is there a way to implement a CustomValidator for this?
You can probably achieve what you want by overloading _validate_unique and implementing custom logic there, taking advantage of self.document in order to retrieve the other field's value.
However, since _validate_unique is called for every unique field, you would end up performing your custom validation twice, once for firstname and then for lastname. Not really desirable. Of course the easy way out is setting up a fullname field, but I guess that's not an option in your case.
Have you considered going for a slightly different design? Something like:
{'name': {'first': 'John', 'last': 'Doe'}}
Then all you need is to make sure that name is required and unique:
{
    'name': {
        'type': 'dict',
        'required': True,
        'unique': True,
        'schema': {
            'first': {'type': 'string'},
            'last': {'type': 'string'}
        }
    }
}
Inspired by Nicola and _validate_unique.
from eve.io.mongo import Validator
from eve.utils import config
from flask import current_app as app


class ExtendedValidator(Validator):
    def _validate_unique_combination(self, unique_combination, field, value):
        """ {'type': 'list'} """
        self._is_combination_unique(unique_combination, field, value, {})

    def _is_combination_unique(self, unique_combination, field, value, query):
        """ Test if the value combination is unique. """
        if unique_combination:
            query = {k: self.document[k] for k in unique_combination}
            query[field] = value

            resource_config = config.DOMAIN[self.resource]

            # exclude soft deleted documents if applicable
            if resource_config['soft_delete']:
                query[config.DELETED] = {'$ne': True}

            if self.document_id:
                id_field = resource_config['id_field']
                query[id_field] = {'$ne': self.document_id}

            datasource, _, _, _ = app.data.datasource(self.resource)

            if app.data.driver.db[datasource].find_one(query):
                key_names = ', '.join([k for k in query])
                self._error(field, "value combination of '%s' is not unique" % key_names)
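To wire this up (my own sketch of how such a custom rule is typically registered, not part of the original answer), you pass the validator class to the Eve app and reference the new rule in the schema; the rule usage below is illustrative:

from eve import Eve

app = Eve(validator=ExtendedValidator)

# In settings.py the schema would declare the combination on one field, e.g.:
# 'firstname': {'type': 'string', 'unique_combination': ['lastname']},
# 'lastname': {'type': 'string'},

if __name__ == '__main__':
    app.run()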
The way I solved this is by creating a dynamic field, using a combination of functions and lambdas to build a hash from whichever fields you provide.
import hashlib

import jmespath


def unique_record(fields):
    def is_lambda(field):
        # Test if a variable is a lambda
        return callable(field) and field.__name__ == "<lambda>"

    def default_setter(doc):
        # Generate the composite list
        r = [
            str(field(doc)
                # Check if it is a lambda
                if is_lambda(field)
                # jmespath is not required, but it enables using nested doc values
                else jmespath.search(field, doc))
            for field in fields
        ]
        # Generate an MD5 hash from the composite string (keep it clean)
        return hashlib.md5(''.join(r).encode()).hexdigest()

    return {
        'type': 'string',
        'unique': True,
        'default_setter': default_setter
    }
Practical Implementation
My use case was to create a collection that limits the number of key-value pairs a user can create within the collection.
from flask import request

domain = {
    'schema': {
        'key': {
            'type': 'string',
            'minlength': 1,
            'maxlength': 25,
            'required': True,
        },
        'value': {
            'type': 'string',
            'minlength': 1,
            'required': True
        },
        'hash': unique_record([
            'key',
            lambda doc: request.USER['_id']
        ]),
        'user': {
            'type': 'objectid',
            'default_setter': lambda doc: request.USER['_id']  # user tenant ID
        }
    }
}
The function receives a list of either strings or lambda functions for dynamic value setting at request time; in my case, the user's "_id".
The function supports JSON queries via the JMESPath package. This isn't mandatory, but it leaves the door open for nested doc flexibility in other use cases.
NOTE: This will only work with values that are set by the user at request time, or injected into the request using the pre_GET trigger pattern, like the USER object I inject in the pre_GET trigger, which represents the user currently making the request.
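For context, such a pre-request hook could look roughly like the sketch below. This is my own illustration of the pattern described above; the USER attribute name follows the answer, while get_current_user() is a hypothetical stand-in for however your app resolves the authenticated user:

from eve import Eve


def get_current_user():
    # Hypothetical stand-in: in a real app this comes from your auth layer.
    return {'_id': 'demo-user-id'}


def inject_user(resource, req, lookup=None):
    # Stash the current user on the request object so the default_setter
    # lambdas above can read it as request.USER.
    req.USER = get_current_user()


app = Eve()
# Register the hook for the HTTP methods where the lambdas need request.USER
app.on_pre_GET += inject_user
app.on_pre_POST += inject_user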