I am trying to create multiple invoices from an array of dictionaries with the create values in Odoo 13.
Creating one record at a time is okay, but when I try a batch create I get the error can't adapt type 'dict'.
I have tried looping through the array and creating a record for each item in it, but the error persists.
I am currently looking at the @api.model_create_multi decorator but haven't fully grasped it yet.
What I want is, for each line in visa_line (similar to an order line), to create an invoice from that line. Some fields needed for creating an invoice are missing, but that should not be the issue.
When I print the record in the final function, it prints the dict with the values correctly.
Here is my code, thank you in advance:
def _prepare_invoice(self):
    journal = self.env['account.move'].with_context(
        default_type='out_invoice')._get_default_journal()
    invoice_vals = {
        'type': 'out_invoice',
        'invoice_user_id': self.csa_id and self.csa_id.id,
        'source_id': self.id,
        'journal_id': journal.id,
        'state': 'draft',
        'invoice_date': self.date,
        'invoice_line_ids': []
    }
    return invoice_vals

def prepare_create_invoice(self):
    invoice_val_dicts = []
    invoice_val_list = self._prepare_invoice()
    for line in self.visa_line:
        invoice_val_list['invoice_partner_bank_id'] = line.partner_id.bank_ids[:1].id,
        invoice_val_list['invoice_line_ids'] = [0, 0, {
            'name': line.code,
            'account_id': 1,
            'quantity': 1,
            'price_unit': line.amount,
        }]
        invoice_val_dicts.append(invoice_val_list)
    return invoice_val_dicts

@api.model_create_multi
def create_invoice(self, invoices_dict):
    invoices_dict = self.prepare_create_invoice()
    for record in invoices_dict:
        print(record)
        records = self.env['account.move'].create(record)
I fixed this issue by explicitly casting each record to dict, using a normal create method without the @api.model_create_multi decorator.
def create_invoice(self):
    invoices_dict = self.prepare_create_invoice()
    for record in invoices_dict:
        records = self.env['account.move'].create(dict(record))
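For what it's worth, the likely reason psycopg2 cannot adapt the value is the trailing comma in invoice_partner_bank_id = line.partner_id.bank_ids[:1].id, which turns the value into a one-element tuple, and one2many values are normally written as a list of (0, 0, {...}) command triples rather than a bare [0, 0, {...}] list. Assuming account.move.create accepts a list of value dicts (it is decorated with @api.model_create_multi in standard Odoo 13), a minimal sketch of a batched version could look like this:

def create_invoice(self):
    invoice_vals_list = []
    for line in self.visa_line:
        # Build a fresh vals dict per line; reusing one dict would make
        # every appended entry point at the same object.
        vals = self._prepare_invoice()
        # No trailing comma here -- a trailing comma makes this a tuple.
        vals['invoice_partner_bank_id'] = line.partner_id.bank_ids[:1].id
        # One2many command triples: (0, 0, {...}) creates a new line.
        vals['invoice_line_ids'] = [(0, 0, {
            'name': line.code,
            'account_id': 1,  # placeholder account id from the question
            'quantity': 1,
            'price_unit': line.amount,
        })]
        invoice_vals_list.append(vals)
    # One batched call instead of one create() per record.
    return self.env['account.move'].create(invoice_vals_list)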
Related
I'm writing a Python program to save and retrieve customer data in Cloud Datastore. My entity looks like this:
entity.update({
    'customerId': args['customerId'],
    'name': args['name'],
    'email': args['email'],
    'city': args['city'],
    'mobile': args['mobile']
})
datastore_client.put(entity)
I'm successfully saving the data. Now I want to retrieve a random email ID from a record. I have written the below code:
def get_customer():
    query = datastore_client.query(kind='CustomerKind')
    results = list(query.fetch())
    chosen_customer = random.choice(results)
    print(chosen_customer)
But instead of getting only one random email ID, I'm getting the entire entity, like this:
<Entity('CustomerKind', 6206716152643584) {'customerId': '103', 'city': 'bhubaneswar', 'name': 'Amit', 'email': 'amit@gmail.com', 'mobile': '7879546732'}>

Can anyone suggest how I can get only 'email': 'amit@gmail.com'? I'm new to Datastore.
When using
query = datastore_client.query(kind='CustomerKind')
results = list(query.fetch())
you are retrieving all the properties from all the entities that will be returned.
Instead, you can use a projection query, which allows you to retrieve only the specified properties from the entities:
query = client.query(kind="CustomerKind")
query.projection = ["email"]
results = list(query.fetch())
Using projection queries is recommended for cases like this, in which you only need some properties as they reduce cost and latency.
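Each entity returned by a projection query behaves like a dictionary containing only the projected properties, so picking one random address could look like this sketch (reusing the random module from the question):

import random

query = datastore_client.query(kind='CustomerKind')
query.projection = ["email"]
results = list(query.fetch())

# Each result carries only the "email" property.
random_email = random.choice(results)["email"]
print(random_email)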
I'm performing what I imagine is a common pattern with indexing graph databases: my data is a list of edges, and I want to "stream" the upload of this data. That is, for each edge, I want to create the two nodes on each side and then create the edge between them; I don't want to first upload all the nodes and then link them afterwards. A naive implementation would obviously result in a lot of duplicate nodes, so I want to implement some sort of "get_or_create" to avoid duplication.
My current implementation is below, using pyArango:
def get_or_create_graph(self):
    db = self._get_db()
    if db.hasGraph('citator'):
        self.g = db.graphs["citator"]
        self.judgment = db["judgment"]
        self.citation = db["citation"]
    else:
        self.judgment = db.createCollection("judgment")
        self.citation = db.createCollection("citation")
        self.g = db.createGraph("citator")

def get_or_create_node_object(self, name, vertex_data):
    object_list = self.judgment.fetchFirstExample(
        {"name": name}
    )
    if object_list:
        node = object_list[0]
    else:
        node = self.g.createVertex('judgment', vertex_data)
        node.save()
    return node
My problems with this solution are:
Since the application, not the database, is checking existence, there could be an insertion between the existence check and the creation. I have found duplicate nodes in practice, and I suspect this is why.
It isn't very fast, probably because it potentially hits the database twice.
I am wondering whether there is a faster and/or more atomic way to do this, ideally a native ArangoDB query? Suggestions? Thank you.
Update
As requested, the calling code is shown below. It's in a Django context, where Link is a Django model (i.e. data in a database):
... # Class definitions etc.

links = Link.objects.filter(dirty=True)
for i, batch in enumerate(batch_iterator(links, limit=LIMIT, batch_size=ITERATOR_BATCH_SIZE)):
    for link in batch:
        source_name = cleaner.clean(link.case.mnc)
        target_name = cleaner.clean(link.citation.case.mnc)
        if source_name == target_name:
            continue
        source_data = _serialize_node(link.case)
        target_data = _serialize_node(link.citation.case)
        populate_pair(citation_manager, source_name, source_data, target_name, target_data, link)

def populate_pair(citation_manager, source_name, source_data, target_name, target_data, link):
    source_node = citation_manager.get_or_create_node_object(
        source_name,
        source_data
    )
    target_node = citation_manager.get_or_create_node_object(
        target_name,
        target_data
    )
    description = source_name + " to " + target_name
    citation_manager.populate_link(source_node, target_node, description)
    link.dirty = False
    link.save()
And here's a sample of what the data looks like after cleaning and serializing:
source_data: {'name': 'P v R A Fu', 'court': 'ukw', 'collection': 'uf', 'number': 'CA 139/2009', 'tag': 'NA', 'node_id': 'uf89638', 'multiplier': '5.012480529547776', 'setdown_year': 0, 'judgment_year': 0, 'phantom': 'false'}
target_data: {'name': 'Ck v R A Fu', 'court': 'ukw', 'collection': 'uf', 'number': '10/22147', 'tag': 'NA', 'node_id': 'uf67224', 'multiplier': '1.316227766016838', 'setdown_year': 0, 'judgment_year': 0, 'phantom': 'false'}
source_name: [2010] ZAECGHC 9
target_name: [2012] ZAGPJHC 189
I don't know how to do it with the Python driver, but it could be done using AQL:

FOR doc IN judgment
    FILTER doc.name == "name"
    LIMIT 1
    INSERT MERGE(vertexObject, { _from: doc._id }) INTO citator

The vertexObject needs to be an AQL object with at least the _to value.
Note: there may be typos, I'm answering from my phone.
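For the atomicity part of the question, AQL also has an UPSERT operation that performs the lookup and the insert in a single statement, which closes the check-then-insert race of the application-side version (at least on a single server). A sketch of a get_or_create using pyArango's AQLQuery with bind variables, reusing names from the question:

aql = """
UPSERT { name: @name }
INSERT @vertex_data
UPDATE {}
IN judgment
RETURN NEW
"""
# UPDATE {} leaves an existing document untouched, so this behaves as
# get-or-create; RETURN NEW yields the matched or newly inserted vertex.
result = db.AQLQuery(aql, bindVars={"name": name, "vertex_data": vertex_data}, rawResults=True)
node = result[0]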
I have a list in Python with Twitter user information and exported it to an Excel file with Pandas.
One row is one Twitter user, with nearly all of the user's information (name, @-tag, location, etc.).
Here is my code to create the list and fill it with the user data:
def get_usernames(userids, api):
    fullusers = []
    u_count = len(userids)
    try:
        for i in range(int(u_count/100) + 1):
            end_loc = min((i + 1) * 100, u_count)
            fullusers.extend(
                api.lookup_users(user_ids=userids[i * 100:end_loc])
            )
        print('\n' + 'Done! We found ' + str(len(fullusers)) + ' followers in total for this account.' + '\n')
        return fullusers
    except:
        import traceback
        traceback.print_exc()
        print('Something went wrong, quitting...')
The only problem is that every row is one JSON object and therefore one long comma-separated string. I would like to create headers (no problem with Pandas) and write only parts of the string (e.g. the ID or name) to columns.
Here is an example of a row from my output.xlsx:
User(_api=<tweepy.api.API object at 0x16898928>, _json={'id': 12345, 'id_str': '12345', 'name': 'Jane Doe', 'screen_name': 'jdoe', 'location': 'Nirvana, NI', 'description': 'Just some random descrition')
I have two ideas, but I don't know how to realize them due to my lack of skill and experience with Python.
Create a loop which saves certain parts ('id', 'name', etc.) of the JSON string in columns.
Cut off the User(_api=<tweepy.api.API object at 0x16898928>, _json={ at the beginning and ) at the end, so that I can export the file as CSV.
Could anyone help me out with one of my two solutions or suggest a "simple" way to do this?
FYI: I want to do this to gather data for my thesis.
Try the Python json library:

import json

# json.loads requires valid JSON (double-quoted strings), so this is
# the question's sample rewritten accordingly.
jsonstring = '{"id": 12345, "id_str": "12345", "name": "Jane Doe", "screen_name": "jdoe", "location": "Nirvana, NI", "description": "Just some random descrition"}'
jsondict = json.loads(jsonstring)
# type(jsondict) == dict

Now you can just extract the data you want from it:

id = jsondict["id"]
name = jsondict["name"]
newdict = {"id": id, "name": name}
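Since the rows come from tweepy User objects rather than raw strings, an alternative sketch (assuming the parsed payload is available as the _json attribute, which the repr in the question suggests) skips string handling entirely and builds the spreadsheet from selected fields:

import pandas as pd

# fullusers is the list returned by get_usernames()
rows = [
    {
        "id": user._json["id"],
        "name": user._json["name"],
        "screen_name": user._json["screen_name"],
        "location": user._json["location"],
    }
    for user in fullusers
]
pd.DataFrame(rows).to_excel("output.xlsx", index=False)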
I have a dictionary for creating insert statements for a test I'm doing. The insert value for the description field needs to have the id of the current row, WHICH I DO NOT HAVE until I run the program. Also, that ID increments by 1 each time I insert, and the description for each insert has to have its corresponding row_num.
I want to load a dictionary of all the fields in the table in advance, so I can use the information in it to create the insert and alter statements for my test. I don't want to hardcode the test_value of a field in the code; I want what's supposed to be in it to be defined in the dictionary, and calculated at runtime. The dictionary is meant to be a template for what I want the value of the field to be.
I am getting the max id from the database, and adding 1 to it. That's the row number. I want the value that's being inserted for the description to be, for example, Row Num: {row_num} - Num Inserts {num_inserts} - Wait Time {wait_time}. I have the num_inserts and the wait_time from a config file. They are defined in advance.
I am getting NameError: name 'row_num' is not defined no matter how I've tried to define row_num in this dictionary. When I import the dictionary, the row_num isn't available yet, hence the error.
Here's a small snippet of my database fields dictionary (users is the table in this example):
all_fields_dict = {
    'users': {
        'first_name': {
            'db_field': 'FirstName',
            'datatype': 'varchar(50)',
            'test_value': {utils.calc_field_value(['zfill', 'FirstName'])},  # another attempt that didn't work
            'num_bool': False
        },
        'username': {
            'db_field': 'username',
            'datatype': 'varchar(50)',
            'test_value': f"user{utils.get_random_str(5)}",  # this works, but it's a diff kind of calculation
            'num_bool': False,
        },
        'description': {
            'db_field': 'description',
            'datatype': 'text',
            'test_value': f"{utils.get_desc_info(row_num)}",  # one of my attempts - fails
            'num_bool': False,
        },
    }
}
Among other things, I have tried:

Referencing row_num directly:

test_value: f"{row_num}"

Calling a function that returns the row num:

def get_row_num():
    return row_num

test_value: f"{utils.get_row_num()}"

Calling a function that CALLS the get_row_num function:

def get_desc_info():
    row_num = get_row_num()
    return f"Row Num: {row_num} - Wait Time: {wait_time} - Total Inserts: {num_inserts}"

test_value: f"{utils.get_desc_info()}"

I've even tried creating a function with a switcher that returns the get_row_num function if 'rnum' is passed in as the test_value:

def calc_field_value(type):
    switcher = {
        'rnum': get_row_num(),
        # etc.
    }
    return switcher[type]

test_value: f"{utils.calc_field_value('rnum')}"
I've tried declaring it as global in just about every place I can think of.
I haven't tried eval, because of all the security warnings I've read about it.
Same thing, every single time.
Initialize test_value to some placeholder value, or simply don't set a value at all.
Then, later in the code, when you do know the value, update the dict.
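If you want the template itself to say "compute this later", one option (a sketch, not the asker's actual utils module) is to store a callable in test_value and invoke it once row_num is known:

all_fields_dict = {
    'users': {
        'description': {
            'db_field': 'description',
            'datatype': 'text',
            # A lambda defers evaluation until call time, so importing
            # the dictionary no longer requires row_num to exist.
            'test_value': lambda row_num: f"Row Num: {row_num}",
            'num_bool': False,
        },
    }
}

# Later, once the max id has been fetched from the database:
row_num = max_id + 1  # max_id assumed fetched earlier
field = all_fields_dict['users']['description']
value = field['test_value'](row_num)  # e.g. "Row Num: 42"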
I have this method that writes JSON data to a file. The title is based on the book, and the data is the book's publisher, date, author, etc. The method works fine if I want to add one book.
Code
import json

def createJson(title,firstName,lastName,date,pageCount,publisher):
    print "\n*** Inside createJson method for " + title + "***\n"
    data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json','a') as outfile:
        json.dump(data, outfile, default=set_default)

def set_default(obj):
    if isinstance(obj,set):
        return list(obj)

if __name__ == '__main__':
    createJson("stephen-king-it","stephen","king","1971","233","Viking Press")
JSON File with one book/one method call
{
    "stephen-king-it": [
        ["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
    ]
}
However, if I call the method multiple times, thus adding more book data to the JSON file, the format is all wrong. For instance, if I simply call the method twice with a main method of
if __name__ == '__main__':
    createJson("stephen-king-it","stephen","king","1971","233","Viking Press")
    createJson("william-golding-lord of the flies","william","golding","1944","134","Penguin Books")
My JSON file looks like
{
    "stephen-king-it": [
        ["pageCount:233", "publisher:Viking Press", "firstName:stephen", "date:1971", "lastName:king"]
    ]
} {
    "william-golding-lord of the flies": [
        ["pageCount:134", "publisher:Penguin Books", "firstName:william", "lastName:golding", "date:1944"]
    ]
}
This is obviously wrong. Is there a simple fix to edit my method to produce a correct JSON format? I have looked at many simple examples online of writing JSON data in Python, but all of them gave me format errors when I checked on JSONLint.com. I have been racking my brain to fix this problem and editing the file to make it correct, but all my efforts were to no avail. Any help is appreciated. Thank you very much.
Simply appending new objects to your file doesn't create valid JSON. You need to add your new data inside the top-level object, then rewrite the entire file.
This should work:
def createJson(title,firstName,lastName,date,pageCount,publisher):
    print "\n*** Inside createJson method for " + title + "***\n"
    # Load any existing json data, or start from an empty object if the
    # file is missing, empty, or not valid JSON. (IOError covers the
    # missing file in Python 2; ValueError covers an empty or malformed
    # file.)
    try:
        with open('data.json') as infile:
            data = json.load(infile)
    except (IOError, ValueError):
        data = {}
    data[title] = []
    data[title].append({
        'firstName:', firstName,
        'lastName:', lastName,
        'date:', date,
        'pageCount:', pageCount,
        'publisher:', publisher
    })
    with open('data.json','w') as outfile:
        json.dump(data, outfile, default=set_default)
A JSON document can be either an array or an object (dictionary). In your case the file has two top-level objects, one with the key stephen-king-it and another with william-golding-lord of the flies. Either of these on its own would be okay, but the way you combine them is invalid.
Using an array you could do this:
[
    { "stephen-king-it": [] },
    { "william-golding-lord of the flies": [] }
]
Or a dictionary style format (I would recommend this):
{
    "stephen-king-it": [],
    "william-golding-lord of the flies": []
}
Also, the data you are appending looks like it should be formatted as key-value pairs in a dictionary (which would be ideal). As written, {...} with comma-separated items is a Python set literal, not a dict, which is why you needed the set_default hook in the first place. You need to change it to this:
data[title].append({
    'firstName': firstName,
    'lastName': lastName,
    'date': date,
    'pageCount': pageCount,
    'publisher': publisher
})
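With both changes in place (rewriting the whole file and appending dicts instead of sets), calling the method twice should produce a single valid top-level object, roughly like this:

{
    "stephen-king-it": [
        {"firstName": "stephen", "lastName": "king", "date": "1971", "pageCount": "233", "publisher": "Viking Press"}
    ],
    "william-golding-lord of the flies": [
        {"firstName": "william", "lastName": "golding", "date": "1944", "pageCount": "134", "publisher": "Penguin Books"}
    ]
}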