issue in one2many write - python

In the code below I get two rows as a result. I then create records from those results and assign them to the one2many field (inventory_line), but only one row is displayed. I want all of the created values listed in the one2many. How can I fix this issue?
@api.multi
def _inventory(self):
    result = {}
    if not self:
        return result
    print("Trueeeeeeeeeeeeee")
    inventory_obj = self.env['tpt.product.inventory']
    print(inventory_obj, "inventory_obj")
    for id in self:
        print(id, "id")
        result.setdefault(id, [])
        sql = 'delete from tpt_product_inventory where product_id=%s' % (id.id)
        print(sql, "sql")
        self._cr.execute(sql)
        sql = '''
            select foo.loc, foo.prodlot_id, foo.id as uom, sum(foo.product_qty) as ton_sl, foo.product_id from
                (select l2.id as loc, st.prodlot_id, pu.id, st.product_qty, st.product_id
                 from stock_move st
                 inner join stock_location l2 on st.location_dest_id = l2.id
                 inner join product_uom pu on st.product_uom = pu.id
                 where st.state='done' and st.product_id=%s and l2.usage = 'internal'
                 union all
                 select l1.id as loc, st.prodlot_id, pu.id, st.product_qty*-1, st.product_id
                 from stock_move st
                 inner join stock_location l1 on st.location_id = l1.id
                 inner join product_uom pu on st.product_uom = pu.id
                 where st.state='done' and st.product_id=%s and l1.usage = 'internal'
                ) foo
            group by foo.loc, foo.prodlot_id, foo.id, foo.product_id
        ''' % (id.id, id.id)
        self._cr.execute(sql)
        for inventory in self._cr.dictfetchall():
            print(inventory, "inventory")
            new_id = inventory_obj.create({
                'warehouse_id': inventory['loc'],
                'product_id': inventory['product_id'],
                'prodlot_id': inventory['prodlot_id'],
                'hand_quantity': inventory['ton_sl'],
                'uom_id': inventory['uom'],
            })
            print(new_id, "new_id")
            self.inventory_line = new_id

I think you would be better off creating an SQL view for this scenario and associating it with the model tpt.product.inventory; then you could remove all of the code you are using to match the records (the delete and the re-create).
You could find a very similar example here:
https://github.com/odoo/odoo/blob/695050dd10e786d7b316f6e7e40418441cf0c8dd/addons/stock/report/report_stock_forecast.py
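For illustration, a SQL-view-backed model in Odoo generally looks like the sketch below. The field names are assumed from the question's create() call, and the view body is only a simplified stand-in for the question's full query, not the exact code from the linked report:

from odoo import fields, models, tools

class TptProductInventory(models.Model):
    _name = 'tpt.product.inventory'
    _auto = False  # backed by the SQL view below, not a regular table

    warehouse_id = fields.Many2one('stock.location', readonly=True)
    product_id = fields.Many2one('product.product', readonly=True)
    hand_quantity = fields.Float(readonly=True)

    def init(self):
        tools.drop_view_if_exists(self._cr, 'tpt_product_inventory')
        self._cr.execute("""
            CREATE VIEW tpt_product_inventory AS (
                SELECT row_number() OVER () AS id,
                       l.id AS warehouse_id,
                       st.product_id AS product_id,
                       sum(st.product_qty) AS hand_quantity
                FROM stock_move st
                JOIN stock_location l ON st.location_dest_id = l.id
                WHERE st.state = 'done' AND l.usage = 'internal'
                GROUP BY l.id, st.product_id
            )
        """)

Because the view is rebuilt on module update and computed on read, there is nothing to delete and recreate, and the one2many always reflects the current stock moves.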

Odoo sales reports pipeline filter by res partner category

In Sales -> Reports -> Pipeline I would like to allow filtering by res.partner.category.
In Odoo, res.partner has a field category_id:
category_id = fields.Many2many('res.partner.category', column1='partner_id',
                               column2='category_id', string='Tags', default=_default_category)
I tried copying that field definition to my crm_opportunity_report (which inherits crm.opportunity.report), but I get errors.
I also tried adding the field
category_ids = fields.Many2many(comodel_name='res.partner.category', relation="res_partner_res_partner_category_rel",
                                column1='category_id', column2='partner_id')
and this failed too.
How can I add the category name as a filter to crm_opportunity_report? What can be done to allow filtering by category?
Here's a partial solution (based on the discussion; see the comments on the question). It builds a string ("'Tagname1';'Tagname2';'Tagname3';...") from the tag names to filter on:
SELECT
c.id,
c.name as name,
c.date_deadline,
c.date_open as opening_date,
c.date_closed as date_closed,
c.date_last_stage_update as date_last_stage_update,
c.user_id,
c.probability,
c.stage_id,
stage.name as stage_name,
c.type,
c.company_id,
c.priority,
c.team_id,
(SELECT COUNT(*)
FROM mail_message m
WHERE m.model = 'crm.lead' and m.res_id = c.id) as nbr_activities,
c.active,
c.campaign_id,
c.source_id,
c.medium_id,
c.partner_id,
c.city,
c.country_id,
c.planned_revenue as total_revenue,
c.planned_revenue*(c.probability/100) as expected_revenue,
c.create_date as create_date,
extract('epoch' from (c.date_closed-c.create_date))/(3600*24) as delay_close,
abs(extract('epoch' from (c.date_deadline - c.date_closed))/(3600*24)) as delay_expected,
extract('epoch' from (c.date_open-c.create_date))/(3600*24) as delay_open,
c.lost_reason,
c.date_conversion as date_conversion,
COALESCE(rp.customer, FALSE) as is_customer,
COALESCE(x.Categories, '') AS Categories
FROM
"crm_lead" c
LEFT JOIN "res_partner" rp ON rp.id = c.partner_id
LEFT JOIN "crm_stage" stage ON stage.id = c.stage_id
LEFT JOIN
(
SELECT rp.id AS partner_id, array_to_string(array_agg(''''||rpc.name||'''' ORDER BY rp.id, rpc.name),';') AS Categories
FROM res_partner_res_partner_category_rel rpcl
JOIN res_partner_category rpc ON rpc.id = rpcl.category_id
JOIN res_partner rp ON rp.id = rpcl.partner_id
GROUP BY rp.id
ORDER BY rp.id
) AS x ON x.partner_id = c.partner_id
GROUP BY c.id, stage.name, COALESCE(rp.customer, FALSE), COALESCE(x.Categories, '')
ORDER BY c.partner_id
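To surface that aggregated string in the Pipeline report's search view, the inherited model also needs a matching field. A minimal sketch, assuming the field name matches the Categories alias above (which PostgreSQL folds to lowercase categories):

from odoo import fields, models

class CrmOpportunityReport(models.Model):
    _inherit = 'crm.opportunity.report'

    # read-only because the value comes from the SQL view's Categories column
    categories = fields.Char(string='Tags', readonly=True)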

Creating a Table(array) of Records

If I wanted to store records from two files into a table (an array of records), could I use a format similar to the code below, put both file names in the function definition, like def readTable(log1, log2):, and then run the same code for both log1 and log2 so that it builds a table1 and a table2?
def readTable(fileName):
    s = Scanner(fileName)
    table = []
    record = readRecord(s)
    while record != "":
        table.append(record)
        record = readRecord(s)
    s.close()
    return table
Just use *args, and get a list of records?
def readTable(*args):
    tables = []
    for filename in args:
        s = Scanner(filename)
        table = []
        record = readRecord(s)
        while record != "":
            table.append(record)
            record = readRecord(s)
        s.close()
        tables.append(table)
    return tables
This way you can pass log1, log2, log3 (any number of logs you like) and get back a list of tables, one per file.
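Hypothetical usage, with log1 and log2 as the file names from the question:

table1, table2 = readTable(log1, log2)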
Since readTable returns a list, if you want to concatenate the records from 2 logs, use the + operator.
readTable(log1) + readTable(log2)

What is an efficient way of inserting thousands of records into an SQLite table using Django?

I have to insert 8000+ records into a SQLite database using Django's ORM. This operation needs to be run as a cronjob about once per minute.
At the moment I'm using a for loop to iterate through all the items and then insert them one by one.
Example:
for item in items:
    entry = Entry(a1=item.a1, a2=item.a2)
    entry.save()
What is an efficient way of doing this?
Edit: A little comparison between the two insertion methods.
Without the commit_manually decorator (11245 records):
[nox@noxdevel marinetraffic]$ time python manage.py insrec
real    1m50.288s
user    0m6.710s
sys     0m23.445s
Using the commit_manually decorator (11245 records):
[nox@noxdevel marinetraffic]$ time python manage.py insrec
real    0m18.464s
user    0m5.433s
sys     0m10.163s
Note: The test script also does some other operations besides inserting into the database (downloads a ZIP file, extracts an XML file from the ZIP archive, parses the XML file) so the time needed for execution does not necessarily represent the time needed to insert the records.
You want to check out django.db.transaction.commit_manually.
http://docs.djangoproject.com/en/dev/topics/db/transactions/#django-db-transaction-commit-manually
So it would be something like:
from django.db import transaction

@transaction.commit_manually
def viewfunc(request):
    ...
    for item in items:
        entry = Entry(a1=item.a1, a2=item.a2)
        entry.save()
    transaction.commit()
This will commit only once, instead of at each save().
In Django 1.3 context managers were introduced, so now you can use transaction.commit_on_success() in a similar way:
from django.db import transaction

def viewfunc(request):
    ...
    with transaction.commit_on_success():
        for item in items:
            entry = Entry(a1=item.a1, a2=item.a2)
            entry.save()
In Django 1.4, bulk_create was added, allowing you to create lists of your model objects and then commit them all at once.
Note that the model's save() method will not be called when using bulk_create.
>>> Entry.objects.bulk_create([
... Entry(headline="Django 1.0 Released"),
... Entry(headline="Django 1.1 Announced"),
... Entry(headline="Breaking: Django is awesome")
... ])
In Django 1.6, transaction.atomic was introduced, intended to replace the now-legacy functions commit_on_success and commit_manually.
From the Django documentation on atomic:
atomic is usable both as a decorator:
from django.db import transaction

@transaction.atomic
def viewfunc(request):
    # This code executes inside a transaction.
    do_stuff()
and as a context manager:
from django.db import transaction

def viewfunc(request):
    # This code executes in autocommit mode (Django's default).
    do_stuff()

    with transaction.atomic():
        # This code executes inside a transaction.
        do_more_stuff()
Bulk creation is available in Django 1.4:
https://django.readthedocs.io/en/1.4/ref/models/querysets.html#bulk-create
Have a look at this. It's meant for use out-of-the-box with MySQL only, but there are pointers on what to do for other databases.
You might be better off bulk-loading the items - prepare a file and use a bulk load tool. This will be vastly more efficient than 8000 individual inserts.
To answer the question specifically with regard to SQLite, as asked: while I have just now confirmed that bulk_create does provide a tremendous speedup, there is a limitation with SQLite: "The default is to create all objects in one batch, except for SQLite where the default is such that at maximum 999 variables per query is used."
The quoted text is from the docs; A-IV provided a link.
What I have to add is that this djangosnippets entry by alpar also seems to be working for me. It's a little wrapper that breaks the big batch you want to process into smaller batches, managing the 999-variable limit.
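In more recent Django versions (1.5+, if I recall correctly), bulk_create also accepts a batch_size argument, which does the same splitting without a wrapper. A minimal sketch, reusing the question's Entry model:

entries = [Entry(a1=item.a1, a2=item.a2) for item in items]
# each INSERT carries at most 100 rows, staying well under SQLite's 999-variable cap
Entry.objects.bulk_create(entries, batch_size=100)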
You should check out DSE. I wrote DSE to solve these kinds of problems (massive inserts or updates). Using the Django ORM for this is a dead end; you have to do it in plain SQL, and DSE takes care of much of that for you.
Thomas
I recommend using plain SQL (not the ORM); you can insert multiple rows with a single insert:
insert into A select * from B;
The select * from B portion of your SQL can be as complicated as you want, as long as the results match the columns in table A and there are no constraint conflicts.
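If you would rather stay in Python while still batching at the SQL level, Django's raw cursor can do a parameterized multi-row insert. A rough sketch, with the table name myapp_entry assumed from the question's Entry model:

from django.db import connection

rows = [(item.a1, item.a2) for item in items]
with connection.cursor() as cursor:  # context-manager form needs Django 1.7+
    # one executemany call instead of one ORM save() per record
    cursor.executemany("INSERT INTO myapp_entry (a1, a2) VALUES (%s, %s)", rows)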
def order(request):
    if request.method == "GET":
        # get the values from the html page
        cust_name = request.GET.get('cust_name', '')
        cust_cont = request.GET.get('cust_cont', '')
        pincode = request.GET.get('pincode', '')
        city_name = request.GET.get('city_name', '')
        state = request.GET.get('state', '')
        contry = request.GET.get('contry', '')
        gender = request.GET.get('gender', '')
        paid_amt = request.GET.get('paid_amt', '')
        due_amt = request.GET.get('due_amt', '')
        order_date = request.GET.get('order_date', '')
        prod_name = request.GET.getlist('prod_name[]', '')
        prod_qty = request.GET.getlist('prod_qty[]', '')
        prod_price = request.GET.getlist('prod_price[]', '')
        try:
            # insert the customer information into the customer table
            cust_tab = Customer(customer_name=cust_name, customer_contact=cust_cont,
                                gender=gender, city_name=city_name, pincode=pincode,
                                state_name=state, contry_name=contry)
            cust_tab.save()
            # retrieve the id of the customer just created;
            # values_list().last() returns a tuple from the queryset
            custo_id = Customer.objects.values_list('customer_id').last()
            custo_id = int(custo_id[0])  # convert the tuple to an int
            # insert the order into the Orders table
            order_tab = Orders(order_date=order_date, paid_amt=paid_amt,
                               due_amt=due_amt, customer_id=custo_id)
            order_tab.save()
            # insert multiple products in one request using a while loop
            i = 0
            while i < len(prod_name):
                p_n = prod_name[i]
                p_q = prod_qty[i]
                p_p = prod_price[i]
                # only save rows where none of the fields are empty
                if p_n != "" and p_q != "" and p_p != "":
                    prod_tab = Products(product_name=p_n, product_qty=p_q,
                                        product_price=p_p, customer_id=custo_id)
                    prod_tab.save()
                i = i + 1
            return HttpResponse('Your Record Has been Saved')
        except Exception as e:
            return HttpResponse(e)
    return render(request, 'invoice_system/order.html')

outer join modelisation in django

I have a many-to-many relationship with some data in the joining table.
A basic version of my model looks like:
class FooLine(models.Model):
    name = models.CharField(max_length=255)

class FooCol(models.Model):
    name = models.CharField(max_length=255)

class FooVal(models.Model):
    value = models.CharField(max_length=255)
    line = models.ForeignKey(FooLine)
    col = models.ForeignKey(FooCol)
I'm trying to fetch every value for a certain line, with a null where the value is not present (basically, I'm trying to display the FooVal table with nulls for the values that haven't been filled in).
A typical SQL query would be:
SELECT value FROM FooCol LEFT OUTER JOIN
    (FooVal JOIN FooLine
     ON FooVal.line_id = FooLine.id AND FooLine.name = 'FIXME')
ON FooCol.id = col_id;
Is there any way to model the above query using the Django ORM?
Thanks
Outer joins can be viewed as a hack because SQL lacks "navigation".
What you have is a simple if-statement situation.
for line in someRangeOfLines:
    for col in someRangeOfCols:
        try:
            cell = FooVal.objects.get(col=col, line=line)
        except FooVal.DoesNotExist:
            cell = None
That's what an outer join really is -- an attempted lookup with a NULL replacement.
The only optimization is something like the following.
matrix = {}
for f in FooVal.objects.all():
    matrix[(f.line, f.col)] = f

for line in someRangeOfLines:
    for col in someRangeOfCols:
        cell = matrix.get((line, col), None)

SQLAlchemy session query with INSERT IGNORE

I'm trying to do a bulk insert/update with SQLAlchemy. Here's a snippet:
for od in clist:
    where = and_(Offer.network_id == od['network_id'],
                 Offer.external_id == od['external_id'])
    o = session.query(Offer).filter(where).first()
    if not o:
        o = Offer()
    o.network_id = od['network_id']
    o.external_id = od['external_id']
    o.title = od['title']
    o.updated = datetime.datetime.now()
    payout = od['payout']
    countrylist = od['countries']
    session.add(o)
    session.flush()
    for country in countrylist:
        c = session.query(Country).filter(Country.name == country).first()
        where = and_(OfferPayout.offer_id == o.id,
                     OfferPayout.country_name == country)
        opayout = session.query(OfferPayout).filter(where).first()
        if not opayout:
            opayout = OfferPayout()
            opayout.offer_id = o.id
        opayout.payout = od['payout']
        if c:
            opayout.country_id = c.id
            opayout.country_name = country
        else:
            opayout.country_id = 0
            opayout.country_name = country
        session.add(opayout)
        session.flush()
It looks like my issue was touched on here, http://www.mail-archive.com/sqlalchemy@googlegroups.com/msg05983.html, but I don't know how to use "textual clauses" with session query objects and couldn't find much (though admittedly I haven't had as much time as I'd like to search).
I'm new to SQLAlchemy, and I'd imagine there are some issues in the code besides the fact that it throws an exception on a duplicate key. For example, doing a flush after every iteration of clist (but I don't know how else to get the o.id value that is used in the subsequent OfferPayout inserts).
Guidance on any of these issues is very appreciated.
The way you should be doing these things is with session.merge().
You should also be using your objects' relationship properties: the o above should have an o.offerpayout list (of objects), and each offerpayout should have an offerpayout.country property holding the related Country object.
So the above would look something like:
for od in clist:
    o = Offer()
    o.network_id = od['network_id']
    o.external_id = od['external_id']
    o.title = od['title']
    o.updated = datetime.datetime.now()
    payout = od['payout']
    countrylist = od['countries']
    for country in countrylist:
        opayout = OfferPayout()
        opayout.payout = od['payout']
        country_obj = Country()
        country_obj.name = country
        opayout.country = country_obj
        o.offerpayout.append(opayout)
    session.merge(o)
session.flush()
This should work as long as all the primary keys are correct (i.e. the country table has a primary key on name). merge essentially checks the primary keys and, if they already exist, merges your object with the one in the database (it will also cascade down the joins).
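As for the INSERT IGNORE in the title: on MySQL, SQLAlchemy can emit it through prefix_with() on the insert construct. A minimal sketch, reusing the Offer model from the question:

stmt = Offer.__table__.insert().prefix_with('IGNORE').values(
    network_id=od['network_id'],
    external_id=od['external_id'],
    title=od['title'],
)
session.execute(stmt)  # MySQL silently skips rows that would violate a unique key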
