Accounting settings cannot be changed in Odoo 8 - python

I need to modify an option of the accounting configuration (menu Accounting > Configuration > Accounting).
As you know, those options belong to a Transient Model named account.config.settings, which inherits from res.config.settings.
The problem is that even if I modify no option and simply click on Apply, Odoo loads forever. I set the log to debug_sql mode and realised that after clicking on Apply, Odoo starts making thousands of SQL queries, which is why it never stops loading.
I made a database backup and restored it in a newer instance of Odoo 8. In this instance, when I click on Apply, Odoo makes several SQL queries, but not nearly as many as in the other instance, so it works perfectly.
My conclusion was that the problem could be in the instance code (not in the database), so I looked for all the modules inheriting from account.config.settings and reverted their repositories to the same commits as the broken instance (with git checkout xxx).
Afterwards I expected the newer instance to start failing when clicking on Apply, but it keeps working fine.
So I am running out of ideas. I am thinking about loading the backup database in the newer instance just to change the option I need, and then restoring it again in the older instance, but I would prefer to avoid that, since it seems a bit risky.
Any ideas? What else can I try to track down the problem?

I finally found the guilty module: account_due_list, from the account-payment repository of the Odoo Community Association (OCA). The commit which fixes the problem is https://github.com/OCA/account-payment/commit/d7a09399982c80bb0f9465c44b9dc2a2b17e557a#diff-57131fd364915a56cbf8696d74e19478, merged on September 22nd, 2016, and titled "check if currency id not changed per company, remove it from create values".
The computed field maturity_residual depended on company_id.currency_id. That dependency was the cause of the whole problem and had to be removed: presumably because every move line of a company shares company_id.currency_id, a write touching the company record triggered a recompute on all of them, producing thousands of SQL queries and making Odoo load forever.
Old and wrong code
@api.depends('date_maturity', 'debit', 'credit', 'reconcile_id',
             'reconcile_partial_id', 'account_id.reconcile',
             'amount_currency', 'reconcile_partial_id.line_partial_ids',
             'currency_id', 'company_id.currency_id')
def _maturity_residual(self):
    ...
New and right code
@api.depends('date_maturity', 'debit', 'credit', 'reconcile_id',
             'reconcile_partial_id', 'account_id.reconcile',
             'amount_currency', 'reconcile_partial_id.line_partial_ids',
             'currency_id')
def _maturity_residual(self):
    ...
I find it very risky to update repositories to the latest version, for exactly the reason @CZoellner gives: sometimes there are weird commits which can destroy database data. So, these are the consequences of not doing that.

Related

SQLAlchemy Repeated Commit()

I've run into a really frustrating bug, but I'm not sure exactly how to phrase my question. The core behavior seems to be as follows:
1) Create a new db.session, bound to an existing PostgreSQL database
2) Run db.session.add(myObj)
3) Run db.session.commit()
>>> Check the database using PGAdmin, myObj was successfully uploaded
4) *
5) Run db.session.query(myClass) as many times as I want
>>> Returns [myObj]
6) Run db.session.query(myClass).filter(anyFilterThatDoesNotActuallyChangeResult)
>>> Returns [myObj]
>>> BUG >>> 5 seconds later, another copy of myObj is added to the database (visible in PGAdmin)
7) Repeat step 6 as many times as you want
>>> Returns [myObj, myObj]
8) Repeat step 5
>>> Returns [myObj, myObj]
>>> BUG >>> 5 seconds later, another copy of myObj is added to the database (visible in PGAdmin)
Further confusing information: I can completely close and restart my text editor and python environment at step 4, and the buggy behavior persists.
My intuition is that the COMMIT string is somehow being cached somewhere (in SQLAlchemy or in PostgreSQL) and whenever the query command is changed, that triggers some sort of autoflush on the DB, thereby rerunning the commit string, but not actually clearing that cache upon success.
----------------- EDIT -----------------
IGNORE THE REST OF THIS QUESTION, AS IT WAS NOT RELEVANT TO THE BUG AT HAND.
To further explore this behavior, I ran the following code:
1) Create a new db.session, bound to an existing PostgreSQL database
2) Run db.session.add(myObj)
3) Run db.session.commit()
4) Run db.session.commit()
I would expect this to add only ONE copy of myObj, but instead it actually adds TWO!!! This breaks my understanding of what commit is doing--specifically autoflushing, ending the transaction, and removing add(myObj) from its "to do" list. Furthermore, none of the code I try running between steps 3 and 4 prevents this behavior: for example, db.session.expire_all().
I am a complete noob around databases (this is my first project), so I would appreciate any suggestions, especially explicit step-by-step recommendations for how I can overcome this bug. E.g. What code should I add in, and where, to clear such a cache?
It turns out that the problem was more nefarious than I imagined. The steps to reproduce were actually more basic than that:
1) Save any file in the same directory as my session manager
>>> BUG >>> 15 seconds later another copy of myObj is added to the database
I am using VS Code (Version: 1.47.3), and the bug only happens while the Python extension is enabled.
My running hypothesis is that because one of the files in the directory auto-initializes a database session (via psycopg2), there is some caching mechanism that executes that code in a poorly-managed state, which somehow manages to establish a new engine connection and then replay whatever the last commit statement was.
I have stopped trying to debug it and have instead refactored the session management structure so that the connection is only established within a function call, as opposed to whenever the file is run, as in the sketch below.
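A minimal sketch of that refactor, with all names hypothetical: nothing connects at import time, so importing or re-running the module has no side effects.

# session_manager.py -- hypothetical sketch; connection details are made up.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

_engine = None

def get_session():
    # Create the engine lazily, only when a session is actually requested.
    global _engine
    if _engine is None:
        _engine = create_engine("postgresql://user:password@localhost/mydb")
    return sessionmaker(bind=_engine)()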
Thanks for reading. Hope this helps someone else hitting this infuriatingly unreproducible bug. Literally thought I was going crazy: each time I absent-mindedly saved my file, a mystery object would appear. I would save at different points and at different frequencies, so the behavior appeared utterly random. The only reason I found the original steps to reproduce was because the debugger I was using saved the file before running it.
----------------- FINAL SOLUTION -----------------
It turns out the root of all my woes was my choice of names.
I had written some code that tested my sql code, but foolishly named it test_XXX.py
Then, whenever any file was saved, pytest would do an automatic sweep of all the files whose names start with test_ and run them, thus causing my entire SQL example code to run behind the scenes.
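For reference, pytest collects any file matching test_*.py (or *_test.py) by default. A small sketch of how collection can be narrowed, assuming a hypothetical dedicated tests/ directory:

# pytest.ini -- restrict pytest to a dedicated test directory
[pytest]
testpaths = tests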
Tune in next week for more adventures in Things That I Could Have Prevented.

Django - Best way to create snapshots of objects

I am currently working on a Django 2+ project involving a blockchain, and I want to make copies of some of my object's states into that blockchain.
Basically, I have a model (say "contract") that has a list of several "signature" objects.
I want to make a snapshot of that contract, with the signatures. What I am basically doing is taking the contract at some point in time (when it's created for example) and building a JSON from it.
My problem is: I want to update that snapshot anytime a signature is added/updated/deleted, and each time the contract is modified.
The intuitive solution would be to override each "delete", "create" and "update" of each of the models involved in that snapshot, and pray that all of them are implemented right and that I didn't forget any. But I think this is not scalable at all, and hard to debug and to maintain.
I have thought of a solution that might be more centralized: using a periodic job to get the last update date of my object, compare it to the date of my snapshot, and update the snapshot if necessary.
However with that solution, I can identify changes when objects are modified or created, but not when they are deleted.
So, this is my big question mark: how, with Django, can you identify deletions in relationships, without any prior context, just by looking at the current database state? Is there a Django module to record deleted objects? What are your thoughts on my issue?
All right?
I think that, as I understand your problem, you need something like Django Signals, which listens for changes made through your models and, when a change is identified (and all the desired conditions are met), executes certain commands in your application (it can even act on the database).
This is the most recent documentation:
https://docs.djangoproject.com/en/3.1/topics/signals/
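A minimal sketch of that approach, assuming hypothetical Contract and Signature models where each signature holds a ForeignKey to its contract, and where Contract has a (hypothetical) rebuild_snapshot() method that regenerates the JSON:

# signals.py -- hypothetical sketch built on the models described above
from django.db.models.signals import post_save, post_delete
from django.dispatch import receiver
from .models import Signature

@receiver(post_save, sender=Signature)
@receiver(post_delete, sender=Signature)
def refresh_contract_snapshot(sender, instance, **kwargs):
    # post_delete fires once per deleted row, so deletions are caught here
    # even though they leave no trace in the current database state.
    instance.contract.rebuild_snapshot()

One caveat worth noting: bulk operations such as QuerySet.update() bypass save() and do not emit these signals, so such code paths would still need separate handling.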

Nifi - nipyapi - Updating processor's sensitive variables

I keep getting the following error while running the nipyapi.canvas.update_variable_registry(versionedPG, variable) API from nipyapi.
Do I need to refresh the flow before making this call? Is there any nipyapi call to do that?
I referred to the following link, https://community.cloudera.com/t5/Support-Questions/NIFI-processor-not-the-most-up-to-date/m-p/158171, which states that if you are modifying the component from 2 different places, then you could see these errors. But in my case, I am only running Python code to modify and update the processor & components.
Also, what does the 5 in the error below mean?
ERROR:main:[5, null, 0d389912-2f27-31da-d5d2-f399556fb35e] is not the most up-to-date revision. This component appears to have been modified
How to get the most up-to-date revision of the processor ?
Well, it seems like update_variable_registry is not the right way to update those variables.
Judging from an examination of the NiFi HTTP logs, you have to:
Create an update request through a POST. This is done using submit_update_variable_registry_request(...)
Wait for completion through a GET of this update request. This is done using get_update_request(...)
Finally, DELETE the update request. This is done using delete_update_request(...)
After having tried that, it seems like only the first part is really needed; parts 2 and 3 may just be elements of the UI refresh ...
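A rough sketch of that three-step sequence. The method names are the ones given above, assumed here to live on nipyapi.nifi.ProcessGroupsApi; the argument and attribute names are assumptions that may differ between nipyapi versions.

# Hypothetical sketch of the POST / GET / DELETE flow described above.
import time
import nipyapi

pg_api = nipyapi.nifi.ProcessGroupsApi()

# 1) POST: open an update request for the process group's variable registry
request = pg_api.submit_update_variable_registry_request(versionedPG.id, update_body)

# 2) GET: poll the update request until it reports completion
while not request.request.complete:
    time.sleep(1)
    request = pg_api.get_update_request(versionedPG.id, request.request.request_id)

# 3) DELETE: clean up the finished update request
pg_api.delete_update_request(versionedPG.id, request.request.request_id)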
This is resolved in version 0.13.3 of NiPyAPI (see the project on GitHub).

Updating files with a Perforce trigger before submit

I understand that this question has, in essence, already been asked, but that question did not have an unequivocal answer, so please bear with me.
Background: In my company, we use Perforce submission numbers as part of our versioning. Regardless of whether this is a correct method or not, that is how things are. Currently, many developers do separate submissions for code and documentation: first the code and then the documentation to update the client-facing docs with what the new version numbers should be. I would like to streamline this process.
My thoughts are as follows: create a Perforce trigger (which runs on the server side) that scans the submitted documentation files (such as .txt) for a unique term (such as #####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####) and then replaces it with the value the changelist will have when submitted. I already know how to determine this value. What I cannot figure out is how or where to update the files.
I have already determined that using the change-content trigger (whether possible or not), which
"fire[s] after changelist creation and file transfer, but prior to committing the submit to the database",
is the way to go. At this point the files need to exist somewhere on the server. How do I determine the (temporary?) location of these files from within, say, a Python script, so that I can update them or use sed to replace the placeholder value with the intended value? The online documentation for Perforce which I have found so far has not been very explicit on whether this is possible, or on how the mechanics of a submission work at this stage.
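For context, a change-content trigger can at least read the in-flight file content through the server using the @=change revision specifier, even though the files never appear as editable temporary files. A hypothetical sketch (the depot path and trigger wiring are made up):

# change-content trigger sketch: read pending file content with @=change.
# The changelist number is passed in by the trigger spec (e.g. %changelist%).
import subprocess
import sys

change = sys.argv[1]
content = subprocess.check_output(
    ["p4", "print", "-q", "//depot/docs/release_notes.txt@=" + change])
# The content can be inspected here; whether it can be rewritten at this
# stage is exactly what the answers below address.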
EDIT
Basically what I am looking for is RCS-like functionality, but without the unsightly special character sequences which accompany it. After more digging, what I am asking is the same as this question. However, I believe that this must be possible, because the trigger runs on the server side and the files have already been transferred to the server. They must therefore be accessible to the script.
EXAMPLE
Consider the following snippet from a release notes document:
[#####PERFORCE##CHANGELIST##NUMBER###ROFL###LOL###WHATEVER#####] Added a cool new feature. Early retirement is in sight.
[52702] Fixed a really annoying bug. Many lives saved.
[52686] Fixed an annoying bug.
This is what the user submits. I then want the trigger to intercept this file during the submission process (as mentioned, at the change-content stage) and alter it so that what is eventually stored within Perforce looks like this:
[52738] Added a cool new feature. Early retirement is in sight.
[52702] Fixed a really annoying bug. Many lives saved.
[52686] Fixed an annoying bug.
Where 52738 is the final changelist number of what the user submitted. (As mentioned, I can already determine this number, so please do not dwell on this point.) I.e., what the user sees on the Perforce client console is:
Changelist 52733 renamed 52738.
Submitted change 52738.
Are you trying to replace the content of pending changelist files that were edited on a different client workspace (and different user)?
What type of information are you trying to replace in the documentation files? For example,
is it a date or username, as with RCS keyword expansion? http://www.perforce.com/perforce/doc.current/manuals/p4guide/appendix.filetypes.html#DB5-18921
I want to get better clarification on what you are trying to accomplish in case there is another way to do what you want.
Depending on what you are trying to do, you may want to consider shelving ( http://www.perforce.com/perforce/doc.current/manuals/p4guide/chapter.files.html#d0e5537 )
Also, there is an existing Perforce enhancement request I can add your information to,
regarding client-side triggers that modify files on the client side prior to submit. If it is implemented, you will be notified by email.
99w,
I have also added you to an existing enhancement request for Customizable RCS keywords, along
with the example you provided.
Short of using a post-command trigger to edit the archive content directly and then update the checksum in the database, there is currently no way to update the file content with the custom-edited final changelist number.
One of the things I learned very early on in programming was to keep out of interrupt level as much as possible, and especially don't do stuff in interrupt that requires resources that can hang the system. I totally get that you want to resolve the internal labeling in sequence, but a better way to do it may be to just set up the edit during the trigger so that a post trigger tool can perform the file modification.
Correct me if I'm looking at this wrong, but there seems to be a bit of irony, or perhaps recursion, if you are trying to make a file change during the course of submitting a file change. It might be better to have a second changelist that is reserved for the log. You always know where that file is, in your local file space. That said, ktext files and $ fields may be able to help.
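To illustrate that last point: with Perforce's standard keyword expansion (filetype text+k, a.k.a. ktext), a stored line containing $Change$ is expanded on sync to the changelist number the file was submitted under, e.g.:

[$Change: 52738 $] Added a cool new feature. Early retirement is in sight.

This gets close to the desired behavior, though with exactly the "unsightly special character sequences" the question was hoping to avoid.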

Conflict resolution in ZODB

I run parallel write requests against my ZODB, which contains multiple BTree instances. When the server accesses the same objects inside such a BTree, I get a ConflictError for the IOBucket class. For all my Django-based classes I have _p_resolveConflict set up, but I can't implement it for IOBucket because it is a C-based class.
I did a deeper analysis, but I still don't understand why it complains about the IOBucket class, or what it is writing into it. Also, what would be the right strategy to resolve this?
A thousand thanks for any help!
IOBucket is part of the persistence structure of a BTree; it exists to try and reduce conflict errors, and it does try and resolve conflicts where possible.
That said, conflicts are not always avoidable, and you should restart your transaction. In Zope, for example, the whole request is re-run up to 5 times if a ConflictError is raised. Conflicts are ZODB's way of handling the (hopefully rare) occasion where two different requests tried to change the exact same data structure.
Restarting your transaction means calling transaction.begin() and applying the same changes again. The .begin() will fetch any changes made by the other process and your commit will be based on the fresh data.
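A minimal retry sketch along those lines; apply_changes and root are hypothetical stand-ins for your own modification code:

import transaction
from ZODB.POSException import ConflictError

for attempt in range(5):  # Zope-style: give up after 5 conflicts
    try:
        transaction.begin()    # start fresh, picking up other processes' changes
        apply_changes(root)    # hypothetical: re-apply your modifications
        transaction.commit()
        break
    except ConflictError:
        transaction.abort()    # discard this attempt and retry on fresh data
else:
    raise RuntimeError("Could not commit after 5 attempts")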
