OpenERP module update fails - Python

I am in the process of upgrading a module I wrote for OpenERP. It works fine on my local machine and local OpenERP server, but when I push the files to the staging server via SVN and try to update the module, I get the error below. The error shows the server trying to insert a record into the database when it should actually be an update, not an insertion. I suspect that removing the conflicting record directly from the Postgres DB might do the trick, but I am worried about doing that.
I also removed all the existing files before running the SVN update on the staging server. Maybe that was the pitfall, but I am not quite sure. Let me know what you think is the best solution for this problem. Below are the error messages shown by the OpenERP server when it is restarted after the SVN update. The server hangs at this point and never finishes loading.
But as soon as I revert or remove the files and update, the server works like a charm.
module abc: loading objects
[2011-09-14 08:12:49,425][oe_test] INFO:init:module abc:registering objects
[2011-09-14 08:12:49,432][oe_test] INFO:init:module abc: creating or updating database tables
[2011-09-14 08:12:49,434][oe_test] DEBUG:sql:bad query: INSERT INTO ir_model_data (name,date_init,date_update,module,model,res_id) VALUES (E'model_abc', now(), now(), E'abc', E'ir.model', 301)
[2011-09-14 08:12:49,434][oe_test] DEBUG:sql:('model_abc', u'abc', 'ir.model', 301)
[2011-09-14 08:12:49,434][oe_test] DEBUG:sql:duplicate key value violates unique constraint "ir_model_data_module_name_uniq"
Regards,
Gayan

[2011-09-14 08:12:49,434][oe_test] DEBUG:sql:duplicate key value violates unique constraint "ir_model_data_module_name_uniq"
In ir.model.data there is an _sql_constraints entry that enforces unique record names. The error comes from that constraint and means you cannot have a duplicate record name.
In my experience, this kind of error occurs because of a duplicate record id in your *_data.xml file.
Note: also check whether noupdate="True" is set in your *_data.xml file.
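To see which record already owns the conflicting (module, name) pair, you can query ir_model_data directly. A diagnostic sketch (connection details are placeholders; the module and record name come from the log above):

import psycopg2

# Connect to the OpenERP database (credentials are placeholders).
conn = psycopg2.connect(dbname="oe_test", user="openerp", password="openerp")
cur = conn.cursor()
# Find the row that already holds the (module, name) pair and triggers
# the ir_model_data_module_name_uniq constraint.
cur.execute(
    "SELECT id, module, name, model, res_id FROM ir_model_data "
    "WHERE module = %s AND name = %s",
    ("abc", "model_abc"),
)
print(cur.fetchall())
cur.close()
conn.close()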

After chasing the above problem I was able to figure out the actual cause and overcome the issue. The underlying problem was that I had another module which accidentally carried the same name. Because of this, the conflicting exception above occurred. Finally I changed the module name and the model names, and the problem was sorted.
Thanks for all the inputs.
Regards,
Gayan

Did you try to launch the server with -u your_module_name -d your_db_name?

Related

Is it possible to auto-repair (on the fly) a corrupted Django MySQL table upon exception?

I am using Django with MySQL as part of a product. Occasionally a table gets corrupted, and when accessing it I get the exception:
InternalError: (145, "Table 'some_table' is marked as crashed and should be repaired")
I then need to run an SQL script that uses the REPAIR TABLE command to fix the issue.
My questions are:
Is there a Django mechanism that will detect this issue, run "REPAIR TABLE some_table", print a notification, and then retry the operation that failed?
If not, is it reasonable to put a decorator on Django interface functions like filter, save, etc. (see the sketch below)?
Of course, if the operation fails again after the repair, I will want to print something rather than run the DB operation again.
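Roughly, I imagine something like this untested sketch (the table name is hard-coded as a placeholder, and 145 is the "marked as crashed" error code from the exception above):

import functools

from django.db import connection
from django.db.utils import InternalError

def repair_and_retry(func):
    """Retry a DB operation once after repairing a crashed table."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except InternalError as e:
            if e.args[0] != 145:  # 145: table is marked as crashed
                raise
            print("Table crashed, running REPAIR TABLE some_table")
            with connection.cursor() as cursor:
                cursor.execute("REPAIR TABLE some_table")
            # Retry once; if it fails again, let the exception propagate.
            return func(*args, **kwargs)
    return wrapper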
I would appreciate any answer, especially one with an elaborated Python example.
Thanks in advance.
I appreciate that this would be an incredibly DevOps feature to have, but I dare say that if your table is randomly getting corrupted and the extent of your caring is to run REPAIR TABLE some_table; when it errors out, you should probably skip straight to
ALTER TABLE some_table ENGINE=BLACKHOLE;
That will be a lot more robust on an otherwise unstable system.

Having issues connecting to PostgreSQL Database in Django

I am getting the error
django.db.utils.OperationalError: FATAL: database "/path/to/current/project/projectname/databasename" does not exist.
I have accessed the database both manually through psql and through pgAdmin 4, and have verified in both cases that the database does exist, and I have verified that the port is correct.
I'm not sure why I can't access the database, or why it would say the database cannot be found.
According to pgAdmin4, the database is healthy, and it is receiving at least 1 I/O per second, so it can be read and written to by...something?
I have installed both the psycopg2 and the psycopg2-binary just to be safe.
I figured out the answer, or at least I believe I did. It was a two-part problem.
Part of it was that I had left os.path.join(base_dir...) in the 'NAME' section of the database settings.
The other part was that I used a "#" character in my password. Once I changed the password and removed the os.path.join(base_dir...) portion, it worked.
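For anyone else hitting this, here is a sketch of what the corrected DATABASES entry looks like (all values are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'databasename',  # a plain database name, not a filesystem path
        'USER': 'dbuser',
        'PASSWORD': 'secret',    # my old password contained '#', which was part of the problem
        'HOST': 'localhost',
        'PORT': '5432',
    }
}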

In a Flask application that is using SQLAlchemy, is it ok to permanently put `session.rollback` at the beginning of the app?

I'm new to Flask and web development in general. I have a Flask web application that uses SQLAlchemy. Is it OK to put session.rollback at the beginning of the app in order to keep it running even after a transaction fails?
I had a problem with my website: it stopped working after I attempted to delete records from one table. The error log showed that the deletion failed because entries in another table still referenced those records through a foreign key. The error log suggested using session.rollback to roll back the change, so I put it at the beginning of my app, just after binding my database and creating the session, and my website worked again. That gave me the hint to leave the line there. Is this right, safe, and OK? Can anyone tell me the correct thing to do if this endangers the functionality or logic of my website in any way?
I'd say that you are, by definition, cargo cult coding, and you should try to determine why you're getting these errors in the first place instead of just including a bit of code for a reason you don't understand.
The problem you're describing is the result of using foreign keys to ensure data integrity in your database. Typically SQLAlchemy will nullify all of the dependent foreign keys, but since I don't know anything about your setup I can't explain why it didn't. It is perhaps a difference between databases.
One massive problem with putting the rollback at the beginning of a route (or the entire global app) is that you might roll back data you didn't want to. You haven't provided an MVCE, so no one can really help you debug the underlying problem.
Cargo cult coding in circumstances like this is understandable, but it's never good practice. To solve this problem, investigate cascades in SQLAlchemy. Also, fire up your actual SQL database interface and look at the data's structure, and set SQLALCHEMY_ECHO = 1 in your config file to see what SQL is actually being emitted.
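For instance, a minimal sketch, assuming Flask-SQLAlchemy reads its settings from a config object:

class Config:
    SQLALCHEMY_DATABASE_URI = "postgresql:///mydb"  # placeholder URI
    SQLALCHEMY_ECHO = True  # log every SQL statement SQLAlchemy emits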
Good luck!
You should not use the rollback at the beginning, but only when a database operation fails.
The error is due to an integrity constraint in your database: some rows in your table are referenced by another table, so you have to remove the referencing rows first.
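A minimal sketch of that pattern (the engine URL and the record being deleted are placeholders):

from sqlalchemy import create_engine
from sqlalchemy.exc import IntegrityError
from sqlalchemy.orm import sessionmaker

engine = create_engine("sqlite:///app.db")  # placeholder connection string
Session = sessionmaker(bind=engine)
session = Session()

def delete_record(record):
    # Roll back only when the operation actually fails, not at app start.
    try:
        session.delete(record)
        session.commit()
    except IntegrityError:
        session.rollback()  # leave the session usable for later operations
        raise  # or report the foreign-key violation to the user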

Django ProgrammingError must appear in the GROUP BY clause or be used in an aggregate function

Given any basic model, say:
from django.db import models
from django.utils.translation import ugettext_lazy as _

class Post(models.Model):
    created = models.DateTimeField(auto_now_add=True)
    title = models.CharField(_('Title'), max_length=100)
    content = models.TextField(_('Content html'), max_length=65000)
    # SET_NULL requires the field to be nullable
    author = models.ForeignKey('user.User', null=True, on_delete=models.SET_NULL)
A query like Post.objects.annotate(Count('id')) (or any field, any annotate()) fails with the following error:
ProgrammingError: column "post.created" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: SELECT "post"."id", "post"."created", "post"."ti...
Using Django 1.11.16 and Postgres 9.4.19.
As I read in another Stack Overflow question, I tried different Django and Postgres versions: Django 2.0 and 2.1.2, Postgres 9.5... same error! Reading around, I've seen that this might be a problem at the SQL level, but I'm having it only on one server running Ubuntu 18.04 (Bionic). Running the query locally on Ubuntu 16.04 with Django 1.11.16 (or any version above that) and Postgres 9.4 or above works fine. So the problem might actually be related to some low-level libraries. I'm not running complex queries; any simple annotate() with Django 1.11+ fails on Ubuntu 18.04 with Postgres 9.4 or 9.5.
[UPDATE]
This might be useful if you find yourself in this scenario with no clue what's going on: verify that the table in question has its indexes created. My problem turned out to be that the posts table had no PRIMARY KEY definition, nor any other constraint: a failed pg_restore had restored all the data but only some of the schema definitions (yes, you read that right: some schema definitions were missing, no idea why). Instead of trying to debug what happened with pg_restore in the first place, I ran an initial python manage.py migrate on an empty DB so the schema was created correctly this time, verified it (psql -d <db_name> -c '\d posts'), then ran pg_restore again with the --data-only and --disable-triggers flags. With the schema and data properly restored, the query worked.
That error message appears because PostgreSQL will not guess what to do with non-grouped columns when there is an aggregate function in the query. This is one of the cases where the Django ORM handwaves too much and lets us shoot ourselves in the foot.
I ran a test on my projects, one using Django 2.1 and another using 1.11, with Model.objects.annotate(Count('id')) and had no issues. If you post the full QuerySet, people will be able to help you further.
I met the same problem and solved it: add an .order_by(...) call at the end of your queryset. An explicit order_by() replaces the model's default Meta ordering, whose columns would otherwise be pulled into the GROUP BY clause.
before:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution'))
after:
search_models.Author.objects.values("first_institution", 'institution_code').annotate(counts=Count('first_institution')).order_by('-counts')
It worked for me; I hope it helps you too.

cx_Oracle.DatabaseError: ORA-14411

I ran an ALTER TABLE query to add some columns to a table and then called db.commit(). That didn't raise any error or warning, but in Oracle SQL Developer the new columns don't show up in a SELECT * ....
So I tried to rerun the ALTER TABLE but it raised
cx_Oracle.DatabaseError: ORA-14411: The DDL cannot be run concurrently with other DDLs
That kinda makes sense (I can't create columns that already exist), but when I try to fill the new column with values, I get the message
SQL Error: ORA-00904: "M0010": invalid ID
00904. 00000 - "%s: invalid identifier"
which suggests that the new column has not been created yet.
Does anybody understand what may be going on?
UPDATE/SOLVED: I kept trying to run the queries a couple more times and at some point things suddenly started working (for no apparent reason). Maybe processing time? That would be weird, because the queries are ultra light. I'll get back to this if it happens again.
First, you don't need commit; DDL effectively commits any transaction.
ORA-14411 means:
Another conflicting DDL was already running.
So it seems that your first ALTER TABLE statement hadn't finished yet (probably the table is too big, or there is some other issue).
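For completeness, a minimal cx_Oracle sketch (credentials and table name are placeholders; the column name comes from the ORA-00904 message above). No commit is needed after the DDL:

import cx_Oracle

conn = cx_Oracle.connect("user/password@dsn")  # placeholder credentials
cur = conn.cursor()
# DDL commits implicitly, so no conn.commit() is needed afterwards.
cur.execute("ALTER TABLE my_table ADD (m0010 NUMBER)")  # hypothetical table and type
cur.close()
conn.close()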
