The models.py for my table is:
class TestCaseGlobalMetricData(models.Model):
    testRunData = models.ForeignKey(TestRunData)
    testCase = models.CharField(max_length=200)
    cvmTotalFree = models.IntegerField(default=0)
    systemFree = models.IntegerField(default=0)
    sharedMemory = models.IntegerField(default=0)
And the part of the code that uses this table is:
TestCaseGlobalMetricData(testRunData=testRunDataObj,
                         testCase=tokens['tc_name'],
                         timestamp=tokens['timestamp'],
                         cvmTotalFree=totalFree).save()
When this line is executed, the following warning is raised:
File "/web/memmon/eink-memmon/projects/process.py", line 382, in process_cvm_usage
cvmTotalFree=totalFree).save()
.....
Warning: Field 'sharedMemory' doesn't have a default value
I tried inserting into the table using python manage.py shell, and it works fine.
The schema of the table, as seen in MySQL, is:
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| testRunData_id | int(11) | NO | MUL | NULL | |
| testCase | varchar(200) | NO | | NULL | |
| cvmTotalFree | int(11) | NO | | NULL | |
| systemFree | int(11) | NO | | NULL | |
| sharedMemory | int(11) | NO | | NULL | |
+-----------------+--------------+------+-----+---------+----------------+
Note:
The field sharedMemory was added to models.py today. I ran the proper migrations with the makemigrations and migrate commands.
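For what it's worth, Django applies default=0 on the Python side when the object is constructed; it does not emit a DEFAULT clause into the MySQL DDL, which is why the schema above shows Default NULL for sharedMemory. A defensive sketch that names every NOT NULL column explicitly (the timestamp argument from the original call is left out, since the model shown does not define that field):

TestCaseGlobalMetricData(testRunData=testRunDataObj,
                         testCase=tokens['tc_name'],
                         cvmTotalFree=totalFree,
                         systemFree=0,     # explicit values rather than relying
                         sharedMemory=0).save()  # on the Python-level defaults

If the warning persists even with the fields passed explicitly, the running process is likely still loaded with the old model definition and needs a restart.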
SQLAlchemy isn't respecting default=datetime.datetime.utcnow or default=func.now() (I've tried both) for DateTime columns.
Python:
class DataStuff(BASE):
    """some data"""
    __tablename__ = 'datastuff'
    _id = Column(Integer, primary_key=True)
    _name = Column(String(64))
    json = Column(sqlalchemy.UnicodeText())
    _timestamp = Column(DateTime, default=datetime.datetime.utcnow)
MySQL:
mysql> describe datastuff;
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| _id | int(11) | NO | PRI | NULL | auto_increment |
| _name | varchar(64) | YES | | NULL | |
| json | text | YES | | NULL | |
| _timestamp    | datetime     | YES  |     | NULL    |                |
+---------------+--------------+------+-----+---------+----------------+
According to the documentation, using server_default is the way to go when the desired behavior is to provide a default value on a column within the CREATE TABLE statement:
http://docs.sqlalchemy.org/en/rel_1_0/core/defaults.html
I ended up using server_default with the func.now() function instead of default=datetime.datetime.utcnow.
New code:
from sqlalchemy import func
_timestamp = Column(DateTime, server_default=func.now())
Result:
+---------------+--------------+------+-----+-------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+-------------------+----------------+
| _timestamp    | datetime     | YES  |     | CURRENT_TIMESTAMP |                |
+---------------+--------------+------+-----+-------------------+----------------+
voilà
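For contrast, default=datetime.datetime.utcnow still works, but only on the Python side: SQLAlchemy computes the value at INSERT time when rows go through the ORM, so the column default in MySQL stays NULL. A short sketch showing both styles side by side (_created is a hypothetical column name used only for the comparison):

import datetime
from sqlalchemy import Column, DateTime, func

# Python-side default: filled in by SQLAlchemy when inserting via the ORM;
# the column itself has no default in MySQL.
_created = Column(DateTime, default=datetime.datetime.utcnow)

# Server-side default: emitted into CREATE TABLE, so MySQL fills the value
# even for rows inserted outside SQLAlchemy.
_timestamp = Column(DateTime, server_default=func.now())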
This seems to be the most obscure issue I have dealt with so far. I am struggling hard.
I have an application that collects remote data from some hardware and saves it to a database. I renamed one of the columns using alembic. I have a development database and a testing database. Both are on the same server (MySQL via MariaDB on CentOS 7).
I interact with the development database via the app running on my laptop, and with the testing database via the app running on a clone of the production server.
All servers and databases are set up using Ansible, so differences are limited to usernames and passwords.
Below is the alembic snippet from the upgrade script:
def upgrade():
    op.alter_column('units', 'ip_address', new_column_name='ipv4',
                    existing_type=sa.String(100))
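(For completeness, the matching downgrade just renames the column back; a sketch assuming the same type:)

def downgrade():
    # Reverse of upgrade(): rename ipv4 back to ip_address.
    op.alter_column('units', 'ipv4', new_column_name='ip_address',
                    existing_type=sa.String(100))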
If I run the app from my laptop (using my IDE), the data is saved ok.
If I run the script on the testing server manually, with (env)$ python app.py, the data is saved ok.
But, here's the problem: if I run the script using supervisord, I get an SQLAlchemy error (excerpts below).
...
File "thefile.py", line 64, in run
self.data['my_wellsites'][w.name]['ipv4'] = wellsite.unit.ipv4
...
sqlalchemy.exc.InternalError: (InternalError) (1054, u"Unknown column 'units.ipv4' in 'field list'") 'SELECT units.id AS units_id, units.unittype_id AS units_unittype_id, units.serial AS units_serial, units.ipv4 AS units_ipv4, units.mac_address AS units_mac_address, units.engine_hours AS units_engine_hours, units.gen_odometer AS units_gen_odometer, units.gen_periodic AS units_gen_periodic, units.notes AS units_notes \nFROM units \nWHERE units.id = %s' (1,)
...
SQLAlchemy models:
class Unit(Model):
    __tablename__ = 'units'
    __table_args__ = {'mysql_engine': 'InnoDB'}
    id = Column(Integer, Sequence('unit_id_seq'), primary_key=True)
    wellsites = relationship('Wellsite', order_by='Wellsite.id', backref='unit')
    ipv4 = Column(String(16), unique=True)
    ...

class Wellsite(Model):
    __tablename__ = 'wellsites'
    __table_args__ = {'mysql_engine': 'InnoDB'}
    id = Column(Integer, Sequence('wellsite_id_seq'), primary_key=True)
    unit_id = Column(Integer, ForeignKey('units.id'), nullable=False)
    ...
/etc/supervisord.conf:
...
[program:datacollect]
command = /home/webdev/mydevelopment/git/ers_data_app/env/bin/python /home/webdev/mydevelopment/git/ers_data_app/data_monitoring/collection_manager.py
stdout_logfile=/home/webdev/logs/datacollect.log
stderr_logfile=/home/webdev/logs/datacollecterr.log
autostart=true
autorestart=unexpected
startsecs=10
I tried running an additional alembic upgrade with:
op.create_unique_constraint('uq_ipv4', 'units', ['ipv4'])
No dice.
The traceback is identical (compared with a diff program) except for the timestamps.
Here are the two database descriptions of the units table (identical):
MariaDB [ers_DEV]> show columns from units;
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| unittype_id | int(11) | YES | MUL | NULL | |
| serial | varchar(10) | YES | UNI | NULL | |
| ipv4 | varchar(100) | YES | UNI | NULL | |
| mac_address | varchar(17) | YES | UNI | NULL | |
| engine_hours | int(11) | YES | | NULL | |
| gen_odometer | tinyint(1) | YES | | NULL | |
| gen_periodic | tinyint(1) | YES | | NULL | |
| notes | mediumtext | YES | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
MariaDB [ers_TEST]> show columns from units;
+--------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| unittype_id | int(11) | YES | MUL | NULL | |
| serial | varchar(10) | YES | UNI | NULL | |
| ipv4 | varchar(100) | YES | UNI | NULL | |
| mac_address | varchar(17) | YES | UNI | NULL | |
| engine_hours | int(11) | YES | | NULL | |
| notes | mediumtext | YES | | NULL | |
| gen_odometer | tinyint(1) | YES | | NULL | |
| gen_periodic | tinyint(1) | YES | | NULL | |
+--------------+--------------+------+-----+---------+----------------+
The issue was at the top of the /etc/supervisord.conf file. In it, an environment variable was set that pointed the script at the wrong database, overriding the variable that was set and examined everywhere else. This environment variable is only set when the script is run from supervisord, and that was what was causing the trouble.
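To illustrate the failure mode (the variable name here is hypothetical, not taken from the original config):

import os

# If /etc/supervisord.conf contains a line like
#
#     environment=APP_DB_URI="mysql://user:pass@localhost/ers_DEV"
#
# then under supervisord this lookup returns that value, overriding whatever
# was exported in the shell, and the script queries a schema that never
# received the ipv4 rename.
db_uri = os.environ.get('APP_DB_URI')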
I have two models in my Django app.
The corresponding tables for the two models are:
Table 1:
+-------------+---------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+---------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| post_id | int(11) | NO | MUL | NULL | |
| user_id | int(11) | NO | MUL | NULL | |
+-------------+---------+------+-----+---------+----------------+
Table 2:
+---------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| p_text | varchar(200) | NO | | NULL | |
| p_slug | varchar(50) | YES | MUL | NULL | |
| user_id | int(11) | NO | MUL | NULL | |
+---------------+--------------+------+-----+---------+----------------+
Now, what I want is to write the equivalent of the below query in my Django view in the best way possible. The query I want to write is a simple join:
select B.p_slug from Table1 A, Table2 B where A.post_id = B.id;
I tried but could not get it working. How do I implement the above query in a Django view?
The models are:
class Model1(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)
    post = models.ForeignKey('Model2')  # post_id in Table 1 references Table 2

class Model2(models.Model):
    p_text = models.CharField(max_length=200)
    user = models.ForeignKey(settings.AUTH_USER_MODEL)
    p_slug = models.SlugField(null=True, blank=True)
Try this:
Model2.objects.filter(pk__in=Model1.objects.values_list('post_id', flat=True)).values('p_slug')
I hope it helps.
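Since post is a foreign key to Model2 (as the SQL join implies), the same result can also be had by traversing the relation directly, which lets Django emit the JOIN itself; a sketch using the models above:

# Single query; Django joins Table 1 to Table 2 on post_id = Table2.id.
slugs = Model1.objects.values_list('post__p_slug', flat=True)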
Using Django, I am trying to fetch this specific result set from the database:
select * from CO2_Low_Adj a JOIN CO2_Low_Metrics b on a.gene_id_B = b.gene_id where a.gene_id_A='Traes_1AL_00A8A2030'
I know I can do it using connections, cursor, fetchall and get back a list of dictionaries. However, I am wondering if there is a way to do this in Django while keeping the ORM.
The tables look like this:
class Co2LowMetrics(models.Model):
    gene_id = models.CharField(primary_key=True, max_length=24)
    modular_k = models.FloatField()
    modular_k_rank = models.IntegerField()
    modular_mean_exp_rank = models.IntegerField()
    module = models.IntegerField()
    k = models.FloatField()
    k_rank = models.IntegerField()
    mean_exp = models.FloatField()
    mean_exp_rank = models.IntegerField()
    # Field name made lowercase; string reference since Co2LowGene is defined below.
    gene_gene = models.ForeignKey('Co2LowGene', db_column='Gene_gene_id')

    class Meta:
        managed = False
        db_table = 'CO2_Low_Metrics'

class Co2LowGene(models.Model):
    gene_id = models.CharField(primary_key=True, max_length=24)
    entry = models.IntegerField(unique=True)
    gene_gene_id = models.CharField(db_column='Gene_gene_id', max_length=24)  # Field name made lowercase.

    class Meta:
        managed = False
        db_table = 'CO2_Low_Gene'

class Co2LowAdj(models.Model):
    gene_id_a = models.CharField(db_column='gene_id_A', max_length=24)  # Field name made lowercase.
    edge_number = models.IntegerField(primary_key=True)  # PRI auto_increment in the database
    gene_id_b = models.CharField(db_column='gene_id_B', max_length=24)  # Field name made lowercase.
    value = models.FloatField()
    # related_name is needed because two foreign keys point at the same model.
    gene_gene_id_a = models.ForeignKey('Co2LowGene', db_column='Gene_gene_id_A', related_name='adj_a')
    gene_gene_id_b = models.ForeignKey('Co2LowGene', db_column='Gene_gene_id_B', related_name='adj_b')

    class Meta:
        managed = False
        db_table = 'CO2_Low_Adj'
The database table descriptions are:
mysql> describe CO2_Low_Metrics;
+-----------------------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------------------+-------------+------+-----+---------+-------+
| gene_id | varchar(24) | NO | PRI | NULL | |
| modular_k | double | NO | | NULL | |
| modular_k_rank | int(8) | NO | | NULL | |
| modular_mean_exp_rank | int(8) | NO | | NULL | |
| module | int(8) | NO | | NULL | |
| k | double | NO | | NULL | |
| k_rank | int(8) | NO | | NULL | |
| mean_exp | double | NO | | NULL | |
| mean_exp_rank | int(8) | NO | | NULL | |
| Gene_gene_id | varchar(24) | NO | MUL | NULL | |
+-----------------------+-------------+------+-----+---------+-------+
mysql> describe CO2_Low_Gene;
+--------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+----------------+
| gene_id | varchar(24) | NO | PRI | NULL | |
| entry | int(8) | NO | UNI | NULL | auto_increment |
| Gene_gene_id | varchar(24) | NO | | NULL | |
+--------------+-------------+------+-----+---------+----------------+
mysql> describe CO2_Low_Adj;
+----------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+-------------+------+-----+---------+----------------+
| gene_id_A | varchar(24) | NO | MUL | NULL | |
| edge_number | int(9) | NO | PRI | NULL | auto_increment |
| gene_id_B | varchar(24) | NO | MUL | NULL | |
| value | double | NO | | NULL | |
| Gene_gene_id_A | varchar(24) | NO | MUL | NULL | |
| Gene_gene_id_B | varchar(24) | NO | MUL | NULL | |
+----------------+-------------+------+-----+---------+----------------+
Assume that I do not have the ability to change the underlying database schema. That may change, and if a suggestion would make it easier to use Django's ORM, I can attempt to get the schema changed.
However, I have been trying to use prefetch_related and select_related, but I'm doing something wrong and am not getting everything back right.
With my SQL query I essentially get the described tables in order, CO2_Low_Adj then CO2_Low_Metrics, where gene_id_A is the same as gene_gene_id_A ('Traes_1AL_00A8A2030') and gene_id_B is the same as gene_gene_id_B. CO2_Low_Gene does not seem to be used at all in the SQL query.
Thanks.
Django does not have a way to perform JOIN queries without foreign keys. This is why prefetch_related and select_related will not work: they operate on foreign keys.
I am not sure what you are trying to achieve. Since gene_id is unique, there will be only one Co2LowMetrics instance and a list of adjacency instances:
adj = Co2LowAdj.objects.filter(gene_id_a='Traes_1AL_00A8A2030')
metrics = Co2LowMetrics.objects.get(pk='Traes_1AL_00A8A2030')
and you can then work with them separately.
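To mirror the JOIN on gene_id_B from the original SQL without a foreign key, one option is two queries, the second keyed on IDs collected by the first; a sketch using the models and field names above:

# Adjacency rows for the gene of interest.
adj = Co2LowAdj.objects.filter(gene_id_a='Traes_1AL_00A8A2030')

# Metrics rows whose gene_id matches any gene_id_B above; the __in lookup
# becomes a single SELECT ... WHERE gene_id IN (...).
metrics_for_b = Co2LowMetrics.objects.filter(
    gene_id__in=adj.values_list('gene_id_b', flat=True))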
I have a Django project for which I'm trying to update the database tables without using any migration tools. I'm adding a 'slug' field to a model that simply references its name, and I was intending to do so by replicating what is in another table that already has a slug.
So, I have an existing table 'gym' in the database as follows:
+----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| gym_name | varchar(50) | NO | UNI | NULL | |
| gym_slug | varchar(50) | NO | MUL | NULL | |
| created | datetime | NO | | NULL | |
| modified | datetime | NO | | NULL | |
+----------+--------------+------+-----+---------+----------------+
and I have another table 'wall' in the database as follows:
+-----------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+-------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| wall_name | varchar(50) | NO | UNI | NULL | |
| gym_id | int(11) | NO | MUL | NULL | |
| created | datetime | NO | | NULL | |
| modified | datetime | NO | | NULL | |
+-----------+-------------+------+-----+---------+----------------+
This would get me part of the way there:
ALTER TABLE wall ADD COLUMN wall_slug varchar(50);
But I'm not certain how to figure out what the foreign key in the first table references, and thus where I should point the new one.
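One way to figure out what an existing foreign key actually references, rather than guessing from the MUL flags, is to ask MySQL's information_schema; a sketch using Django's default database connection (table names taken from above):

from django.db import connection

# List every foreign key on the two tables and what it references;
# for the schema above this should print ('wall', 'gym_id', 'gym', 'id').
with connection.cursor() as cursor:
    cursor.execute("""
        SELECT table_name, column_name,
               referenced_table_name, referenced_column_name
        FROM information_schema.key_column_usage
        WHERE table_schema = DATABASE()
          AND table_name IN ('gym', 'wall')
          AND referenced_table_name IS NOT NULL""")
    for row in cursor.fetchall():
        print(row)

Note that gym_slug shows MUL only because it is indexed; an index is not a foreign key, so the new wall_slug column does not need to point at anything.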
End goal: the wall_slug field tied to a unique wall_name field. Hope that makes sense.
from django.db.models import Model, CharField
from django.template.defaultfilters import slugify

class Wall(Model):
    name = CharField(max_length=60)
    slug = CharField(max_length=60, unique=True)
    ...

    def save(self, *args, **kwargs):
        # Only generate the slug when none has been set yet.
        if not self.slug:
            self.slug = slugify(self.name)
        super(Wall, self).save(*args, **kwargs)
This will generate a slug from the name when saving. It also protects the slug from changing the next time you change the name, so URLs remain stable.
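A quick hypothetical usage example (the name value is made up):

wall = Wall(name='North Face Wall', slug='')
wall.save()
print(wall.slug)   # 'north-face-wall'

wall.name = 'North Face Wall (rebuilt)'
wall.save()
print(wall.slug)   # still 'north-face-wall'; the URL does not change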