I have a list of numeric codes with corresponding mnemonic names, and I want a Django model for them in which the names are primary keys, with the additional constraint that the values in the code column are unique.
What I tried is the following:
class Constant(models.Model):
    name = models.CharField(max_length=70)
    name.primary_key = True
    code = models.IntegerField()
    description = models.CharField(max_length=100)
    unique_together = (("code",),)
I realize that unique_together is meant to enforce uniqueness over a set of columns, but I thought I would try it with just one. It seemed to work, i.e. there was no error when running python manage.py syncdb, but it doesn't actually enforce the constraint I want:
mysql> describe constant;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| name        | varchar(70)  | NO   | PRI |         |       |
| code        | int(11)      | NO   |     |         |       |
| description | varchar(100) | NO   |     |         |       |
+-------------+--------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
mysql> insert into constant values ('x',1,'fooo');
Query OK, 1 row affected (0.00 sec)
mysql> insert into constant values ('y',1,'foooo');
Query OK, 1 row affected (0.00 sec)
What can I do to make sure values in both columns are unique?
Add the unique option to your code field.
class Constant(models.Model):
    name = models.CharField(max_length=70, primary_key=True)
    code = models.IntegerField(unique=True)
    description = models.CharField(max_length=100)
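With unique=True, Django puts a UNIQUE index on the code column, so the duplicate insert from the question now fails at the database level. A quick sketch of what to expect (values taken from the question):
from django.db import IntegrityError

Constant.objects.create(name='x', code=1, description='fooo')
try:
    # Same code value as above: the UNIQUE index rejects this row.
    Constant.objects.create(name='y', code=1, description='foooo')
except IntegrityError:
    print('duplicate code rejected by the UNIQUE index')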
Related
I have a table which has columns named measured_time, data_type and value.
In data_type, there are two types: temperature and humidity.
I want to combine two rows of data into one if they have the same measured_time, using the Django ORM.
I am using MariaDB.
Using raw SQL, the following query does what I want:
SELECT T1.measured_time, T1.temperature, T2.humidity
FROM (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
             CASE WHEN data_type = 2 THEN value END AS humidity,
             measured_time
      FROM data_table) AS T1,
     (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
             CASE WHEN data_type = 2 THEN value END AS humidity,
             measured_time
      FROM data_table) AS T2
WHERE T1.measured_time = T2.measured_time
  AND T1.temperature IS NOT NULL
  AND T2.humidity IS NOT NULL
  AND DATE(T1.measured_time) = '2019-07-01'
Original Table
| measured_time       | data_type | value |
|---------------------|-----------|-------|
| 2019-07-01-17:27:03 | 1         | 25.24 |
| 2019-07-01-17:27:03 | 2         | 33.22 |
Expected Result
| measured_time       | temperature | humidity |
|---------------------|-------------|----------|
| 2019-07-01-17:27:03 | 25.24       | 33.22    |
I've never used it myself and so can't answer in detail, but you can feed a raw SQL query into Django and get the results back through the ORM. Since you already have the SQL, this may be the easiest way to proceed. Documentation here
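For what it's worth, here is a rough sketch of that approach using Django's low-level cursor (Manager.raw() also works, but it requires the primary key to be among the selected columns, which this pivot query does not return):
from django.db import connection

# The SQL is the query from the question; rows come back as plain tuples
# rather than model instances.
query = """
    SELECT T1.measured_time, T1.temperature, T2.humidity
    FROM (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T1,
         (SELECT CASE WHEN data_type = 1 THEN value END AS temperature,
                 CASE WHEN data_type = 2 THEN value END AS humidity,
                 measured_time
          FROM data_table) AS T2
    WHERE T1.measured_time = T2.measured_time
      AND T1.temperature IS NOT NULL
      AND T2.humidity IS NOT NULL
      AND DATE(T1.measured_time) = %s
"""
cursor = connection.cursor()
cursor.execute(query, ['2019-07-01'])
rows = cursor.fetchall()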
I have a MySQL database that contains a table named commands with the following structure:
+-----------+---------------+------+-----+---------+----------------+
| Field     | Type          | Null | Key | Default | Extra          |
+-----------+---------------+------+-----+---------+----------------+
| id        | int(11)       | NO   | PRI | NULL    | auto_increment |
| input     | varchar(3000) | NO   |     | NULL    |                |
| inputhash | varchar(66)   | YES  | UNI | NULL    |                |
+-----------+---------------+------+-----+---------+----------------+
I am trying to insert rows in it, but only if the inputhash field does not already exist. I thought INSERT IGNORE was the way to do this, but I am still getting warnings.
For instance, suppose that the table already contains
+----+---------+------------------------------------------------------------------+
| id | input   | inputhash                                                        |
+----+---------+------------------------------------------------------------------+
|  1 | enable  | 234a86bf393cadeba1bcbc09a244a398ac10c23a51e7fd72d7c449ef0edaa9e9 |
+----+---------+------------------------------------------------------------------+
Then when using the following Python code to insert a row
import MySQLdb
db = MySQLdb.connect(host='xxx.xxx.xxx.xxx', user='xxxx', passwd='xxxx', db='dbase')
c = db.cursor()
c.execute('INSERT IGNORE INTO `commands` (`input`, `inputhash`) VALUES (%s, %s)', ('enable', '234a86bf393cadeba1bcbc09a244a398ac10c23a51e7fd72d7c449ef0edaa9e9',))
I am getting the warning
Warning: Duplicate entry '234a86bf393cadeba1bcbc09a244a398ac10c23a51e7fd72d7c449ef0edaa9e9' for key 'inputhash'
c.execute('INSERT IGNORE INTO `commands` (`input`, `inputhash`) VALUES (%s, %s)', ('enable','234a86bf393cadeba1bcbc09a244a398ac10c23a51e7fd72d7c449ef0edaa9e9',))
Why does this happen? I thought that the whole point of using INSERT IGNORE on a table with UNIQUE fields is to suppress the error and simply ignore the write attempt?
What is the proper way to resolve this? I suppose I can suppress the warning in Python with warnings.filterwarnings('ignore') but why does the warning appear in the first place?
I hope it will help you!
import MySQLdb

db = MySQLdb.connect(host='xxx.xxx.xxx.xxx', user='xxxx', passwd='xxxx',
                     db='dbase')
c = db.cursor()
c.execute('INSERT INTO `commands` (`input`, `inputhash`) VALUES (%s, %s) '
          'ON DUPLICATE KEY UPDATE `inputhash` = `inputhash`',
          ('enable', '234a86bf393cadeba1bcbc09a244a398ac10c23a51e7fd72d7c449ef0edaa9e9'))
db.commit()
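For what it's worth, the `inputhash` = `inputhash` part is a deliberate no-op: when the unique key already exists, MySQL performs that update instead of the insert, so the statement succeeds without the duplicate-entry warning that INSERT IGNORE emits.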
I have a model given below.
class mc(models.Model):
    ap = models.CharField(max_length=50, primary_key=True)
    de = models.CharField(max_length=50)
    STATUS = models.CharField(max_length=12, default='Y')

    class Meta:
        unique_together = (("ap", "de"),)
        db_table = 'mc'
I have written ap='1', de='2' to table mc.
+----+----+--------+
| ap | de | STATUS |
+----+----+--------+
| 1  | 2  | Y      |
+----+----+--------+
Then I tried to write ap='1', de='3', but it just overwrote the previous row. Now the table contains
+----+----+--------+
| ap | de | STATUS |
+----+----+--------+
| 1  | 3  | Y      |
+----+----+--------+
Then I tried to write ap='2', de='3', and it worked. Even though I have given unique_together, the combination of ap and de is not being enforced.
unique_together does work if I don't use the primary_key or unique constraints on either ap or de.
Will you please help me? How can I use unique_together with primary keys?
The actual problem is that I have another class mc1 which uses a foreign key to the field ap:
class mc1(models.Model):
    test = models.ForeignKey(mc, on_delete=models.CASCADE, to_field='ap')
So mc.ap must be unique in order to be the target of a foreign key.
Django does not support multiple-column primary keys. It has been a feature request for several years. Each model must have one primary key.
In your case, if ap is the primary key, then each value in ap must be unique. It is not possible to store both (ap=1, de=2) and (ap=1, de=3).
Perhaps you could add another field as the primary key. Then you will be able to have unique_together for ('ap', 'de').
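A minimal sketch of that suggestion (assuming it is acceptable for mc1's foreign key to target mc's implicit id primary key rather than ap):
from django.db import models

class mc(models.Model):
    # No field sets primary_key=True, so Django adds an automatic
    # "id" AutoField; ap no longer needs to be unique on its own.
    ap = models.CharField(max_length=50)
    de = models.CharField(max_length=50)
    STATUS = models.CharField(max_length=12, default='Y')

    class Meta:
        unique_together = (("ap", "de"),)
        db_table = 'mc'

class mc1(models.Model):
    # The foreign key now points at mc's primary key (id) rather than ap,
    # since ap by itself is no longer unique.
    test = models.ForeignKey(mc, on_delete=models.CASCADE)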
I am using Django 1.5.5, Python 2.7 and MySQL.
This is my model:
class Foo(models.Model):
    user = models.ForeignKey(User)
    date = models.DateTimeField(null=True, blank=True, editable=True)

    class Meta:
        unique_together = ('user', 'date')
If I use this, I can add two records with the same user and an empty date.
I would like the uniqueness to be enforced even for an empty date.
Table create command:
CREATE TABLE `management_foo` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL,
  `date` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_id` (`user_id`,`date`),
  KEY `management_foo_6232c63c` (`user_id`),
  CONSTRAINT `user_id_refs_id_d47e5747` FOREIGN KEY (`user_id`)
    REFERENCES `auth_user` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1
and the table describe:
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | int(11)      | NO   | PRI | NULL    | auto_increment |
| user_id     | int(11)      | NO   | MUL | NULL    |                |
| date        | datetime     | YES  |     | NULL    |                |
+-------------+--------------+------+-----+---------+----------------+
In InnoDB, each NULL is treated as a distinct value in a unique index.
Example:
mysql> create table x (i int, j int, primary key (i), unique key (j));
mysql> insert into x (i,j) values (NULL,NULL);
ERROR 1048 (23000): Column 'i' cannot be null
mysql> insert into x (i,j) values (1,NULL);
mysql> insert into x (i,j) values (2,NULL);
mysql> insert into x (i,j) values (3,3);
mysql> select * from x;
+---+------+
| i | j |
+---+------+
| 1 | NULL |
| 2 | NULL |
| 3 | 3 |
+---+------+
3 rows in set (0.01 sec)
mysql> insert into x (i,j) values (4,3);
ERROR 1062 (23000): Duplicate entry '3' for key 'j'
You need to add a NOT NULL constraint, i.e. remove null=True from the field definition (and replace it with a default, for example).
BTW, it's better to write it in this style: unique_together = (("field1", "field2"),) – it will be much easier to extend the unique pairs later.
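A minimal sketch of that fix (the timezone.now default is an assumption; any NOT NULL default would do):
from django.contrib.auth.models import User
from django.db import models
from django.utils import timezone

class Foo(models.Model):
    user = models.ForeignKey(User)
    # null=True removed, so the column becomes NOT NULL and MySQL's
    # unique index can actually reject duplicate (user, date) pairs.
    date = models.DateTimeField(default=timezone.now, blank=True, editable=True)

    class Meta:
        unique_together = (("user", "date"),)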
Just for the record: the SQL standard states that NULL is not a value, and so it must not be taken into account for unique constraints. In other words, this has nothing to do with Django and is actually the expected behavior.
For a technical solution see akaRem's answer.
I have the following type of data:
The data is segmented into "frames" and each frame has a start and stop "gpstime". Within each frame are a bunch of points with a "gpstime" value.
There is a frames model that has a frame_name,start_gps,stop_gps,...
Let's say I have a list of gpstime values and want to find the corresponding frame_name for each.
I could just do a loop...
framenames = [frames.objects.filter(start_gps__lte=t, stop_gps__gte=t)
                            .values_list('frame_name', flat=True)
              for t in gpstime]
This will give me a list of 'frame_name', one for each gpstime. This is what I want. However this is very slow.
What I want to know: is there a better way to perform this lookup, one that gets a frame_name for each gpstime more efficiently than iterating over the list? This list could get fairly large.
Thanks!
EDIT: Frames model
class frames(models.Model):
    frame_id = models.AutoField(primary_key=True)
    frame_name = models.CharField(max_length=20)
    start_gps = models.FloatField()
    stop_gps = models.FloatField()

    def __unicode__(self):
        return "%s" % (self.frame_name)
If I understand correctly, gpstime is a list of the times, and you want to produce a list of framenames with one for each gpstime. Your current way of doing this is indeed very slow because it makes a db query for each timestamp. You need to minimize the number of db hits.
The answer that comes first to my head uses numpy. Note that I'm not making any extra assumptions here. If your gpstime list can be sorted, i.e. the ordering does not matter, then it could be done much faster.
Try something like this:
from numpy import array

# Pull the three columns out in three bulk queries instead of one query per timestamp
frame_start_times = array(frames.objects.values_list('start_gps', flat=True))
frame_stop_times = array(frames.objects.values_list('stop_gps', flat=True))
frame_names = array(frames.objects.values_list('frame_name', flat=True))

frame_names_for_times = []
for time in gpstime:
    # boolean mask of the frame whose interval contains this timestamp
    mask = (frame_start_times < time) & (frame_stop_times > time)
    matches = frame_names[mask]
    frame_names_for_times.append(matches[0] if len(matches) else None)
EDIT:
Since the list is sorted, you can use .searchsorted():
from numpy import array as a

gpstimes = a([151, 152, 153, 190, 649, 652, 920, 996])
starts = a([100, 600, 900, 1000])
ends = a([180, 650, 950, 1000])
names = a(['a', 'b', 'c', 'd'])

names_for_times = []
for time in gpstimes:
    start_pos = starts.searchsorted(time)
    end_pos = ends.searchsorted(time)
    if start_pos - 1 == end_pos:
        print time, names[end_pos]
    else:
        print str(time) + ' was not within any frame'
The best way to speed things up is to add indexes to those fields:
start_gps = models.FloatField(db_index=True)
stop_gps = models.FloatField(db_index=True)
and then run manage.py syncdb (note that syncdb will not add indexes to a table that already exists; in that case you would need to add them manually).
The frames table is very large, but I have another value that lowers
the frames searched in this case to under 50. There is not really a
pattern; each frame starts at the same gpstime the previous one stops.
I don't quite understand how you lowered the number of searched frames to 50, but if you're searching for, say, 10,000 gpstime values in only 50 frames, then it's probably easiest to load those 50 frames into RAM, and do the search in Python, using something similar to foobarbecue's answer.
However, if you're searching for, say, 10 gpstime values in the entire table which has, say, 10,000,000 frames, then you may not want to load all 10,000,000 frames into RAM.
You can get the DB to do something similar by adding the following index...
ALTER TABLE myapp_frames ADD UNIQUE KEY my_key (start_gps, stop_gps, frame_name);
...then using a query like this...
(SELECT frame_name FROM myapp_frames
WHERE 2.5 BETWEEN start_gps AND stop_gps LIMIT 1)
UNION ALL
(SELECT frame_name FROM myapp_frames
WHERE 4.5 BETWEEN start_gps AND stop_gps LIMIT 1)
UNION ALL
(SELECT frame_name FROM myapp_frames
WHERE 7.5 BETWEEN start_gps AND stop_gps LIMIT 1);
...which returns...
+------------+
| frame_name |
+------------+
| Frame 2    |
| Frame 4    |
| Frame 7    |
+------------+
...and for which an EXPLAIN shows...
+------+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
| id   | select_type  | table        | type  | possible_keys | key    | key_len | ref  | rows | Extra                    |
+------+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
|    1 | PRIMARY      | myapp_frames | range | my_key        | my_key | 8       | NULL |    3 | Using where; Using index |
|    2 | UNION        | myapp_frames | range | my_key        | my_key | 8       | NULL |    5 | Using where; Using index |
|    3 | UNION        | myapp_frames | range | my_key        | my_key | 8       | NULL |    8 | Using where; Using index |
| NULL | UNION RESULT | <union1,2,3> | ALL   | NULL          | NULL   | NULL    | NULL | NULL |                          |
+------+--------------+--------------+-------+---------------+--------+---------+------+------+--------------------------+
...so you can do all the lookups in one query which hits that index, and the index should be cached in RAM.
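If you want to drive this from Django, here is a rough sketch that builds the UNION ALL statement for an arbitrary list of timestamps (the gpstimes list is hypothetical input; this assumes the index above exists):
from django.db import connection

# Hypothetical list of timestamps to look up
gpstimes = [2.5, 4.5, 7.5]

# One UNION ALL branch per timestamp, each served by the covering index
subquery = ('(SELECT frame_name FROM myapp_frames '
            'WHERE %s BETWEEN start_gps AND stop_gps LIMIT 1)')
sql = ' UNION ALL '.join([subquery] * len(gpstimes))

cursor = connection.cursor()
cursor.execute(sql, gpstimes)
framenames = [row[0] for row in cursor.fetchall()]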