Retrieving only the most recent row in MySQL - python

I have a MySQL table with columns name, perf, and date_time. How can I retrieve only the most recent row?

As an alternative:
SELECT *
FROM yourtable
WHERE date_and_time = (SELECT MAX(date_and_time) FROM yourtable)
This one would have the advantage of catching the case where multiple rows were inserted very quickly and ended up with the same timestamp.
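For completeness, a minimal Python sketch of running that query with a DB-API driver (MySQLdb is assumed here, as are the connection details and the yourtable/date_and_time names from this answer):
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
cur = conn.cursor()
cur.execute(
    "SELECT * FROM yourtable "
    "WHERE date_and_time = (SELECT MAX(date_and_time) FROM yourtable)"
)
# fetchall() rather than fetchone(), because several rows may share the max timestamp
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()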

SELECT * FROM table ORDER BY date DESC, time DESC LIMIT 1

...
ORDER BY `date and time` DESC
LIMIT 1
Note this is not the most recently added row, this is the row with the most "recent" value in the date and time column, which is what I assume you are looking for.

SELECT name,
perf,
date_and_time
FROM table
ORDER BY date_and_time DESC
LIMIT 1
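From Python, a short sketch of running that statement (assuming an already open DB-API cursor cur, the column names above, and mytable standing in for your real table name):
cur.execute(
    "SELECT name, perf, date_and_time "
    "FROM mytable ORDER BY date_and_time DESC LIMIT 1"
)
newest = cur.fetchone()  # None if the table is empty
if newest is not None:
    name, perf, date_and_time = newest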

Are date and time separate fields? And would you explain what you mean by "most recent"? If date and time are separate fields, or are not referring to user activity and you mean "most recently updated by the user", then you might want to know that time-sensitive tables usually contain a field like "updated_at" which carries a standard MySQL timestamp, e.g. '1989-11-09 12:34:32'. Perhaps you should consider introducing a field like this into your schema. You could then set triggers to update the datetime information like so:
CREATE TRIGGER mytable_update BEFORE UPDATE ON `mytable`
FOR EACH ROW SET NEW.updated_at = NOW();
Of course you could then retrieve the most recently updated record with a SELECT statement, as answered above:
ORDER BY `updated_at` DESC LIMIT 1;
In addition, time-sensitive tables usually have a field 'created_at' which carries an unchanging timestamp set when the row is inserted. It is often connected to a trigger like so:
CREATE TRIGGER mytable_create BEFORE INSERT ON `mytable`
FOR EACH ROW SET NEW.created_at = NOW(), NEW.updated_at = NOW();
The update trigger above should reflect this and preserve the created_at information with an additional statement:
NEW.created_at = OLD.created_at;
If you use a framework (like Django, as you tagged Python), this could be handled by the ORM.
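As a hedged Django sketch of that idea: the ORM can maintain created_at/updated_at itself, without database triggers. The model and field names here are assumptions, not the asker's actual schema:
from django.db import models

class Measurement(models.Model):
    name = models.CharField(max_length=100)
    perf = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)  # set once on INSERT
    updated_at = models.DateTimeField(auto_now=True)      # refreshed on every save()

# Most recently updated row, the ORM equivalent of ORDER BY updated_at DESC LIMIT 1:
latest = Measurement.objects.order_by('-updated_at').first()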

SELECT TOP 1 * FROM tablename ORDER BY date_and_time DESC (for SQL Server)
SELECT * FROM tablename ORDER BY date_and_time DESC LIMIT 1 (for MySQL)

Related

How to execute more than one query on one table at the same time

Can I execute more than one query on one table at the same time?
For example, I want to run 4 different searches on the table at the same time,
like this: SELECT * FROM TABLE WHERE id=1; SELECT name FROM TABLE WHERE id=4.
I want to execute all of them at the same time, as separate commands.
Well you could use WHERE IN (...) logic:
SELECT *
FROM yourTable
WHERE id IN (1, 2, 3, 4)
ORDER BY id;
The only possible difficulty would be keeping track of which id each record in the result set belongs to. Using an ORDER BY id clause might be one way to deal with this in Python, as you could then iterate in order and retrieve the set of records for each id value.
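A sketch of that in Python (assuming an open DB-API cursor cur; the table name comes from the query above, and groupby relies on the ORDER BY id):
from itertools import groupby

ids = (1, 2, 3, 4)
placeholders = ", ".join(["%s"] * len(ids))
query = "SELECT * FROM yourTable WHERE id IN ({}) ORDER BY id".format(placeholders)
cur.execute(query, ids)
rows_by_id = {
    key: list(rows)
    for key, rows in groupby(cur.fetchall(), key=lambda row: row[0])
}
# rows_by_id[4] now holds every row whose id is 4 (assuming id is the first column selected).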

Postgres: autogenerate primary key in postgres using python

cursor.execute('UPDATE emp SET name = %(name)s',{"name": name} where ?)
I don't understand how to get the primary key of a particular record.
I have some N number of records present in the DB. I want to access those records and
manipulate them.
Through a SELECT query I got all the records, but I want to update each of those records accordingly.
Can someone lend a helping hand?
Thanks in Advance!
Table structure:
ID CustomerName ContactName
1 Alfreds Futterkiste
2 Ana Trujillo
Here ID is auto-generated by the system in Postgres.
I am accessing the CustomerName of two records and updating them. The problem is that when I update
those records, the last update is overwritten into the first record as well.
Here I want to set some condition so that each UPDATE query applies only to its own record.
After Table structure:
ID CustomerName ContactName
1 xyz Futterkiste
2 xyz Trujillo
Here I want to set the first record to 'abc' and the second record to 'xyz'.
Note: it will be done using the PK, but I don't know how to get that PK.
You mean you want to use UPDATE SQL command with WHERE statement:
cursor.execute("UPDATE emp SET CustomerName='abc' WHERE ID=1")
cursor.execute("UPDATE emp SET CustomerName='xyz' WHERE ID=2")
This way you will UPDATE rows with specific IDs.
Maybe you won't like this, but you should not use autogenerated keys in general. The only exception is when you want to insert some rows and do not do anything else with them. The proper solution is this:
Create a sequence for your table: http://www.postgresql.org/docs/9.4/static/sql-createsequence.html
Whenever you need to insert a new row, get the next value from the generator (using select nextval('generator_name')). This way you will know the ID before you create the row.
Then insert your row by specifying the id value explicitly.
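A minimal psycopg2 sketch of those steps; the sequence name (emp_id_seq), the connection string, and the column values are assumptions:
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
cur = conn.cursor()
cur.execute("SELECT nextval('emp_id_seq')")  # get the id before the row exists
new_id = cur.fetchone()[0]
cur.execute(
    "INSERT INTO emp (id, CustomerName, ContactName) VALUES (%s, %s, %s)",
    (new_id, "Alfreds", "Futterkiste"),
)
conn.commit()
# Later updates can target the row precisely, because new_id is already known:
cur.execute("UPDATE emp SET CustomerName = %s WHERE id = %s", ("abc", new_id))
conn.commit()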
For the updates:
You can create unique constraints (or unique indexes) on sets of columns that are known to be unique.
But you should identify the rows with the identifiers internally.
When referring records in other tables, use the identifiers, and create foreign key constraints. (Not always, but usually this is good practice.)
Now, when you need to update a row (for example: a customer), you should already know which customer needs to be modified. Because all records are identified by the primary key id, you should already know the id for that row. If you don't know it, but you have a unique index on a set of fields, then you can try to get the id. For example:
select id from emp where CustomerName='abc' -- but only if you have a unique constraint on CustomerName!
In general, if you want to update a single row, then you should NEVER update this way:
update emp set CustomerName='newname' where CustomerName='abc'
even if you have a unique constraint on CustomerName. The explanation is not easy and won't fit here, but think about this: you may be sending changes in a transaction block, and there can be many open transactions at the same time...
Of course, it is fine to update rows this way if your intention is to update all rows that satisfy your condition.

Select all with latest timestamp in django?

This seems like it should be obvious but I can't get it to work.
I'm using django to query a simple table into which a bunch of data gets inserted periodically. I'd like to write a view that only pulls data with the latest timestamp, something like
select * from mytable
where event_time = (
select max(event_time) from mytable);
What's the proper syntax for that?
You can try the latest() method. Here is the documentation.
MyModel.objects.latest('event_time')
Assuming event_time has a date value, this will return the latest object (the one with the max date).
You can try:
Assuming your table name is EventTime
EventTime.objects.all().order_by('-timestamp')[0]
[0] at the end will give you the first result.
Remove it and you will have all entries ordered by timestamp in descending order.
EDIT: A better approach, suggested by @Gocht, would be:
EventTime.objects.all().order_by('-timestamp').first()
This will handle the scenario when there is no object present in the database.
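If several rows can share the newest timestamp (which is what the raw SQL in the question allows for), a hedged sketch using aggregation; the model and field names are assumptions based on the question:
from django.db.models import Max

newest = MyModel.objects.aggregate(Max('event_time'))['event_time__max']
rows = MyModel.objects.filter(event_time=newest)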

Check Python SQL Query result for variable

I'm new to Python and working with SQL queries. I have a database that contains a table of meetings and their dates, along with an ID. What I want to do is check which meetings are happening on today's date. The code below shows all the meeting IDs that are happening on today's date. However, I then want to check whether a certain meeting ID, which I have stored as a variable, is in the results, and if it is, run an if branch so I can elaborate from there.
cur.execute("SELECT id FROM meeting WHERE DATE(starttime) = DATE(NOW())")
for row in cur.fetchall():
    print(row[0])
You can ask the database to tell you if the id is there:
cur.execute("SELECT id FROM meeting WHERE DATE(starttime) = DATE(NOW()) AND id=%s", (id,))
if cur.rowcount:
    # The id is there
    pass
else:
    # The id is not there
    pass
I am assuming that you are using MySQLdb here; different database adapters use slightly different query parameter styles. Others might use ? instead of %s.
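An alternative sketch closer to the original loop: pull today's ids into a set and test membership in Python. meeting_id is a placeholder for the id the asker has stored in a variable:
cur.execute("SELECT id FROM meeting WHERE DATE(starttime) = DATE(NOW())")
todays_ids = {row[0] for row in cur.fetchall()}
if meeting_id in todays_ids:
    pass  # the meeting takes place today; elaborate here
else:
    pass  # it does not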

Group by date in a particular format in SQLAlchemy

I have a table called logs which has a datetime field.
I want to select the date and count of rows based on a particular date format.
How do I do this using SQLAlchemy?
I don't know of a generic SQLAlchemy answer. Most databases support some form of date formatting, typically via functions. SQLAlchemy supports calling functions via sqlalchemy.sql.func. So for example, using SQLAlchemy over a Postgres back end, and a table my_table(foo varchar(30), when timestamp) I might do something like
my_table = metadata.tables['my_table']
foo = my_table.c['foo']
the_date = func.date_trunc('month', my_table.c['when'])
stmt = select([foo, the_date]).group_by(the_date)
engine.execute(stmt)
To group by date truncated to month. But keep in mind that in that example, date_trunc() is a Postgres datetime function. Other databases will be different. You didn't mention the underlying database. If there's a database-independent way to do it, I've never found one. In my case I run production and test against Postgres and unit tests against SQLite, and have resorted to using SQLite user-defined functions in my unit tests to emulate Postgres datetime functions.
Does counting yield the same result when you just group by the unformatted datetime column? If so, you could just run the query and use Python date's strftime() method afterwards. i.e.
query = select([logs.c.datetime, func.count(logs.c.datetime)]).group_by(logs.c.datetime)
results = session.execute(query).fetchall()
results = [(t[0].strftime("..."), t[1]) for t in results]
I don't know SQLAlchemy, so I could be off-target. However, I think that all you need is:
SELECT date_formatter(datetime_field, "format-specification") AS dt_field, COUNT(*)
FROM logs
GROUP BY date_formatter(datetime_field, "format-specification")
ORDER BY 1;
OK, maybe you don't need the ORDER BY, and maybe it would be better not to repeat the date expression. There are likely to be alternatives, such as:
SELECT dt_field, COUNT(*)
FROM (SELECT date_formatter(datetime_field, "format-specification") AS dt_field
FROM logs) AS necessary
GROUP BY dt_field
ORDER BY dt_field;
And so on and so forth. Basically, you format the datetime field and then proceed to do the grouping etc on the formatted value.
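To tie this back to SQLAlchemy: if the backend happened to be MySQL, the generic date_formatter above could be DATE_FORMAT, reached through func. This is only a sketch; the format string and the logs table are assumptions:
from sqlalchemy import func, select

day = func.date_format(logs.c.datetime, "%Y-%m-%d")
query = select([day, func.count()]).group_by(day).order_by(day)
results = session.execute(query).fetchall()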
