This seems like it should be obvious but I can't get it to work.
I'm using Django to query a simple table into which a bunch of data gets inserted periodically. I'd like to write a view that only pulls the rows with the latest timestamp, something like:
select * from mytable
where event_time = (
select max(event_time) from mytable);
What's the proper syntax for that?
You can try the latest() method. Here is the documentation:
MyModel.objects.latest('event_time')
Assuming event_time holds a date value, this will return the latest object (the one with the maximum date).
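Note that latest() raises a DoesNotExist exception when the table is empty, so a minimal, hedged sketch of using it safely might look like this (MyModel is a stand-in for your model):

try:
    newest = MyModel.objects.latest('event_time')
except MyModel.DoesNotExist:
    # The table is empty; handle however makes sense for your view
    newest = None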
You can try the following, assuming your model is named EventTime:
EventTime.objects.all().order_by('-timestamp')[0]
The [0] at the end gives you the first result. Remove it and you will get all entries ordered by timestamp in descending order.
EDIT: A better approach, suggested by @Gocht, is:
EventTime.objects.all().order_by('-timestamp').first()
This will handle the scenario when there is no object present in the database.
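If you need every row that shares the latest timestamp, matching the subquery in the original SQL, a minimal sketch using aggregate() (MyModel again stands in for your model):

from django.db.models import Max

# First find the maximum event_time, then pull every row that has it
latest = MyModel.objects.aggregate(Max('event_time'))['event_time__max']
rows = MyModel.objects.filter(event_time=latest)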
I'm trying to make an update with the Django ORM, but I don't know the id. All I know is that I want to update the last record. The other columns are not useful because the data is similar, except for the date column (that could be an option, but the Django ORM doesn't recognize my query).
My current query:
Tracking = TrackEExercises.objects.filter(
    exe_date=(fecha + delta).replace(microsecond=0)
).update(data_tracking=dataT)
You can simply get the last record based on the id:
last_record = TrackEExercises.objects.filter(exe_date=your_exe_date).order_by('-id')[0]
# Update
last_record.data_tracking = dataT
last_record.save()
You can also use .last() and .first():
TrackEExercises.objects.filter(exe_date=your_exe_date).order_by('-id').first()
TrackEExercises.objects.filter(exe_date=your_exe_date).order_by('id').last()
Both return the same record.
You can use order_by() to order the results based on any other field like a date:
TrackEExercises.objects.order_by('-exe_date')
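Putting this together, a minimal sketch of updating the most recent record by date, with a guard for an empty table (dataT is the value from the question):

# first() returns None instead of raising when no rows match
last_record = TrackEExercises.objects.order_by('-exe_date').first()
if last_record is not None:
    last_record.data_tracking = dataT
    last_record.save()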
I'm using SQLAlchemy ORM and trying to figure out how to produce a PostgreSQL query something along the lines of:
SELECT payments.*
FROM payments,
     jsonb_array_elements(payments.data #> '{refunds}') refunds
WHERE (refunds ->> 'created_at')
      BETWEEN '2018-12-01T19:30:38Z' AND '2018-12-02T19:30:38Z';
Though with the start date inclusive and stop date exclusive.
I've been able to get close with:
refundsInDB = db_session.query(Payment).\
    filter(Payment.data['refunds', 0, 'created_at'].astext >= startTime,
           Payment.data['refunds', 0, 'created_at'].astext < stopTime).\
    all()
However, this only works if the refund is the first element of the nested 'refunds' array in the JSONB data, whereas the SQL query above works regardless of its position in the list.
After a good bit of searching, it looks like there are some temporary recipes in an old SQLAlchemy GitHub issue, one of which talks about using jsonb_array_elements in a query, but I haven't been able to quite make it work the way I'd like.
If it helps my Payment.data JSONB is exactly like the Payment object from the Square Connect v1 API and I am trying to search the data using the created_at date found in the nested refunds list of refund objects.
Use Query.select_from() and a function expression alias to perform the query:
# Create a column object for referencing the `value` attribute produced by
# the set returned by jsonb_array_elements
val = db.column('value', type_=JSONB)

refundsInDB = db_session.query(Payment).\
    select_from(
        Payment,
        func.jsonb_array_elements(Payment.data['refunds']).alias()).\
    filter(val['created_at'].astext >= startTime,
           val['created_at'].astext < stopTime).\
    all()
Note that unnesting the jsonb array and filtering based on it may produce multiple rows per Payment, but SQLAlchemy returns only distinct entities when querying for a single entity.
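If you also select the unnested element, you will see one row per matching refund; a hedged sketch building on the names above:

# Each tuple pairs a Payment with one matching refund's created_at value
refund_rows = db_session.query(Payment, val['created_at'].astext).\
    select_from(
        Payment,
        func.jsonb_array_elements(Payment.data['refunds']).alias()).\
    filter(val['created_at'].astext >= startTime,
           val['created_at'].astext < stopTime).\
    all()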
I'm new to Python and to working with SQL queries. I have a database containing a table of meetings, each with a date and an ID. I want to check which meetings are happening on today's date. The code below prints the IDs of all meetings happening today. I then want to check whether a certain meeting ID, which I have stored in a variable, is among the results, and if it is, take an if branch so I can elaborate.
cur.execute("SELECT id FROM meeting WHERE DATE(starttime) = DATE(NOW())")
for row in cur.fetchall() :
print row[0]
You can ask the database to tell you if the id is there:
cur.execute("SELECT id FROM meeting WHERE DATE(starttime) = DATE(NOW()) AND id=%s", (id,))
if cur.rowcount:
# The id is there
else:
# The id is not there.
I am assuming that you are using MySQLdb here; different database adapters use slightly different query parameter styles. Others might use ? instead of %s.
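Note also that the DB-API allows rowcount to be -1 for SELECT statements on some adapters, so a more portable sketch is to fetch a row and test for None (meeting_id here is a hypothetical variable holding your stored ID):

cur.execute(
    "SELECT 1 FROM meeting WHERE DATE(starttime) = DATE(NOW()) AND id = %s",
    (meeting_id,))
if cur.fetchone() is not None:
    # The meeting is happening today
    print(meeting_id)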
I have a MySQL table with columns name, perf, and date_time. How can I retrieve only the most recent row?
As an alternative:
SELECT *
FROM yourtable
WHERE date_time = (SELECT MAX(date_time) FROM yourtable)
This one would have the advantage of catching the case where multiple rows were inserted very quickly and ended up with the same timestamp.
SELECT * FROM yourtable ORDER BY date_time DESC LIMIT 1
...
ORDER BY date_time DESC
LIMIT 1
Note this is not the most recently added row; it is the row with the most recent value in the date_time column, which is what I assume you are looking for.
SELECT name,
       perf,
       date_time
FROM yourtable
ORDER BY date_time DESC
LIMIT 1
Are date and time separate fields? And would you explain what you mean by "most recent"? If date and time are separate fields, or if they do not refer to user activity and you mean "most recently updated by a user", then you might want to know that time-sensitive tables usually contain a field like updated_at, which carries a standard MySQL timestamp, e.g. '1989-11-09 12:34:32'. Perhaps you should consider introducing a field like this into your schema. You could then set triggers to update the datetime information like so:
CREATE TRIGGER mytable_update BEFORE UPDATE ON `mytable`
FOR EACH ROW SET NEW.updated_at = NOW();
Of course you could then retrieve the most recently updated record with a SELECT statement, as answered above:
ORDER BY `updated_at` DESC LIMIT 1;
In addition, time-sensitive tables usually have a created_at field, which carries an unchanging timestamp set when the row is inserted. It is often populated by a trigger like so:
CREATE TRIGGER mytable_create BEFORE INSERT ON `mytable`
FOR EACH ROW SET NEW.created_at = NOW(), NEW.updated_at = NOW();
The first trigger above should reflect this and preserve the created_at information with an additional statement:
NEW.created_at = OLD.created_at;
If you use a framework (like Django, as you tagged Python), this could be handled by the ORM.
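For example, in Django the usual equivalent of those two triggers is a pair of field options; a minimal sketch (the model and field names here are illustrative):

from django.db import models

class MyTable(models.Model):
    name = models.CharField(max_length=100)
    # Set once, when the row is inserted
    created_at = models.DateTimeField(auto_now_add=True)
    # Refreshed on every save()
    updated_at = models.DateTimeField(auto_now=True)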
SELECT TOP 1 * FROM tablename ORDER BY date_time DESC;  -- for SQL Server
SELECT * FROM tablename ORDER BY date_time DESC LIMIT 1;  -- for MySQL
I have a table called logs which has a datetime field.
I want to select the date and count of rows based on a particular date format.
How do I do this using SQLAlchemy?
I don't know of a generic SQLAlchemy answer. Most databases support some form of date formatting, typically via functions, and SQLAlchemy supports calling functions via sqlalchemy.sql.func. So, for example, using SQLAlchemy over a Postgres back end with a table my_table(foo varchar(30), when timestamp), I might do something like:
from sqlalchemy import func, select

my_table = metadata.tables['my_table']
foo = my_table.c['foo']
# Truncate the timestamp to month granularity (a Postgres-specific function)
the_date = func.date_trunc('month', my_table.c['when'])
# Count rows per truncated date; a bare column would have to appear in GROUP BY
stmt = select(func.count(foo), the_date).group_by(the_date)
engine.execute(stmt)
This groups by the date truncated to month. But keep in mind that date_trunc() in that example is a Postgres datetime function; other databases will be different, and you didn't mention the underlying database. If there's a database-independent way to do it, I've never found one. In my case I run production and test against Postgres and unit tests against SQLite, and I have resorted to using SQLite user-defined functions in my unit tests to emulate Postgres datetime functions.
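As an illustration of that last point, here is a minimal, hedged sketch of registering a date_trunc stand-in on a raw sqlite3 connection; it only handles the 'month' part and assumes ISO-formatted timestamp strings:

import sqlite3

def date_trunc(part, value):
    # Emulate Postgres date_trunc() for ISO strings like '2018-12-01 19:30:38'
    if part == 'month':
        return value[:7] + '-01 00:00:00'
    raise NotImplementedError(part)

conn = sqlite3.connect(':memory:')
conn.create_function('date_trunc', 2, date_trunc)

# SQL run through this connection can now call date_trunc('month', col)
print(conn.execute(
    "SELECT date_trunc('month', '2018-12-15 08:00:00')").fetchone()[0])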
Does counting yield the same result when you just group by the unformatted datetime column? If so, you could just run the query and use Python's strftime() method afterwards, i.e.:
from sqlalchemy import func, select

query = select([logs.c.datetime, func.count(logs.c.datetime)]).group_by(logs.c.datetime)
results = session.execute(query).fetchall()
results = [(t[0].strftime("..."), t[1]) for t in results]
I don't know SQLAlchemy, so I could be off-target. However, I think that all you need is:
SELECT date_formatter(datetime_field, "format-specification") AS dt_field, COUNT(*)
FROM logs
GROUP BY date_formatter(datetime_field, "format-specification")
ORDER BY 1;
OK, maybe you don't need the ORDER BY, and maybe it would be better to re-specify the date expression. There are likely to be alternatives, such as:
SELECT dt_field, COUNT(*)
FROM (SELECT date_formatter(datetime_field, "format-specification") AS dt_field
FROM logs) AS necessary
GROUP BY dt_field
ORDER BY dt_field;
And so on and so forth. Basically, you format the datetime field and then proceed to do the grouping etc on the formatted value.
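To tie that back to SQLAlchemy: on a MySQL back end, the date_formatter placeholder would be DATE_FORMAT, which is reachable through the generic func namespace; a hedged sketch assuming the logs table from the question:

from sqlalchemy import func, select

# func.date_format renders as MySQL's DATE_FORMAT(datetime, '%Y-%m-%d')
dt_field = func.date_format(logs.c.datetime, '%Y-%m-%d')
query = (select([dt_field, func.count()])
         .group_by(dt_field)
         .order_by(dt_field))
results = session.execute(query).fetchall()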