I am using DynamoDB with the Python API and have a list attribute whose elements hold complex data.
I would like to be able to remove a specific item from that list.
I found this tutorial explaining how to remove an item from the list by its index.
I also found this SO question covering the same situation.
Both the tutorial and the SO question show how to remove an item from a list by its index. My situation is more specific: two users can work on the same DynamoDB table at once, and both of them might try to remove the same item. Removing by index can then go wrong. Given the list [1,2,3], suppose both users want to remove the item 1 and both issue REMOVE list[0]: the first user removes the item 1, but the list is now [2,3], so the second user removes the item 2.
I found that you can remove a specific item by its value when using the DynamoDB set data type, but sets can only contain binary, string, and number values, and I need to store something complex and nested, more like: {"att1":[1,2,3], "att2":str, "attr3":{...}}.
How can I remove an item without the risk that someone removed it before me, shifting the indexes and causing me to remove the wrong item?
I don't remember exactly whether DynamoDB can return a hash of the existing record.
If not, you can add the hash as an additional field, computing it yourself from the record's contents, and use this property as the match key.
You can then update your object with a "where clause" on that hash, something like:
# sketch only: the key, the update expression, and the attribute names are illustrative
aws dynamodb update-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"123"}}' \
    --update-expression "SET itemList = :newList, myHash = :newHash" \
    --condition-expression "myHash = :expectedHash" \
    --expression-attribute-values '{":newList":{"L":[]},":newHash":{"S":"..."},":expectedHash":{"S":"125948abcdef1234"}}'
The idea is that if the object has already been updated by someone else, the stored hash will no longer match, so your conditional update fails instead of removing the wrong item.
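In the Python API, a minimal boto3 sketch of this optimistic-concurrency idea could look like the following; the table name and the attribute names ("items", "itemsHash") are assumptions for illustration, not part of your schema:

import hashlib
import json

import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("MyTable")  # table name assumed

def remove_list_item(key, items, index):
    # hash the list before and after the removal (default=str handles Decimal)
    old_hash = hashlib.sha256(json.dumps(items, sort_keys=True, default=str).encode()).hexdigest()
    new_items = items[:index] + items[index + 1:]
    new_hash = hashlib.sha256(json.dumps(new_items, sort_keys=True, default=str).encode()).hexdigest()
    try:
        table.update_item(
            Key=key,
            UpdateExpression="SET #i = :new, itemsHash = :newHash",
            ConditionExpression="itemsHash = :old",
            ExpressionAttributeNames={"#i": "items"},
            ExpressionAttributeValues={":new": new_items, ":newHash": new_hash, ":old": old_hash},
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            # someone else changed the list first: re-read it, find your value
            # in the fresh copy, and retry
            raise

If the condition fails, nothing was removed by mistake; you simply re-read the list and try again against its new state.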
My Firebase Realtime Database schema:
Let's suppose the Firebase database schema above.
I want to get data with order_by_key(), taking the items after the first 5 and up to the first 10, no more. The range should be 5-10, like in the image.
My keys always start with -.
I'm trying this, but it fails and returns 0 results. How can I do this?
snapshot = ref.child('tracks').order_by_key().start_at('-\5').end_at(u'-\10').get()
Firebase queries are based on cursor/anchor values, not on offsets. This means that the start_at and end_at calls expect values of the thing you order on; since you order by key, they expect the keys of those nodes.
To get the slice you indicate you'll need:
ref.child('tracks').order_by_key().start_at('-MQJ7P').end_at('-MQJ8O').get()
If you don't know either of those values, you can't specify them and can only start from the first item or end on the last item.
The only exception is that you can specify a limit_to_first instead of end_at to get a number of items at the start of the slice:
ref.child('tracks').order_by_key().start_at('-MQJ7P').limit_to_first(5).get()
Alternatively if you know only the key of the last item, you can get the five items before that with:
ref.child('tracks').order_by_key().end_at('-MQJ8O').limit_to_last(5).get()
But you'll need to know at least one of the keys, typically because you've shown it as the last item on the previous page/first item on the next page.
I made a program with Python and MongoDB to keep some diaries, like this:
Sometimes I want to delete the last sentence, just by typing "delete!".
But I don't know how to delete it in a smart way. I don't want to use "skip".
Is there a good way to do it?
Be it the first or the last item, MongoDB maintains a unique _id key for each record, so you can just pass that id field in your delete query, using either deleteOne() or deleteMany(). Since there is only one record to delete, use deleteOne():
db.collection_name.deleteOne({"_id": "1234"}) // replace 1234 with actual id
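Since your program is in Python, a hedged PyMongo equivalent could sort on the auto-generated ObjectId (its leading timestamp makes the largest _id the newest document) and delete exactly that one; the database and collection names here are assumptions:

import pymongo

client = pymongo.MongoClient()
diary = client.mydb.diary  # database/collection names assumed

def delete_last_entry():
    # atomically removes and returns the newest document, or None if the collection is empty
    return diary.find_one_and_delete({}, sort=[("_id", pymongo.DESCENDING)])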
I'm using Python 3.2.3 with the MySQL Connector/Python 1.0.7 module. Is there a way to return the column names if the MySQL query returns an empty result?
For example, say I have this query:
SELECT
  `nickname` AS `team`,
  `w` AS `won`,
  `l` AS `lost`
FROM `standings` -- table name assumed; the FROM clause was missing
WHERE `w` > 10
Yet if no team has more than 10 wins, it obviously returns nothing. Now, I know I can check whether the result is None, but can I get MySQL to return the column names with a NULL value for each?
If you're curious, the reason I'm wondering whether this is possible is that I'm dynamically building dicts based on the column names. So the above would end up looking something like this if nobody was over 10...
[{'team':None,'won':None,'lost':None}]
And looks like this, if it found 3 teams over 10...
[{'team':'Tigers','won':14,'lost':6},
{'team':'Cardinals','won':12,'lost':8},
{'team':'Giants','won':15,'lost':4}]
If this kind of thing is possible, then I won't have to write a ton of exception checks all over the code for empty dicts.
You could run a DESCRIBE table_name (or its DESC shorthand) first; the column names will be in the first column of its result.
Also, you already know the keys of the dict, so you can construct the placeholder yourself and then append rows to it if the result has values:
[{'team':None,'won':None,'lost':None}]
But I fail to see why you need this. If you have a list of dictionaries, I'm guessing you will loop over it, and a for loop does nothing with an empty list, so you would not need exception checks for that case.
If you do have to do something like result[0]['team'], then you should definitely check that len(result) > 0 first.
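A small sketch of that construct-it-yourself fallback, assuming an open cursor and the (assumed) table name from the question:

COLUMNS = ("team", "won", "lost")  # the keys are known in advance

cursor.execute(
    "SELECT `nickname` AS `team`, `w` AS `won`, `l` AS `lost` "
    "FROM `standings` WHERE `w` > 10"
)
rows = cursor.fetchall()
if rows:
    result = [dict(zip(COLUMNS, row)) for row in rows]
else:
    # empty result: one placeholder row with None for every column
    result = [dict.fromkeys(COLUMNS)]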
I want to have multiple linked lists in one SQL table, using MySQL and SQLAlchemy (0.7). Each list starts with a node whose parent is 0 and ends with a node whose child is 0. The id identifies the list, not the individual element; the element is identified by its PK.
With some omitted syntax (not relevant to the problem) it should look something like this:
id(INT, PK)
content (TEXT)
parent(INT, FK(id), PK)
child(INT, FK(id), PK)
Since the table holds multiple linked lists, how can I return an entire list from the database when I select a specific id where parent is 0?
For example:
SELECT * FROM ... WHERE id = 3 AND parent = 0
Given that you have multiple linked lists stored in the same table, I assume that you store the HEAD and/or the TAIL of each in some other table. A few ideas:
1) Keep the linked list:
The first big improvement (also proposed in the comments) from the data-querying perspective would be to have a common identifier (let's call it ListID) for all the nodes of the same list. There are a few options here:
If each list is referenced from only one object (data row) [I would even phrase the question as "Does the list belong to a single object?"], then this ListID could simply be the (primary) identifier of the holder object, with a ForeignKey on top to ensure data integrity.
In this case, querying the whole list is very simple. In fact, you can define the relationship and navigate it like my_object.my_list_items.
If the list is used/referenced by multiple objects, then you could create another table consisting of a single column ListID (PK), and each Node/Item would again have a ForeignKey to it, or something similar.
Either way, large lists can then be loaded in two queries/SQL statements:
query the HEAD/TAIL by its ID
query the whole list based on received ListID of the HEAD/TAIL
In fact, this can be done with one query like the one below (Single-query example), which is more efficient from the IO perspective, but doing it in two steps has the advantage that you immediately have a reference to the HEAD (or TAIL) node.
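For completeness, the two-step variant might look like this (assuming the Node model has ID and ListID columns):

# step 1: fetch the HEAD (or TAIL) node by its primary key
head = session.query(Node).get(head_node_id)
# step 2: fetch every node that shares its ListID
nodes = session.query(Node).filter(Node.ListID == head.ListID).all()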
Single-query example:
# single-query variant using a self-join (not tested)
from sqlalchemy.orm import aliased

Head = aliased(Node)
qry = session.query(Node).join(Head, Node.ListID == Head.ListID).filter(Head.ID == head_node_id)
In any case, in order to traverse the linked list, you would have to get the HEAD/TAIL by its ID and then traverse as usual.
Note: here I am not certain whether SA would recognize that the referenced objects are already loaded into the session, or would issue another SQL statement for each of them, which would defeat the purpose of bulk loading.
2) Replace the linked list with the Ordering List extension:
Please read the Ordering List documentation. It might well be that the Ordering List implementation is good enough for you to use instead of the linked list.
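A hedged sketch of what that could look like; the model and column names are made up for illustration:

from sqlalchemy import Column, ForeignKey, Integer, Text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.ext.orderinglist import ordering_list
from sqlalchemy.orm import relationship

Base = declarative_base()

class TodoList(Base):
    __tablename__ = "todo_list"
    id = Column(Integer, primary_key=True)
    items = relationship(
        "Item",
        order_by="Item.position",
        collection_class=ordering_list("position"),
    )

class Item(Base):
    __tablename__ = "item"
    id = Column(Integer, primary_key=True)
    list_id = Column(Integer, ForeignKey("todo_list.id"))
    position = Column(Integer)  # maintained automatically by ordering_list
    content = Column(Text)

With this, my_list.items behaves like a plain Python list: inserting or removing an element renumbers position for you, and no parent/child pointers need to be kept in sync.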
I have a Python program for deleting duplicates from a list of names.
I'm in a dilemma between two approaches and looking for the more efficient one.
I have uploaded the list of names into a column of a SQLite table.
Is it better to compare the names and delete the duplicates inside the DB, or to load them into Python, delete the duplicates there, and push the survivors back to the DB?
I'm confused, and here is a piece of code to do it in SQLite:
INSERT INTO dup_killer (member_id, date) SELECT member_id, date FROM talks GROUP BY member_id, date;
If you use the names as a key in the database, the database will make sure they are not duplicated. So there would be no reason to ship the list to Python and de-dup there.
If you haven't inserted the names into the database yet, you might as well de-dup them in Python first. It is probably faster to do it in Python using the built-in features than to incur the overhead of repeated attempts to insert to the database.
(By the way: you can really speed up the insertion of many names if you wrap all the inserts in a single transaction. Start a transaction, insert all the names, and finish the transaction. The database does some work to make sure that the database is consistent, and it's much more efficient to do that work once for a whole list of names, rather than doing it once per name.)
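For example, with the standard sqlite3 module (database path, table, and column names assumed), the whole batch goes into one transaction, and a unique key on the name column lets the database drop duplicates by itself:

import sqlite3

conn = sqlite3.connect("names.db")  # path assumed
with conn:  # one transaction for the whole batch; commits on success
    conn.executemany(
        "INSERT OR IGNORE INTO names (name) VALUES (?)",  # needs UNIQUE(name)
        [(n,) for n in list_of_all_names],
    )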
If you have the list in Python, you can de-dup it very quickly using built-in features. The two common features that are useful for de-duping are the set and the dict.
I have given you three examples below. The simplest case is a list that just contains names and you want a list of only the unique names; you can simply put the list into a set. The second case is that your list contains records, and you need to extract the name field to build the set. The third case shows how to build a dict that maps each name onto a record and then insert the records into a database; like a set, a dict only allows unique keys, so when the dict is built it keeps the last record from the list for each name.
# the list already contains names
unique_names = set(list_of_all_names)
unique_list = list(unique_names)  # unique_list now contains only unique names

# extract the name field from each record and build a set
unique_names = set(x.name for x in list_of_all_records)
unique_list = list(unique_names)  # unique_list now contains only unique names

# build a dict mapping each name to a complete record
d = dict((x.name, x) for x in list_of_all_records)

# insert the complete records into the database, using name as the key
for record in d.values():
    insert_into_database(record)