How do I do a Simple Scan in DynamoDB? - python

I have found the docs very difficult to follow when it comes to doing a simple scan in AWS DynamoDB.
Can someone please explain to me in simple terms how to do a basic scan?

What is a Scan?
The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index.
Explanation
A scan operation in its simplest form looks through every item in your table. Most of the time you don't need the whole table returned, or even examined, so filters are commonly used to cut down on what gets looked through, processed and returned.
How do I Scan?
Here is a simple scan operation in python. Even if you aren't using python, this guide will be very helpful.
# Table = 'grades'
# Year_levels = {0-12}
# Sort_key = overall_rank
# Attribute_categories = math, english, science | out of 100
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table("grades")

result = table.scan(
    FilterExpression='math > :math AND english > :eng',
    ExpressionAttributeValues={':math': 80, ':eng': 70},
    Select='SPECIFIC_ATTRIBUTES',
    ProjectionExpression='year_level,overall_rank,math,english',
    Limit=50  # This is the amount of items to SCAN, not necessarily RETURN.
)
# return or print result
Explanation
FilterExpression and ExpressionAttributeValues. One way to understand these is as an attribute-value checker: for every item the scan passes over, the filter applied to its attributes must evaluate to true for the item to be returned, e.g. a math score above 80 and an english score above 70.
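If you prefer not to hand-write placeholder strings, the same filter can be expressed with boto3's condition builder (boto3.dynamodb.conditions.Attr). This is just an alternative sketch, not what the snippet above uses:
from boto3.dynamodb.conditions import Attr

result = table.scan(
    # equivalent to 'math > :math AND english > :eng' with the values bound inline
    FilterExpression=Attr('math').gt(80) & Attr('english').gt(70)
)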
Select and ProjectionExpression. Strictly speaking this description is a simplification, but in practical terms it holds up: think of them as a SECOND filter, applied not to the item but to the ATTRIBUTES of the item that will be returned. e.g. I only want year_level, overall_rank, math and english to be returned, but not science.
Now if we combine the two we have an example: if an item is checked and matches the criteria in the FilterExpression, it will be returned. HOWEVER, because we asked for SPECIFIC_ATTRIBUTES, the item is then checked again, this time against the ProjectionExpression, which names the attributes FROM the item to return.
Limit is just the number of items to check through per request, not necessarily the number returned.
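Because Limit caps how many items are examined per request, DynamoDB returns a LastEvaluatedKey when there is more data left to scan; pass it back as ExclusiveStartKey to continue. A minimal paging sketch, reusing the table and filter from above:
items = []
scan_kwargs = {
    'FilterExpression': 'math > :math AND english > :eng',
    'ExpressionAttributeValues': {':math': 80, ':eng': 70},
    'Limit': 50,
}
while True:
    page = table.scan(**scan_kwargs)
    items.extend(page.get('Items', []))
    if 'LastEvaluatedKey' not in page:
        break  # no more pages left to scan
    scan_kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']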

Related

Handling data pulled from MySQL database in Python

I am trying to figure out a good way to handle blacklists for words via a MySQL database. I have hit a roadblock when it comes to handling the data returned from the database.
cursor.execute('SELECT word FROM blacklist')
blacklist1 = []
for word in cursor.fetchall():
    if word in blacklist1:
        return
    else:
        blacklist1.append(word)
The above code is what I am using to pull the info which I know works. However, I need some help with converting this:
[('word1',), ('word2',), ('word3',), ('word4',), ('word5',)]
into this:
['word1', 'word2', 'word3', 'word4', 'word5']
My biggest issue is that it needs to scale, so it can check each word in the blacklist, from no words up to several thousand if necessary. I know a for loop would work for checking them against the message, but I won't be able to check the words until it is a normal list. Any help would be appreciated.
In each iteration of for word in cursor.fetchall(), the variable word is a tuple, or a collection of values. This is documented here.
These correspond to each column returned, i.e. if you had a second column in your select statement ('SELECT word, replacement FROM blacklist') you would get tuples of two elements.
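For illustration, using the data from the question (the variable names here are just placeholders):
cursor.execute('SELECT word FROM blacklist')
rows = cursor.fetchall()              # [('word1',), ('word2',), ...]
words = [row[0] for row in rows]      # ['word1', 'word2', ...]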
Use a set, and add the one and only element of each tuple instead of the tuple itself:
blacklist1 = set()
for word_tuple in cursor.fetchall():
    blacklist1.add(word_tuple[0])
Looking at the code more closely, if word in blacklist1: return may be a logical error - as soon as you see a duplicate, you'll stop reading rows from the database. You were likely looking to just skip that duplicate - you don't actually need that logic anymore because sets automatically remove duplicates.
Your list currently contains one element tuples. If you want to extract the strings you could try this:
blacklist1 = []
for word_tuple in cursor.fetchall():
    if word_tuple[0] in blacklist1:
        return
    else:
        blacklist1.append(word_tuple[0])
For your use case you might also benefit from having blacklist1 be a set, that way you can check for membership in O(1) time:
blacklist1 = set()
for word_tuple in cursor.fetchall():
    if word_tuple[0] in blacklist1:
        return
    else:
        blacklist1.add(word_tuple[0])
First, your actual problem is that the cursor is a wrapper of an iterator over rows returned from MySQL, so it can be operated on similarly to a list of tuples. That being said, my advice would be to split your "business" logic from your data access logic. This might seem trivial but it will make debugging much easier. The overall approach will look like this:
def get_from_database():
    cursor.execute('SELECT word FROM blacklist')
    return [row[0] for row in cursor.fetchall()]

def get_blacklist():
    words = get_from_database()
    return list(set(words))
In this approach, get_from_database retrieves all the words from MySQL and returns them in the format your program needs. get_blacklist encapsulates this logic and also makes the returned list unique. So now, if there's a bug, you can verify each independently.
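A possible usage sketch (the contains_blacklisted helper and the whitespace splitting are assumptions, not part of the answer above):
blacklist = set(get_blacklist())

def contains_blacklisted(message):
    # True if any whitespace-separated word of the message is on the blacklist
    return any(word in blacklist for word in message.split())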

A more efficient way of finding value in dictionary and its position

I have a dictionary which contains (roughly) 6 elements, where each element looks like the following:
What I want to do is find a particular domain (that I pass through a method) and if it exists, it stores the keyword and its position within an object. I have tried the following
def parseGoogleResponse(response, website):
    i = 0
    for item in response['items']:
        if item['formattedUrl'] == website:
            print(i)
            break
        i += 1
This approach seems a bit tedious, i also always ends up stuck at i = 10, and I'm pretty sure there is a more efficient way. I also have to keep in mind that if the website is not found the first time, the API is queried for up to 5 more pages, each page containing 6 search results, so I somehow have to calculate the position if it is on a different page.
Any ideas?
Dictionaries in Python are not ordered (at least before Python 3.7, which began preserving insertion order). There is no way to find something's position in a dictionary, unlike list type objects.
You can rather easily check for the existence of a value in the dictionary with something like:
if website in response['items'].values():
    # If you enter this section, you know it's in the dictionary
    pass
else:
    # If you end up here, it isn't in the dictionary
    pass
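If response['items'] is actually a list of result dicts, as the loop in the question suggests, enumerate gives both the item and its position directly; the helper name below is hypothetical:
def find_position(response, website):
    # Return the zero-based position of the first matching result, or None.
    for position, item in enumerate(response['items']):
        if item.get('formattedUrl') == website:
            return position
    return None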

write table cell real-time python

I would like to loop through a database, find the appropriate values and insert them into the appropriate cell in a separate file. It may be a CSV, or any other human-readable format.
In pseudo-code:
for item in huge_db:
    for obj in list_of_objects_to_match:
        if itemmatch():
            if there_arent_three_matches_yet_in_list():
                matches += 1
                result = performoperationonitem()
                write_in_file(result, row=object_to_match_id, col=matches)
                if matches == 3:
                    remove_this_object_from_object_to_match_list()
Can you think of any way other than going through the whole output file line by line every time? I don't even know what to search for...
Even better, is there a better way to find three matching objects in a db and have the results in real time? (The operation will take a while, but I'd like to see the results popping out as they arrive.)
Assuming itemmatch() is a reasonably simple function, this will do what I think you want better than your pseudocode:
for match_obj in list_of_objects_to_match:
    db_objects = query_db_for_matches(match_obj)
    if len(db_objects) >= 3:
        result = performoperationonitem()
        write_in_file(result, row=match_obj.id, col=matches)
    else:
        write_blank_line(row=match_obj.id)  # if you want
Then the trick becomes writing the query_db_for_matches() function. Without detail, I'll assume you're looking for objects that match in one particular field, call it type. In pymongo such a query would look like:
def query_db_for_matches(match_obj):
    return pymongo_collection.find({"type": match_obj.type})
To get this to run efficiently, make sure your database has an index on the field(s) you're querying on by first calling:
pymongo_collection.ensure_index([("type", 1)])
The first time you call ensure_index it could take a long time for a huge collection. But each time after that it will be fast -- fast enough that you could even put it into query_db_for_matches before your find and it would be fine.
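Side note, an assumption about newer library versions rather than part of the original answer: ensure_index has since been removed from PyMongo, and create_index accepts the same key specification:
pymongo_collection.create_index([("type", 1)])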

Iterating through large lists with potential conditions in Python

I have large chunks of data, normally at around 2000+ entries, but in this report we have the ability to look as far as we want so it could be up to 10,000 records
The report is split up into: Two categories and then within each Category, we split by Currency so we have several sub categories within the list.
My issue comes in efficiently calculating the various subtotals. I am using Django and pass a templatetag the currency and category, if it applies, and then the templatetag renders the total. Note that sometimes I have a subtotal just for the category, with no currency passed.
Initially, I was using a separate query for each subtotal by just using .filter() if there was a currency/category, like so:
if currency:
    entries = entries.filter(item_currency=currency)
This became a problem as I would have too many queries, and too long of a generation time (2,000+ ms), so I opted to use list(entries) to execute my query right off the bat, and then loop through it with simple list comprehensions:
totals['quantity'] = sum([e.quantity for e in entries])
My problem, if you don't see it yet, lies in how I can efficiently add the condition for currency/category to each list comprehension. Sometimes they won't be there, sometimes they will, so I can't simply type:
totals['quantity'] = sum([e.quantity for e in entries if e.item_currency == currency])
I could make a huge if-block, but that's not very clean and is a maintenance disaster, so I'm reaching out to the Stackoverflow community for a bit of insight .. thanks in advance :)
You could define a little inline function:
def EntryMatches(e):
    if use_currency and not (e.currency == currency):
        return False
    if use_category and not (e.category == category):
        return False
    return True
then
totals['quantity'] = sum([e.quantity for e in entries if EntryMatches(e)])
EntryMatches() will have access to all variables in enclosing scope, so no need to pass in any more arguments. You get the advantage that all of the logic for which entries to use is in one place, you still get to use the list comprehension to make the sum() more readable, but you can have arbitrary logic in EntryMatches() now.
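Another sketch, not from the answer above: if you need every subtotal anyway, one pass over the evaluated queryset can accumulate them all at once. The field names item_category and item_currency are assumptions based on the question's filter call.
from collections import defaultdict

totals = defaultdict(int)
for e in entries:
    totals[(e.item_category, e.item_currency)] += e.quantity  # per category + currency
    totals[(e.item_category, None)] += e.quantity              # per-category rollup
Each template tag can then look up the subtotal it needs from the dict instead of recomputing it.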

What is the best way to store set data in Python?

I have a list of data in the following form:
[(id_1, description, id_type), (id_2, description, id_type), ..., (id_n, description, id_type)]
The data are loaded from files that belong to the same group. In each group there could be multiples of the same id, each coming from different files. I don't care about the duplicates, so I thought that a nice way to store all of this would be to throw it into a Set type. But there's a problem.
Sometimes for the same id the descriptions can vary slightly, as follows:
IPI00110753
Tubulin alpha-1A chain
Tubulin alpha-1 chain
Alpha-tubulin 1
Alpha-tubulin isotype M-alpha-1
(Note that this example is taken from the uniprot protein database.)
I don't care if the descriptions vary. I cannot throw them away because there is a chance that the protein database I am using will not contain a listing for a certain identifier. If this happens I will want to be able to display the human readable description to the biologists so they know roughly what protein they are looking at.
I am currently solving this problem by using a dictionary type. However I don't really like this solution because it uses a lot of memory (I have a lot of these IDs). This is only an intermediary listing of them. There is some additional processing the IDs go through before they are placed in the database, so I would like to keep my data structure smaller.
I have two questions really. First, will I get a smaller memory footprint using the Set type (over the dictionary type) for this, or should I use a sorted list where I check every time I insert into the list to see if the ID exists, or is there a third solution that I haven't thought of? Second, if the Set type is the better answer how do I key it to look at just the first element of the tuple instead of the whole thing?
Thank you for reading my question,
Tim
Update
Based on some of the comments I received, let me clarify a little. Most of what I do with the data structure is insert into it. I only read it twice: once to annotate it with additional information,* and once to insert it into the database. However, down the line there may be additional annotation that is done before I insert it into the database. Unfortunately I don't know if that will happen at this time.
Right now I am looking into storing this data in a structure that is not based on a hash table (i.e. a dictionary). I would like the new structure to be fairly quick on insertion, but reading it can be linear since I only really do it twice. I am trying to move away from the hash table to save space. Is there a better structure, or is a hash table about as good as it gets?
*The information is a list of Swiss-Prot protein identifiers that I get by querying uniprot.
Sets don't have keys. The element is the key.
If you think you want keys, you have a mapping. More-or-less by definition.
Sequential list lookup can be slow, even using a binary search. Mappings use hashes and are fast.
Are you talking about a dictionary like this?
{ 'id1': [('description1a', 'type1'), ('description1b', 'type1')],
  'id2': [('description2', 'type2')],
  ...
}
This sure seems minimal. ID's are only represented once.
Perhaps you have something like this?
{ 'id1': (('description1a', 'description1b'), 'type1'),
  'id2': (('description2',), 'type2'),
  ...
}
I'm not sure you can find anything more compact unless you resort to using the struct module.
I'm assuming that the problem you're trying to solve by cutting down on memory use is the address-space limit of your process, and that you are looking for a data structure that allows fast insertion and reasonably fast sequential read-out.
Use as few structures other than strings (str) as possible
The question you ask is how to structure your data in one process to use less memory. The canonical answer to this (as long as you still need associative lookups) is to use as few structures other than Python strings (str, not unicode) as possible. A Python hash (dictionary) stores the references to your strings fairly efficiently (it is not a b-tree implementation).
However, I think that you will not get very far with that approach, since what you face are huge datasets that might eventually exceed the process address space and the physical memory of the machine you're working with altogether.
Alternative Solution
I would propose a different solution that does not involve changing your data structure to something that is harder to insert or interpret.
Split your information up across multiple processes, each holding whatever data structure is convenient for it.
Implement inter-process communication with sockets so that processes can reside on other machines altogether.
Try to divide your data so as to minimize inter-process communication (I/O is glacially slow compared to CPU cycles).
The advantage of the approach I outline is that
You get to use two or more cores of a machine fully for performance
You are not limited by the address space of one process, or even the physical memory of one machine
There are numerous packages and aproaches to distributed processing, some of which are
linda
processing
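As a minimal sketch of the multi-process idea above, using the standard library's multiprocessing module (the modern descendant of the processing package listed here); the sample records, the second id and the 'protein' type value are invented for illustration:
from multiprocessing import Pool

def dedupe_chunk(records):
    # Keep the first (description, id_type) seen for each id within one chunk.
    seen = {}
    for rec_id, description, id_type in records:
        seen.setdefault(rec_id, (description, id_type))
    return seen

if __name__ == '__main__':
    chunks = [
        [('IPI00110753', 'Tubulin alpha-1A chain', 'protein')],
        [('IPI00110753', 'Alpha-tubulin 1', 'protein'),
         ('IPI_OTHER', 'Some other protein', 'protein')],
    ]
    with Pool() as pool:
        partial_results = pool.map(dedupe_chunk, chunks)  # one worker per chunk
    merged = {}
    for part in partial_results:
        for rec_id, value in part.items():
            merged.setdefault(rec_id, value)  # first description wins, as above
    print(merged)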
If you're doing an n-way merge with removing duplicates, the following may be what you're looking for.
This generator will merge any number of sources. Each source must be a sequence.
The key must be in position 0. It yields the merged sequence one item at a time.
def merge( *sources ):
    keyPos = 0
    for s in sources:
        s.sort()
    while any([len(s) > 0 for s in sources]):
        topEnum = enumerate([s[0][keyPos] if len(s) > 0 else None for s in sources])
        top = [t for t in topEnum if t[1] is not None]
        top.sort(key=lambda a: a[1])
        src, key = top[0]
        # print(src, key)
        yield sources[src].pop(0)
This generator removes duplicates from a sequence.
def unique( sequence ):
    keyPos = 0
    seqIter = iter(sequence)
    curr = next(seqIter)
    for item in seqIter:
        if item[keyPos] == curr[keyPos]:
            # might want to create a sub-list of matches
            continue
        yield curr
        curr = item
    yield curr
Here's a script which uses these functions to produce a resulting sequence which is the union of all the sources with duplicates removed.
for u in unique( merge( source1, source2, source3, ... ) ):
    print(u)
The complete set of data in each sequence must exist in memory once because we're sorting in memory. However, the resulting sequence does not actually exist in memory. Indeed, it works by consuming the other sequences.
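Not part of the original answer, but the standard library can do the same n-way merge and de-duplication: heapq.merge combines already-sorted sequences lazily, and itertools.groupby collapses runs of equal keys. The sample tuples are invented for illustration:
import heapq
from itertools import groupby
from operator import itemgetter

source1 = [('id1', 'desc a', 'type1'), ('id3', 'desc c', 'type1')]
source2 = [('id1', 'desc b', 'type1'), ('id2', 'desc d', 'type2')]

merged = heapq.merge(sorted(source1), sorted(source2))
for key, group in groupby(merged, key=itemgetter(0)):
    print(next(group))   # keep the first record for each id, as unique() does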
How about using an {id: (description, id_type)} dictionary? Or an {(id, id_type): description} dictionary if (id, id_type) is the key.
Sets in Python are implemented using hash tables. In earlier versions, they were actually implemented using dictionaries, but that has changed AFAIK. The only thing you save by using a set would then be the size of a pointer for each entry (the pointer to the value).
To use only a part of a tuple for the hashcode, you'd have to subclass tuple and override the hashcode method:
class ProteinTuple(tuple):
    def __new__(cls, m1, m2, m3):
        return tuple.__new__(cls, (m1, m2, m3))

    def __hash__(self):
        return hash(self[0])
Keep in mind that you pay for the extra function call to __hash__ in this case, because otherwise it would be a C method.
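One caveat, not from the original answer: a set compares elements for equality after their hashes match, and tuple equality looks at every field, so overriding __hash__ alone won't merge entries that share an id but differ in description. A sketch of a variant (the class name and the sample type value are hypothetical) that also keys equality on the id:
class ProteinKeyTuple(tuple):
    """Hashes and compares on the first element (the id) only."""
    def __hash__(self):
        return hash(self[0])

    def __eq__(self, other):
        return self[0] == other[0]

records = {
    ProteinKeyTuple(('IPI00110753', 'Tubulin alpha-1A chain', 'protein')),
    ProteinKeyTuple(('IPI00110753', 'Alpha-tubulin 1', 'protein')),
}
# len(records) == 1: only one tuple per id is kept.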
I'd go for Constantin's suggestions and take out the id from the tuple and see how much that helps.
It's still murky, but it sounds like you have several lists of [(id, description, type), ...].
The id's are unique within a list and consistent between lists.
You want to create a UNION: a single list, where each id occurs once, with possibly multiple descriptions.
For some reason, you think a mapping might be too big. Do you have any evidence of this? Don't over-optimize without actual measurements.
This may be (if I'm guessing correctly) the standard "merge" operation from multiple sources.
source1.sort()
source2.sort()
result = []
while len(source1) > 0 or len(source2) > 0:
    if len(source1) == 0:
        result.append(source2.pop(0))
    elif len(source2) == 0:
        result.append(source1.pop(0))
    elif source1[0][0] < source2[0][0]:
        result.append(source1.pop(0))
    elif source2[0][0] < source1[0][0]:
        result.append(source2.pop(0))
    else:
        # keys are equal
        result.append(source1.pop(0))
        # check source2, to see if the description is different.
This assembles a union of two lists by sorting and merging. No mapping, no hash.
