Dynamodb scan() using FilterExpression - python

First post here on Stack and fairly new to programming with Python and using DynamoDB, but I'm simply trying to run a scan on my table that returns results based on two pre-defined attributes.
---Here is my Python code snippet---
shift = "3rd"
date = "2017-06-21"

if shift != "":
    response = table.scan(
        FilterExpression=Attr("Date").eq(date) and Attr("Shift").eq(shift)
    )
My DynamoDB has 4 fields.
ID
Date
Shift
Safety
Now for the issue: upon running, I'm getting two table entries returned when I should only be getting the first entry... the one with "No safety issues" based on my scan criteria.
---Here is my DynamoDB return results---
[
    {
        "Shift": "3rd",
        "Safety": "No safety issues",
        "Date": "2017-06-21",
        "ID": "2"
    },
    {
        "Shift": "3rd",
        "Safety": "Cut Finger",
        "Date": "2017-06-22",
        "ID": "4"
    }
]
Items Returned: 2
I believed that by specifying the logical 'and' in the FilterExpression, the scan operation would look for entries that meet BOTH criteria, since I used 'and'.
Could this be because the 'Shift' attribute "3rd" is found in both entries? How do I ensure it returns entries based on BOTH criteria being met, and not just results matching one attribute?
I have a feeling this is simple but I've looked at the available documentation at: http://boto3.readthedocs.io/en/latest/reference/services/dynamodb.html#DynamoDB.Table.scan and am still having trouble. Any help would be greatly appreciated!
P.S. I tried to keep the post simple and easy to understand (not including all my program code) however, if additional information is needed I can provide it!

This is because you used Python's and keyword in your expression, instead of the & operator.
If a is truthy, a and b evaluates to the second operand, b:
>>> 2 and 3
3
If a is falsy, it is returned immediately and b is never evaluated:
>>> 0 and 3
0
>>> 0 and ''
0
>>>
The general rule is, and returns the first object that allows it to decide the truthiness of the whole expression.
Boto3's Attr conditions, like most Python objects, are considered True in a boolean context. So, your expression:
Attr("Date").eq(date) and Attr("Shift").eq(shift)
evaluates to its second operand, that is:
Attr("Shift").eq(shift)
which explains why you only filtered on the shift.
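You can see this with a stand-in class (a hypothetical Cond, not boto3's real Attr, which is truthy in the same way):

```python
class Cond:
    """Stand-in for a truthy condition object such as boto3's Attr(...).eq(...)."""
    def __init__(self, name):
        self.name = name

date_cond = Cond("Date")
shift_cond = Cond("Shift")

# Plain `and` discards the first (truthy) condition entirely:
combined = date_cond and shift_cond
assert combined is shift_cond  # only the Shift condition survives
```

So DynamoDB never even sees the Date condition; Python threw it away before the call.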
You need to use the & operator. It usually means "bitwise and" between integers in Python, but it is overloaded for Attr objects to mean what you want: "both conditions".
So you must write:
FilterExpression=Attr("Date").eq(date) & Attr("Shift").eq(shift)
According to the documentation,
You are also able to chain conditions together using the logical
operators: & (and), | (or), and ~ (not).

Using parts from each of the above answers, here's a compact way I was able to get this working:
from functools import reduce
from boto3.dynamodb.conditions import Key, And
response = table.scan(
    FilterExpression=reduce(And, [Key(k).eq(v) for k, v in filters.items()])
)
This allows filtering on multiple conditions held in filters as a dict. For example:
{
    'Status': 'Approved',
    'SubmittedBy': 'JackCasey'
}

For multiple filters, you can use this approach:
import boto3
from boto3.dynamodb.conditions import Key, And

filters = dict()
filters['Date'] = "2017-06-21"
filters['Shift'] = "3rd"

response = table.scan(
    FilterExpression=And(*[Key(key).eq(value) for key, value in filters.items()])
)

Expanding on Maxime Paille's answer, this covers the case when only one filter is present vs many.
from boto3.dynamodb.conditions import Attr
from functools import reduce
from operator import and_

def build_query_params(filters):
    query_params = {}
    if len(filters) > 0:
        query_params["FilterExpression"] = add_expressions(filters)
    return query_params

def add_expressions(filters: dict):
    conditions = []
    for key, value in filters.items():
        if isinstance(value, str):
            conditions.append(Attr(key).eq(value))
        if isinstance(value, list):
            conditions.append(Attr(key).is_in(value))
    return reduce(and_, conditions)

filters = dict()
filters['Date'] = "2017-06-21"
filters['Shift'] = "3rd"

table.scan(**build_query_params(filters))

Related

Trouble converting "for key in dict" to == for exact matching

Good morning,
I am having trouble pulling the correct value from my dictionary because there are similar keys. I believe I need to use == instead of in. However, when I try to change if key in c_item_number_one: to if key == c_item_number_one:, it just returns my if not_found: print("Specify Size One"), even though I know 12" is in the dictionary.
c_item_number_one = ('12", Pipe,, SA-106 GR. B,, SCH 40, WALL smls'.upper())
print(c_item_number_one)
My formula is as follows:
def item_one_size_one():
    not_found = True
    for key in size_one_dict:
        if key in c_item_number_one:
            item_number_one_size = size_one_dict[key]
            print(item_number_one_size)
            not_found = False
            break
    if not_found:
        print("Specify Size One")

item_one_size_one()
The current result is:
12", PIPE,, SA-106 GR. B,, SCH 40, WALL SMLS
Specify Size One
To split the user input into fields, use re.split
>>> userin
'12", PIPE,, SA-106 GR. B,, SCH 40, WALL SMLS'
>>> import re
>>> fields = re.split(r'[ ,]+', userin)
>>> fields
['12"', 'PIPE', 'SA-106', 'GR.', 'B', 'SCH', '40', 'WALL', 'SMLS']
Then compare the key to the first field, or to all fields:
if key == fields[0]:
There are two usages of the keyword in here: the first is in the context of a for loop, and the second, entirely distinct one, is a membership test.
In a for loop, the in keyword connects the loop variable to the object containing the values to be looped over.
e.g.
for x in list:
Meanwhile, the entirely distinct usage of the in keyword can be used to tell python to perform a collection test where the left-hand side item is tested to see whether it exists in the rhs-object's collection.
e.g.
if key in c_item_number_one:
So the meaning of the in keyword is somewhat contextual.
If your code is giving unexpected results then you should be able to replace the if-statement to use an == test, while keeping everything else the same.
e.g.
if key == c_item_number_one:
However, since the contents of c_item_number_one is a tuple, you might only want to test equality for the first item in that tuple - the number 12 for example. You should do this by indexing the element in the tuple for which you want to do the comparison:
if key == c_item_number_one[0]:
Here the [0] is telling python to extract only the first element from the tuple to perform the == test.
[edit] Sorry, your c_item_number_one isn't a tuple, it's a long string. What you need is a way of clearly identifying each item to be looked up, using a unique code or value that the user can enter that will uniquely identify each thing. Doing a string-match like this is always going to throw up problems.
There's potential then for a bit of added nuance: the first field in your example is the string '12"'. If the key in your == test is the numeric value 12 (i.e. an integer), then the test 12 == '12"' will return False and you won't extract the value you're after. That your existing in test currently succeeds suggests this isn't a problem here, but it might be something to be aware of later.
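Putting the re.split suggestion together with an exact == test gives a working sketch (size_one_dict here is a hypothetical lookup table; only its '12"' entry is assumed):

```python
import re

c_item_number_one = '12", Pipe,, SA-106 GR. B,, SCH 40, WALL smls'.upper()
size_one_dict = {'12"': 'schedule-40 data'}  # hypothetical contents

# Split on runs of spaces/commas; the first field is the size token.
fields = re.split(r'[ ,]+', c_item_number_one)
first_field = fields[0]  # '12"'

# Exact match on the first field only, instead of substring scanning:
result = size_one_dict.get(first_field, "Specify Size One")
```

Because the lookup now keys on the whole first field, a similar key such as '2"' can no longer match inside '12"'.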

Python Query Processing and Boolean Search

I have an inverted index (as a dictionary) and I want to take a boolean search query as an input to process it and produce a result.
The inverted index is like this:
{
    "Test": { "FileName1": [213, 1889, 27564], "FileName2": [133, 9992866, 27272781, 78676818], "FileName3": [9211] },
    "Try": { "FileName4": ...
    .....
}
Now, given a boolean search query, I have to return the result.
Examples:
Boolean Search Query: test AND try
The result should be all documents that have the words test and try.
Boolean Search Query: test OR try
The result should be all documents that have either test or try.
Boolean Search Query: test AND NOT try
The result should be all documents that have test but not try.
How can I build this search engine to process the given boolean search query?
Thanks in advance!
EDIT: I am retaining the first part of my answer because, if this WASN'T a school assignment, it would in my opinion still be the better way to go about the task. I have replaced the second part of the answer with an update matching the OP's question.
What you appear to want to do is to create a query string parser, which would read the query string and translate it into a series of AND/OR/NOT combos to return the correct keys.
There are 2 approaches to this.
According to what you wrote that you need, by far the simplest solution would be to load the data into an SQL database (SQLite, for example, which does not require a full-blown running SQL server), with the dictionary keys as a separate field (the rest of your data may all be in a single additional field if you don't care about normal forms &c), and translate incoming queries to SQL, approximately like this:
SQL table has at least this:
CREATE TABLE my_data(
    dictkey text,
    data text);

python_query = "foo OR bar AND NOT gazonk"
sql_keywords = ["AND", "NOT", "OR"]
sql_query = []
for word in python_query.split(" "):
    if word in sql_keywords:
        sql_query += [word]
    else:
        sql_query += ["dictkey='%s'" % word]
real_sql_query = " ".join(sql_query)
This needs some escaping and checking to guard against SQL injection and special chars, but in general it would just translate your query into SQL, which, when run against the database, would return the keys (and possibly data) for further processing.
Now for the pure Python version.
What you need to do is to analyze the string you get and apply the logic to your existing Python data.
Analyzing the string to reduce it to specific components (and their interactions) is parsing. If you actually wanted to build your own fully fledged parser, there would be Python modules for that, however, for a school assignment, I expect you are tasked to build your own.
From your description, the query can be expressed in quasi BNF form as:
(<[NOT] word> <AND|OR>)...
Since you say that operator priority is not relevant at all, you can do it the easy way and parse word by word.
Then you have to match the keywords to the filenames, which, as mentioned in another answer, is easiest to do with sets.
So, it could go approximately like this:
import re

query = "foo OR bar AND NOT gazonk"
result_set = set()
operation = None
for word in re.split(" +(AND|OR) +", query):
    # word will be in ['foo', 'OR', 'bar', 'AND', 'NOT gazonk']
    inverted = False  # for "NOT word" operations
    if word in ['AND', 'OR']:
        operation = word
        continue
    if word.find('NOT ') == 0:
        if operation == 'OR':
            # generally an "OR NOT" operation does not make sense, but if it
            # does in your case, you should update this if() accordingly
            continue
        inverted = True
        # the word is inverted!
        realword = word[4:]
    else:
        realword = word
    if operation is not None:
        # now we need to match the key and the filenames it contains:
        current_set = set(inverted_index[realword].keys())
        if operation == 'AND':
            if inverted:
                result_set -= current_set
            else:
                result_set &= current_set
        elif operation == 'OR':
            result_set |= current_set
        operation = None
print(result_set)
Note that this is not a complete solution (for example it does not include dealing with the first term of the query, and it requires the boolean operators to be in uppercase), and is not tested. However, it should serve the primary purpose of showing you how to go about it. Doing more would be writing your course work for you, which would be bad for you. Because you are expected to learn how to do it so you can understand it. Feel free to ask for clarifications.
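Whichever parser you build, the document sets themselves make each boolean operator a one-liner. A minimal sketch with a toy index (the file names and positions here are made up):

```python
inverted_index = {
    "test": {"FileName1": [213], "FileName2": [133], "FileName3": [9211]},
    "try":  {"FileName2": [55],  "FileName4": [70]},
}

def docs(term):
    """Set of documents containing `term`."""
    return set(inverted_index.get(term, {}))

test_and_try     = docs("test") & docs("try")   # AND     -> set intersection
test_or_try      = docs("test") | docs("try")   # OR      -> set union
test_and_not_try = docs("test") - docs("try")   # AND NOT -> set difference
```

Mapping each operator token your parser finds to one of these set operations is all the "engine" actually needs.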
Another approach could be an in-memory intersection of the posting lists (for your AND cases, you can enhance this for OR, NOT, etc).
Attached is a simple merge algorithm to be performed on the posting lists, assuming the lists are sorted (increasing doc_id order, which can easily be achieved if we index our terms correctly). This improves time complexity (O(n1+n2)), as we perform a linear merge on sorted lists and might stop early.
Now assume our positional inverted index looks like this (similar to yours, but with posting lists as lists rather than dicts, which will allow compression in future uses): it maps String -> Term, where each Term consists of (tf, posting list [P1, P2, ...]) and each Posting has (docid, df, position list). Now we can perform a simple AND over all of our posting lists iteratively:
def search(self, sq: BoolQuery) -> list:
    # Performs a search from a given query in the boolean retrieval model.
    # Supports AND queries only; returns sorted document IDs as the result.
    if sq.is_empty():
        return super().search(sq)
    terms = [self.index[term] for term in sq.get_terms() if term in self.index]
    if not terms:
        return []
    # Iterate over posting lists and intersect:
    result, terms = terms[0].pst_list, terms[1:]
    while terms and result:
        result = self.intersect(result, terms[0].pst_list)
        terms = terms[1:]
    return [p.id for p in result]
Now let's look at the intersection:
def intersect(p1: list, p2: list) -> list:
    # Performs a linear merge of two sorted lists of postings;
    # returns the intersection between them (== matched documents).
    res, i, j = list(), 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i].id == p2[j].id:
            res.append(p1[i])
            i, j = i + 1, j + 1
        elif p1[i].id < p2[j].id:
            i += 1
        else:
            j += 1
    return res
This simple algorithm can be later expanded when performing phrase search (edit the intersection to calculate slop distance, e.g: |pos1-pos2| < slop)
Taking into account that you have that inverted index as a dict with test and try as keys, you can define the following functions and play with them:
def intersection(list1, list2):
    return list(set(list1).intersection(list2))

def union(list1, list2):
    return list(set(list1).union(list2))

def notin(list1, list2):
    # elements of list2 that are not in list1
    return [x for x in list2 if x not in set(list1)]

intersection(inverted_index['people'].keys(),
             intersection(inverted_index['test'].keys(), inverted_index['try'].keys()))

Skip keys without Type checking in Python (pymssql)

I need to access all the non-integer keys for a dict that looks like:
result = {
    0: "value 1",
    1: "value 2",
    "key 1": "value 1",
    "key 2": "value 2",
}
I am currently doing this by:
headers = [header for header in tmp_dict.keys() if not isinstance(header, int)]
My question:
Is there a way to do this without type checking?
This tmp_dict is coming out of a query using pymssql with the as_dict=True attribute, and for some reason it returns all the column names with data as expected, but also includes the same data indexed by integers. How can I get my query result as a dictionary with only the column values and data?
Thanks for your help!
PS - Despite my issues being resolved by potentially answering 2, I'm curious how this can be done without type checking. Mainly for the people who say "never do type checking, ever."
With regard to your question about type checking, the duck-type approach would be to see whether it can be converted to or used as an int.
def can_be_int(obj):
    try:
        int(obj)
    except (TypeError, ValueError):
        return False
    return True

headers = [header for header in tmp_dict.keys() if not can_be_int(header)]
Note that floats can be converted to ints by truncating them, so this isn't necessarily exactly equivalent.
A slight variation on the above would be to use coerce(0, obj) in place of int(obj) (Python 2 only; coerce was removed in Python 3). This will allow any kind of object that can be converted to a common type with an integer. You could also do something like 0 + obj or 1 * obj, which will check for something that can be used in a mathematical expression with integers.
You could also check to see whether its string representation is all digits:
headers = [header for header in tmp_dict.keys() if not str(header).isdigit()]
This is probably closer to a solution that doesn't use type-checking, although it will be slower, and it's of course entirely possible that a column name would be a string that is only digits! (Which would fail with many of these approaches, to be honest.)
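These caveats are easy to demonstrate (can_be_int repeated here so the snippet is self-contained):

```python
def can_be_int(obj):
    try:
        int(obj)
    except (TypeError, ValueError):
        return False
    return True

float_ok = can_be_int(3.7)       # True: floats truncate, so they "can be int"
digit_name = "1234".isdigit()    # True: a digits-only column name would be dropped
text_name = "key 1".isdigit()    # False: ordinary column names survive
```

So both duck-type approaches are approximations of "is this key an integer index", not exact tests.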
Sometimes explicit type-checking really is the best choice, which is why the language has tools for letting you check types. In this situation I think you're fine, especially since the result dictionary is documented to have only integers and strings as keys. And you're doing it the right way by using isinstance() rather than explicitly checking type() == int.
Looking at the source code of pymssql (1.0.2), it is clear that there is no option for the module to not generate data indexed by integers. But note that data indexed by column name can be omitted if the column name is empty.
/* mssqldbmodule.c */
PyObject *fetch_next_row_dict(_mssql_connection *conn, int raise) {
    [...]
    for (col = 1; col <= conn->num_columns; col++) {
        [...]
        // add key by column name, do not add if name == ''
        if (strlen(PyString_AS_STRING(name)) != 0)
            if ((PyDict_SetItem(dict, name, val)) == -1)
                return NULL;

        // add key by column number
        if ((PyDict_SetItem(dict, PyInt_FromLong(col-1), val)) == -1)
            return NULL;
    }
    [...]
}
Regarding your first question, filtering the result set by type checking is surely the best way to do it. And this is exactly how pymssql returns data when as_dict is False:
if self.as_dict:
    row = iter(self._source).next()
    self._rownumber += 1
    return row
else:
    row = iter(self._source).next()
    self._rownumber += 1
    return tuple([row[r] for r in sorted(row.keys()) if type(r) == int])
The rationale behind as_dict=True is that you can access by index and by name. Normally you'd get a tuple you index into, but for compatibility reasons being able to index a dict as though it was a tuple means that code depending on column numbers can still work, without being aware that column names are available.
If you're just using result to retrieve columns (either by name or index), I don't see why you're concerned about removing them? Just carry on regardless. (Unless for some reason you plan to pickle or otherwise persist the data elsewhere...)
The best way to filter them out though, is using isinstance - duck typing in this case is actually unpythonic and inefficient. Eg:
names_only = {k: v for k, v in result.items() if not isinstance(k, int)}
Instead of a try and except dance.
>>> sorted(result)[len(result)//2:]
['key 1', 'key 2']
This removes the duplicated integer-keyed entries, though note it relies on integer keys sorting before strings, which works in Python 2 but raises a TypeError in Python 3. I think what you're doing is fine though.

Finding partial strings in a list of strings - python

I am trying to check if a user is a member of an Active Directory group, and I have this:
ldap.set_option(ldap.OPT_REFERRALS, 0)
try:
    con = ldap.initialize(LDAP_URL)
    con.simple_bind_s(userid + "#" + ad_settings.AD_DNS_NAME, password)
    ADUser = con.search_ext_s(ad_settings.AD_SEARCH_DN, ldap.SCOPE_SUBTREE,
                              "sAMAccountName=%s" % userid,
                              ad_settings.AD_SEARCH_FIELDS)[0][1]
except ldap.LDAPError:
    return None
ADUser returns a list of strings:
{'givenName': ['xxxxx'],
'mail': ['xxxxx#example.com'],
'memberOf': ['CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group2,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group3,OU=Projects,OU=Office,OU=company,DC=domain,DC=com',
'CN=group4,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'],
'sAMAccountName': ['myloginid'],
'sn': ['Xxxxxxxx']}
Of course in the real world the group names are verbose and of varied structure, and users will belong to tens or hundreds of groups.
If I get the list of groups out as ADUser.get('memberOf')[0], what is the best way to check if any members of a separate list exist in the main list?
For example, the check list would be ['group2', 'group16'] and I want to get a true/false answer as to whether any of the smaller list exist in the main list.
If the format example you give is somewhat reliable, something like:
import re
grps = re.compile(r'CN=(\w+)').findall

def anyof(short_group_list, adu):
    all_groups_of_user = set(g for gs in adu.get('memberOf', ()) for g in grps(gs))
    return sorted(all_groups_of_user.intersection(short_group_list))
where you pass your list such as ['group2', 'group16'] as the first argument, your ADUser dict as the second argument; this returns an alphabetically sorted list (possibly empty, meaning "none") of the groups, among those in short_group_list, to which the user belongs.
It's probably not much faster to return just a bool, but, if you insist, changing the second statement of the function to:
return any(g for g in short_group_list if g in all_groups_of_user)
might possibly save a certain amount of time in the "true" case (since any short-circuits) though I suspect not in the "false" case (where the whole list must be traversed anyway). If you care about the performance issue, best is to benchmark both possibilities on data that's realistic for your use case!
If performance isn't yet good enough (and a bool yes/no is sufficient, as you say), try reversing the looping logic:
def anyof_v2(short_group_list, adu):
    gset = set(short_group_list)
    return any(g for gs in adu.get('memberOf', ()) for g in grps(gs) if g in gset)
any's short-circuit abilities might prove more useful here (at least in the "true" case, again -- because, again, there's no way to give a "false" result without examining ALL the possibilities anyway!-).
You can use set intersection (& operator) once you parse the group list out. For example:
> memberOf = 'CN=group1,OU=Projects,OU=Office,OU=company,DC=domain,DC=com'
> groups = [token.split('=')[1] for token in memberOf.split(',')]
> groups
['group1', 'Projects', 'Office', 'company', 'domain', 'com']
> checklist1 = ['group1', 'group16']
> set(checklist1) & set(groups)
set(['group1'])
> checklist2 = ['group2', 'group16']
> set(checklist2) & set(groups)
set([])
Note that a conditional evaluation on a set works the same as for lists and tuples. True if there are any elements in the set, False otherwise. So, "if set(checklist2) & set(groups): ..." would not execute since the condition evaluates to False in the above example (the opposite is true for the checklist1 test).
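That truthiness behaviour, sketched with the same example data:

```python
groups = ['group1', 'Projects', 'Office', 'company', 'domain', 'com']
checklist1 = ['group1', 'group16']
checklist2 = ['group2', 'group16']

# A non-empty intersection is truthy; an empty one is falsy.
match1 = bool(set(checklist1) & set(groups))  # 'group1' is shared
match2 = bool(set(checklist2) & set(groups))  # nothing shared
```

So `if set(checklist1) & set(groups):` runs its body, while the checklist2 version does not.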
Also see:
http://docs.python.org/library/sets.html

Python: DISTINCT on GQuery result set (GQL, GAE)

Imagine you got an entity in the Google App Engine datastore, storing links for anonymous users.
You would like to perform the following SQL query, which is not supported:
SELECT DISTINCT user_hash FROM links
Instead you could use:
user = db.GqlQuery("SELECT user_hash FROM links")
How to use Python most efficiently to filter the result, so it returns a DISTINCT result set?
How to count the DISTINCT result set?
Reviving this question for completion:
The DISTINCT keyword has been introduced in release 1.7.4.
You can find the updated GQL reference (for example for Python) here.
A set is good way to deal with that:
>>> a = ['google.com', 'livejournal.com', 'livejournal.com', 'google.com', 'stackoverflow.com']
>>> b = set(a)
>>> b
set(['livejournal.com', 'google.com', 'stackoverflow.com'])
>>>
One suggestion w/r/t the first answer is that sets and dicts are better at retrieving unique results quickly: membership testing in lists is O(n) versus O(1) for the other types. So if you want to store additional data, or do something like create the mentioned unique_results list, it may be better to do something like:
>>> unique_results = {}
>>> for item in a:
...     unique_results[item] = ''
>>> unique_results
{'livejournal.com': '', 'google.com': '', 'stackoverflow.com': ''}
One option would be to put the results into a set object:
http://www.python.org/doc/2.6/library/sets.html#sets.Set
The resulting set will consist only of the distinct values passed into it.
Failing that, building up a new list containing only the unique objects would work. Something like:
unique_results = []
for obj in user:
    if obj not in unique_results:
        unique_results.append(obj)
That for loop can be condensed into a list comprehension as well.
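In modern Python (3.7+), the same order-preserving dedupe is a one-liner via dict.fromkeys, since dict keys keep insertion order (sample URLs repeated from the earlier answer):

```python
a = ['google.com', 'livejournal.com', 'livejournal.com', 'google.com', 'stackoverflow.com']

# dict keys preserve first-seen order, so this dedupes without losing order:
unique_results = list(dict.fromkeys(a))
# ['google.com', 'livejournal.com', 'stackoverflow.com']
```

Unlike set(a), this keeps the first occurrence of each value in its original position.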
Sorry to dig this question up, but on GAE I cannot compare objects like that; I must use .key() for the comparison:
Beware, this is very inefficient:
def unique_result(array):
    urk = {}  # unique results keyed by entity key
    for c in array:
        if str(c.key()) not in urk:
            urk[str(c.key())] = c
    return urk.values()
If anyone has a better solution, please share.
