How do I do multiple eq_joins? - python

I'm trying to do multiple eq_joins. The error I get is:
ReqlQueryLogicError: Primary keys must be either a number, string, bool, pseudotype or array (got type OBJECT):
{
    "First name": "John",
    "Last name": "Urquhart",
    "employers": [
        {
            "date_hired": "2-Mar-88",
            "organization_id": "2a5e2e3d-275a-426e-9ecd-0bd5601bff6b"
        }
    ],
    "id": "e70d5350-c1e0-41ee-a1cc-6638c7136d89",
    "primary_photo": "http://www.kingcounty.gov/~/media/safety/sheriff/Sheriff_Urquh in:
r.db('public').table(u'police_internal_affairs_allegations').filter(lambda var_24: var_24.coerce_to('string').match(u'(?i).*?Urquhart.*?')).eq_join(u'organization_id', r.db('public').table(u'organizations')).merge(lambda var_25: r.expr({'right': var_25['right'].coerce_to('array').map(lambda var_26: [(r.expr(u'organization_') + var_26[0]), var_26[1]]).coerce_to('object')})).zip().eq_join(u'person_id', r.db('public').table('people')).merge(lambda var_27: r.expr({'right': var_27['right'].coerce_to('array').map(lambda var_28: [(r.expr(u'person_') + var_28[0]), var_28[1]]).coerce_to('object')})).zip()
My code is:
ids_for_other_tables = [field for field in fields if field.endswith('_id')]
modified_joined_data = []
for field in ids_for_other_tables:
    special_names = {'person': 'people'}
    t = special_names[field[:-3]] if field[:-3] in special_names else field[:-3] + 's'
    dbobj = dbobj.eq_join(field, r.db("public").table(t))
    dbobj = dbobj.merge(lambda row: {'right': row['right'].coerce_to('array').map(
        lambda pair: [r.expr(field[:-2]) + pair[0], pair[1]]
    ).coerce_to('object')})
    dbobj = dbobj.zip()
The purpose of this code is to automatically join in info from other tables for every field ending in _id.

It's hard to say without looking at the data in your table, but it looks like the problem is that one of the fields you're trying to eq_join on contains an object instead of an ID. I'd run the part of the query before the eq_join and make sure its output has the format you're expecting.
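As a plain-Python sketch of that sanity check (the sample documents below are hypothetical stand-ins, not the real table data): before running the eq_join, verify that every value in the join field is a joinable scalar or array rather than a nested object.

```python
# Hypothetical documents standing in for rows returned by the pre-join query.
docs = [
    {"id": "e70d5350", "organization_id": "2a5e2e3d-275a-426e-9ecd-0bd5601bff6b"},
    {"id": "a1b2c3d4", "organization_id": {"date_hired": "2-Mar-88"}},  # bad: object, not an ID
]

def bad_join_values(docs, field):
    """Return the docs whose join field is not a joinable scalar or array."""
    ok_types = (str, int, float, bool, list)
    return [d for d in docs if not isinstance(d.get(field), ok_types)]

print(bad_join_values(docs, "organization_id"))  # surfaces the offending document
```

Any document this flags would trigger the same "Primary keys must be ..." error when eq_join tries to use its value as a key.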

Related

returning a filtered list from DB by a list of ids contained in another query with SQLAlchemy Flask

I am trying to write an SQLAlchemy DB query that returns a list of provider_service_files that have NOT been assigned to a client:
My first query gets all the service files that have been assigned to a user:
client_resource_guide_urls = db_session.query(ClientResourceGuideUrl).filter(
    ClientResourceGuideUrl.client_id == client_id).all()
returns a list with this output:
[
    {
        "client_id": 20,
        "id": 9,
        "service_file_url_id": 3
    },
    {
        "client_id": 20,
        "id": 10,
        "service_file_url_id": 2
    }
]
Using the service_file_url_id values in that list, I query for ServiceFileUrl; if an entry's ID is equal to any service_file_url_id, I want that entry filtered out of the list.
for client_resource_guide_url in client_resource_guide_urls:
    print(client_resource_guide_url)  # only shows first entry
    service_files = db_session.query(ServiceFileUrl.id, ServiceFileUrl.service_id,
                                     ServiceFileUrl.file_label, ServiceFileUrl.file_url,
                                     ProviderService.name, ServiceFileUrl.provider_id,
                                     ServiceFileUrl.created_at)\
        .select_from(ServiceFileUrl, ProviderService)\
        .join(ProviderService).filter(
            ServiceFileUrl.provider_id == provider_id,
            ServiceFileUrl.id != client_resource_guide_url.service_file_url_id).all()
This query is only filtering out the FIRST entry and not the second. A print of client_resource_guide_url shows that only the first entry is being looped through. My approach is clearly not ideal here. Any advice would be appreciated.
To leverage the power of SQL, rather than a Python loop, pass a list of values to the Column.not_in() method:
ids_to_filter = [i.service_file_url_id for i in client_resource_guide_urls]
service_files = db_session.query(
        ServiceFileUrl.id,
        ServiceFileUrl.service_id,
        ServiceFileUrl.file_label,
        ServiceFileUrl.file_url,
        ProviderService.name,
        ServiceFileUrl.provider_id,
        ServiceFileUrl.created_at)\
    .select_from(ServiceFileUrl, ProviderService)\
    .join(ProviderService)\
    .filter(
        ServiceFileUrl.provider_id == provider_id,
        ServiceFileUrl.id.not_in(ids_to_filter))\
    .all()
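The NOT IN semantics the answer relies on can be sketched with the stdlib sqlite3 module (table and column names below are hypothetical stand-ins for the models above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE service_file_urls (id INTEGER PRIMARY KEY, file_label TEXT)")
conn.executemany(
    "INSERT INTO service_file_urls (id, file_label) VALUES (?, ?)",
    [(1, "intake"), (2, "consent"), (3, "billing"), (4, "referral")],
)

# IDs already assigned to the client, to be excluded -- as in ids_to_filter above.
ids_to_filter = [2, 3]
placeholders = ",".join("?" * len(ids_to_filter))
rows = conn.execute(
    f"SELECT id, file_label FROM service_file_urls WHERE id NOT IN ({placeholders})",
    ids_to_filter,
).fetchall()
print(rows)  # [(1, 'intake'), (4, 'referral')]
```

The database excludes the whole list in one query, which is exactly what the per-iteration != filter in the original loop cannot do.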

return populated fields from a join table in SQLalchemy Flask

UserService is a join table connecting the Users and Services tables. I have a query that returns all the UserService rows whose user_id is equal to the id passed in the route.
@bp.route('/user/<id>/services', methods=['GET'])
def get_services_from_user(id):
    user = db_session.query(User).filter(id == User.id).first()
    if not user:
        return jsonify({'Message': f'User with id: {id} does not exist', 'Status': 404})
    user_services = db_session.query(UserService).filter(user.id == UserService.user_id).all()
    result = user_services_schema.dump(user_services)
    for service in result:
        user_service = db_session.query(Service).filter(service['service_id'] == Service.id).all()
        result = services_schema.dump(user_service)
    return jsonify(result)
result holds a list that looks as such:
[
    {
        "id": 1,
        "service_id": 1,
        "user_id": 1
    },
    {
        "id": 2,
        "service_id": 2,
        "user_id": 1
    }
]
how could I then continue this query or add another query to get all the actual populated services (Service class) instead of just the service_id and return all of them in a list? The for loop is my attempt at that but currently failing. I am only getting back a list with one populated service, and not the second.
You could try something like this:
userServices = db_session.query(Users, Services).filter(Users.id == Services.id).all()
userServices is an iterable of row tuples, one element per entity in the query, so you can iterate through it with:
for user, service in userServices:
There is another way using .join() and adding the columns that you need with .add_columns().
There is also another way using
db_session.query(Users.id, Services.id, ... 'all the columns that you need' ...).filter(Users.id == Services.id).all()
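The join-through-an-association-table idea can be sketched with the stdlib sqlite3 module (the table names below are hypothetical mirrors of the Users/Services/UserService models):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE services (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE user_services (id INTEGER PRIMARY KEY, user_id INTEGER, service_id INTEGER);
INSERT INTO users VALUES (1, 'alice');
INSERT INTO services VALUES (1, 'cleaning'), (2, 'plumbing');
INSERT INTO user_services VALUES (1, 1, 1), (2, 1, 2);
""")

# One joined query replaces the per-row Python loop: fetch every populated
# service for the user in a single round trip.
rows = conn.execute("""
    SELECT services.id, services.title
    FROM user_services
    JOIN services ON services.id = user_services.service_id
    WHERE user_services.user_id = ?
""", (1,)).fetchall()
print(rows)  # [(1, 'cleaning'), (2, 'plumbing')]
```

This also shows why the original for loop only kept one service: each iteration overwrote result, whereas the join returns all matches at once.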

Access one item from a dict and store it into a variable

I am trying to get all the "uuid" values from an API, and the issue is that they are stored in a dict (I think). Here is how it looks from the API:
{
    "guild": {
        "_id": "5eba1c5f8ea8c960a61f38ed",
        "name": "Creators Club",
        "name_lower": "creators club",
        "coins": 0,
        "coinsEver": 0,
        "created": 1589255263630,
        "members": [
            {
                "uuid": "db03ceff87ad4909bababc0e2622aaf8",
                "rank": "Guild Master",
                "joined": 1589255263630,
                "expHistory": {
                    "2020-06-01": 280,
                    "2020-05-31": 4701,
                    "2020-05-30": 0,
                    "2020-05-29": 518,
                    "2020-05-28": 1055,
                    "2020-05-27": 136665,
                    "2020-05-26": 34806
                }
            }
        ]
    }
}
Now I am interested in the "uuid" part, and take note: there are multiple players (anywhere from 1 to 100), and I am going to need every UUID.
Now I have done this in my python to get the UUID's displayed on the website:
try:
    f = requests.get(
        "https://api.hypixel.net/guild?key=[secret]&id=" + guild).json()
    guildName = f["guild"]["name"]
    guildMembers = f["guild"]["members"]
    members = client.getPlayer(uuid=guildMembers)  # this converts UUID to player names
    # I need to store all uuid's in variables and put them at "guildMembers"
That gives me all the UUID codes, and I will be using client.getPlayer(uuid=---) to convert each UUID into a player name, so I have to loop through each UUID and pass it to client.getPlayer(uuid=---). But first I need to save the UUIDs in variables. In my HTML file I have been doing members.uuid to access the UUID, but I don't know how to do the .uuid part in Python.
If you need anything else, just comment :)
List comprehension is a powerful concept:
members = [client.getPlayer(member['uuid']) for member in guildMembers]
Edit:
If you want to insert the names back into your data (in guildMembers),
use a dictionary comprehension with {uuid: member_name, ...} format:
members = {member['uuid']: client.getPlayer(uuid=member['uuid']) for member in guildMembers}
Then you can update guildMembers with your results:
for member in guildMembers:
    member['name'] = members[member['uuid']]
Assuming that guild is the main dictionary, in which a key called members holds a list of "sub-dictionaries", you can try:
uuid = list()
for x in guild['members']:
    uuid.append(x['uuid'])
uuid now has all the uuids.
If I understood the situation right, you just need to loop through all received uuids and get the players' data. Something like this:
f = requests.get("https://api.hypixel.net/guild?key=[secret]&id=" + guild).json()
guildName = f["guild"]["name"]
guildMembers = f["guild"]["members"]
guildMembersData = dict()  # here we will save each member's data from the getPlayer method
for guildMember in guildMembers:
    uuid = guildMember["uuid"]
    guildMembersData[uuid] = client.getPlayer(uuid=uuid)
print(guildMembersData)  # here will be the players' data
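With a hypothetical stub standing in for client.getPlayer (the real one calls the Hypixel API), the comprehension approach can be exercised end-to-end:

```python
# Stub client for illustration only -- the real client.getPlayer hits the Hypixel API.
class FakeClient:
    def getPlayer(self, uuid):
        return {"db03ceff87ad4909bababc0e2622aaf8": "Steve"}.get(uuid, "Unknown")

client = FakeClient()
guildMembers = [
    {"uuid": "db03ceff87ad4909bababc0e2622aaf8", "rank": "Guild Master"},
    {"uuid": "ffffffffffffffffffffffffffffffff", "rank": "Member"},
]

# uuid -> player name, mirroring the dictionary comprehension above
members = {m["uuid"]: client.getPlayer(uuid=m["uuid"]) for m in guildMembers}
print(members)
```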

Python container troubles

Basically what I am trying to do is generate a JSON list of SSH keys (public and private) on a server using Python. I am using nested dictionaries and while it does work to an extent, the issue is that it displays other users' keys as well; for each user I need it to list only the keys that belong to that user.
Below is my code:
def ssh_key_info(key_files):
    for f in key_files:
        c_time = os.path.getctime(f)  # gets the creation time of file (f)
        username_list = f.split('/')  # splits on the / character
        user = username_list[2]  # assigns the 2nd field from the above split to the user variable
        key_length_cmd = check_output(['ssh-keygen', '-l', '-f', f])  # run the ssh-keygen command on the file (f)
        attr_dict = {}
        attr_dict['Date Created'] = str(datetime.datetime.fromtimestamp(c_time))  # converts file create time to string
        attr_dict['Key_Length]'] = key_length_cmd[0:5]  # assigns the first 5 characters of the key_length_cmd variable
        ssh_user_key_dict[f] = attr_dict
        user_dict['SSH_Keys'] = ssh_user_key_dict
        main_dict[user] = user_dict
A list containing the absolute path of the keys (/home/user/.ssh/id_rsa for example) is passed to the function. Below is an example of what I receive:
{
    "user1": {
        "SSH_Keys": {
            "/home/user1/.ssh/id_rsa": {
                "Date Created": "2017-03-09 01:03:20.995862",
                "Key_Length]": "2048 "
            },
            "/home/user2/.ssh/id_rsa": {
                "Date Created": "2017-03-09 01:03:21.457867",
                "Key_Length]": "2048 "
            },
            "/home/user2/.ssh/id_rsa.pub": {
                "Date Created": "2017-03-09 01:03:21.423867",
                "Key_Length]": "2048 "
            },
            "/home/user1/.ssh/id_rsa.pub": {
                "Date Created": "2017-03-09 01:03:20.956862",
                "Key_Length]": "2048 "
            }
        }
    },
As can be seen, user2's key files are included in user1's output. I may be going about this completely wrong, so any pointers are welcome.
Thanks for the replies. I read up on nested dictionaries and found that the top answer on this post helped me solve the issue: What is the best way to implement nested dictionaries?
Instead of all the dictionaries, I simplified the code and now have just one dictionary. This is the working code:
class Vividict(dict):
    def __missing__(self, key):  # sets and returns a new instance
        value = self[key] = type(self)()  # retain local pointer to value
        return value  # faster to return than dict lookup

main_dict = Vividict()

def ssh_key_info(key_files):
    for f in key_files:
        c_time = os.path.getctime(f)
        username_list = f.split('/')
        user = username_list[2]
        key_bit_cmd = check_output(['ssh-keygen', '-l', '-f', f])
        date_created = str(datetime.datetime.fromtimestamp(c_time))
        key_type = key_bit_cmd[-5:-2]
        key_bits = key_bit_cmd[0:5]
        main_dict[user]['SSH Keys'][f]['Date Created'] = date_created
        main_dict[user]['SSH Keys'][f]['Key Type'] = key_type
        main_dict[user]['SSH Keys'][f]['Bits'] = key_bits
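If you prefer not to subclass dict, an equivalent autovivifying dictionary can be built from the stdlib collections.defaultdict:

```python
from collections import defaultdict

def tree():
    """Arbitrarily nested dict: accessing a missing key creates a new level."""
    return defaultdict(tree)

main_dict = tree()
# Missing intermediate levels spring into existence on assignment,
# just like with the Vividict above.
main_dict["user1"]["SSH Keys"]["/home/user1/.ssh/id_rsa"]["Bits"] = "2048"
print(main_dict["user1"]["SSH Keys"]["/home/user1/.ssh/id_rsa"]["Bits"])
```

One caveat: even a read of a missing key creates an empty level as a side effect, so membership tests should use `key in d` rather than indexing.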

Partial updates via PATCH: how to parse JSON data for SQL updates?

I am implementing 'PATCH' on the server-side for partial updates to my resources.
Assuming I do not expose my SQL database schema in JSON requests/responses, i.e. there exists a separate mapping between keys in JSON and columns of a table, how do I best figure out which column(s) to update in SQL given the JSON of a partial update?
For example, suppose my table has 3 columns: col_a, col_b, and col_c, and the mapping between JSON keys to table columns is: a -> col_a, b -> col_b, c -> col_c. Given JSON-PATCH data:
[
    {"op": "replace", "path": "/b", "value": "some_new_value"}
]
What is the best way to programmatically apply this partial update to col_b of the table corresponding to my resource?
Of course I can hardcode these mappings in a keys_to_columns dict somewhere, and upon each request with some patch_data, I can do something like:
mapped_updates = {keys_to_columns[p['path'].split('/')[-1]]: p['value'] for p in patch_data}
then use mapped_updates to construct the SQL statement for the DB update. If the above throws a KeyError, I know the request data is invalid and can throw it away. And I will need to do this for every table/resource I have.
I wonder if there is a better way.
This is similar to what you're thinking of doing, but instead of creating maps, you can create classes for each table. For example:
class Table(object):
    """Parent class of all tables"""
    def get_columns(self, **kwargs):
        return {getattr(self, k): v for k, v in kwargs.items()}

class MyTable(Table):
    """table MyTable"""
    # columns mapping
    a = "col_a"
    b = "col_b"

tbl = MyTable()
tbl.get_columns(a="value a", b="value b")
# the above returns {"col_a": "value a", "col_b": "value b"}
# similarly:
tbl.get_columns(**{p['path'].split('/')[-1]: p['value'] for p in patch_data})
This is just something basic to get inspired by; these classes can be extended to do much more.
patch_json = [
    {"op": "replace", "path": "/b", "value": "some_new_value"},
    {"op": "replace", "path": "/a", "value": "some_new_value2"}
]

def fix_key(item):
    item['path'] = item['path'].replace('/', 'col_')
    return item

print(list(map(fix_key, patch_json)))
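Putting the mapping and the update together, here is a stdlib sqlite3 sketch (the table name and columns are hypothetical, matching the col_a/col_b/col_c example above) that applies a JSON-PATCH replace list as one parameterized UPDATE:

```python
import sqlite3

keys_to_columns = {"a": "col_a", "b": "col_b", "c": "col_c"}
patch_data = [{"op": "replace", "path": "/b", "value": "some_new_value"}]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, col_a TEXT, col_b TEXT, col_c TEXT)")
conn.execute("INSERT INTO my_table (id, col_a, col_b, col_c) VALUES (1, 'x', 'y', 'z')")

# A KeyError here means the patch referenced an unknown field -> reject the request.
mapped = {keys_to_columns[p["path"].split("/")[-1]]: p["value"] for p in patch_data}

# Column names come from our own mapping, never from user input, so
# interpolating them is safe; values still go through ? placeholders.
set_clause = ", ".join(f"{col} = ?" for col in mapped)
conn.execute(f"UPDATE my_table SET {set_clause} WHERE id = ?", [*mapped.values(), 1])

print(conn.execute("SELECT col_b FROM my_table WHERE id = 1").fetchone())  # ('some_new_value',)
```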
