I have a table stored in Parse.com, and I'm using ParsePy to get and filter the data in my Python Django program.
My table has three columns: objectId (string), name (string), and type (array). I want to query the name column and return any objects whose name contains a partial search term. For example, if I search for amp and there's a row where name is "Example Name", that row should be returned (since "Example" contains "amp").
Here's my code so far:
def searchResults(self, searchTerm):
    register('parseKey', 'parseRestKey')
    myParseObject = ParseObject()
    allData = myParseObject.Query.filter(name=searchTerm)
    return allData
The problem with this code is it only works if searchTerm is exactly the same as what's in the name column. The Parse REST API says that the queries accept regex parameters, but I'm not sure how to use them in ParsePy.
Yes, you have to use a regex for that. This is how it would look:
allData = myParseObject.Query.filter(name__regex = "<your_regex>")
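For a plain "contains" search, the regex can simply be the escaped search term itself. Here is a hedged sketch of how that could look inside the question's own function (re.escape is the only addition; everything else mirrors the code from the question):

import re

def searchResults(self, searchTerm):
    register('parseKey', 'parseRestKey')
    myParseObject = ParseObject()
    # Escape the user input so regex metacharacters are matched literally,
    # then filter for names containing the term.
    allData = myParseObject.Query.filter(name__regex=re.escape(searchTerm))
    return allData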
I'm trying to create an SQL query that takes records from a File table and a Customer table. A file can have multiple customers. I want to show only one record per File.id and concatenate the last names, in alphabetical order, when the clients' names differ, or show the name only once if they are the same.
Below is a picture of the relationship: [image: Table Relationship]
The results from my query currently look like this: [image: current query output]
I would like the query results to look like this:
File ID    Name
1          Dick Dipe
2          Bill
3          Lola
Originally I tried doing a subquery, but I ran into the issue that there were multiple results and a subquery can only return one. If I could loop and add to an array, I feel like that would work.
If I were to do it in Python, I would write the code below, but when I try to translate that into SQL I get errors: either the subquery can only return one result, or the second name under file two gets cut off.
clients = ['Dick', 'Dipe', 'Bill', 'Lola', 'Lola']
files = [1, 2, 3]
fileDetails = [[1, 0], [1, 1], [2, 2], [3, 3], [3, 4]]

# Group client names by file id.
file_clients = {}
for file_id, client_index in fileDetails:
    if file_id not in file_clients:
        file_clients[file_id] = []
    client_name = clients[client_index]
    file_clients[file_id].append(client_name)

# De-duplicate the names (preserving order) and print one line per file.
for file_id, client_names in file_clients.items():
    client_names = list(dict.fromkeys(client_names))
    client_names_string = " ".join(client_names)
    print(f"File {file_id}: {client_names_string}")
Function that saves Close, Symbol, and Timeframe:
def Save_(self, collection, symbol, price, TF):
    db = self.get_db('MTF')[collection]
    B = {'ts': time.time(), "Symbol": symbol,
         "Price": price, 'TimeFrame': TF}
    data = db.insert_one(B)
    return data
Function to get data from MongoDB:
def find_all(self, collection):
    db = self.get_db('MTF')[collection]
    Symbols = {}
    data = db.find({})
    for i in data:
        Symbols[i['Symbol']] = [i['Price'], i['TimeFrame']]
    return Symbols
Image of the documents in MongoDB: https://i.stack.imgur.com/RLtnz.png
Image of the output of find_all: https://i.stack.imgur.com/AtwSy.png
As you can see in the images, find_all only gives me one timeframe per symbol, but the Save_ function stored 4 timeframes.
Looking at this loop:
for i in data:
    Symbols[i['Symbol']] = [i['Price'], i['TimeFrame']]
If the same Symbol comes from MongoDB more than once, it will overwrite any previous value, so you only get the final value for each Symbol, which is what you are seeing.
To fix it you have a few options: you could check the key and either create or append the values to Symbols; or you could use $push in an aggregate query.
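If it helps, here is a minimal sketch of the first option (check the key and append), assuming the same document fields as in the question; setdefault creates the list the first time a symbol is seen and appends on every later hit instead of overwriting:

def find_all(self, collection):
    db = self.get_db('MTF')[collection]
    Symbols = {}
    for i in db.find({}):
        # Accumulate every (price, timeframe) pair per symbol
        # instead of keeping only the last one.
        Symbols.setdefault(i['Symbol'], []).append([i['Price'], i['TimeFrame']])
    return Symbols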
I am developing a web application with Flask, Python, SQLAlchemy, and MySQL.
I have 2 tables:
TaskUser:
- id
- id_task (foreign key of id column of table Task)
- message
Task:
- id
- id_type_task
I need to extract all the task users (from TaskUser) whose id_task is in a specific list of Task ids.
For example, all the task users where id_task is in (1, 2, 3, 4, 5).
Once I get the result, I do some stuff with it, depending on some conditions.
When I make this request:
all_tasksuser = TaskUser.query.filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()

for item in all_tasksuser:
    item.message = "something"
    if item.id_type_task == 2:
        # do some stuff
    if item.id_task == 7 or item.id_task == 7:
        # do some stuff
I get this output error:
if item.id_type_task == 2:
AttributeError: 'TaskUser' object has no attribute 'id_type_task'
That is normal, as my SQLAlchemy query only selects from one table, so I can't access the columns of table Task.
BUT I CAN call the columns of TaskUser by their names (see item.id_task).
So I changed my SQLAlchemy query to this:
all_tasksuser = db_mysql.session.query(TaskUser, Task.id, Task.id_type_task) \
    .filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
This time I include the table Task in my query, BUT I CAN'T call the columns by their names; I have to use the column [index] instead.
I get this kind of error message:
AttributeError: 'result' object has no attribute 'message'
The problem is that I have many more columns (around 40) on both tables, so it is too complicated to handle the data by column index.
I need a list of rows with data from the 2 tables, and I need to be able to access that data by column name in a loop.
Do you think it is possible?
The key point leading to the confusion is the fact that when you perform a query for a mapped class like TaskUser, SQLAlchemy will return instances of that class. For example:
q1 = TaskUser.query.filter(...).all() # returns a list of [TaskUser]
q2 = db_mysql.session.query(TaskUser).filter(...).all() # ditto
However, if you specify only specific columns, you will receive just a (special) list of tuples:
q3 = db_mysql.session.query(TaskUser.col1, TaskUser.col2, ...)...
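As a side note (hedged, but this matches the behaviour I have seen): those result tuples are SQLAlchemy's keyed tuples, so you can still read the values positionally or by the selected column's name. A small sketch continuing the hypothetical q3 above (col1 is a placeholder column name):

for row in q3.all():
    print(row[0])     # positional access
    print(row.col1)   # attribute access by the selected column's name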
If you switch your mindset to fully using the ORM paradigm, you will work mostly with objects. In your specific example, the workflow could look like the code below, assuming you have relationships defined on your models:
# model
class Task(...):
    id = ...
    id_type_task = ...

class TaskUser(...):
    id = ...
    id_task = Column(ForeignKey(Task.id))
    message = ...
    task = relationship(Task, backref="task_users")

# query
all_tasksuser = TaskUser.query ...

# work
for item in all_tasksuser:
    item.message = "something"
    if item.task.id_type_task == 2:  # <- CHANGED to navigate the relationship
        # do some stuff
    if item.task.id == 7 or item.task.id == 7:  # <- CHANGED
        # do some stuff
The first error message comes from the fact that query and filter without a join (or any other kind of join) cannot give you columns from both tables. You need to either put both tables into the session query, or join the two tables in order to gather column values from both, so this code:
all_tasksuser = TaskUser.query.filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
needs to look more like this:
all_tasksuser = TaskUser.query.join(Task) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
or like this:
all_tasksuser = session.query(TaskUser, Task).filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
Another thing is that the data will be structured differently, so in the first example you will need two for loops:
for taskuser in all_taskuser:
    for task in taskuser.task:
        # to reference id_type_task: task.id_type_task
and in the second example each result is a tuple, so the for loop should look like this:
for taskuser, task in all_taskuser:
    # to reference id_type_task: task.id_type_task
NOTE: I haven't checked all these examples, so there may be errors, but the concepts are there. For more info, please refer to this page:
https://www.tutorialspoint.com/sqlalchemy/sqlalchemy_orm_working_with_joins.htm
I have two lists: one contains the column names of the categorical variables and the other the numeric ones, as shown below.
cat_cols = ['stat','zip','turned_off','turned_on']
num_cols = ['acu_m1','acu_cnt_m1','acu_cnt_m2','acu_wifi_m2']
These are the columns names in a table in Redshift.
I want to pass these as a parameter to pull only the numeric columns from a table in Redshift (PostgreSQL), write that into a CSV, and close the CSV.
Next I want to pull only cat_cols, open the CSV, append to it, and close it.
My query so far:
#1.Pull num data:
seg = ['seg1','seg2']
sql_data = str(""" SELECT {num_cols} """ + """FROM public.""" + str(seg) + """ order by random() limit 50000 ;""")
df_data = pd.read_sql(sql_data, cnxn)
# Write to csv.
df_data.to_csv("df_sample.csv",index = False)
#2.Pull cat data:
sql_data = str(""" SELECT {cat_cols} """ + """FROM public.""" + str(seg) + """ order by random() limit 50000 ;""")
df_data = pd.read_sql(sql_data, cnxn)
# Append to df_seg.csv and close the connection to csv.
with open("df_sample.csv",'rw'):
## Append to the csv ##
This is the first time I am trying to do selective querying based on Python lists, and I am stuck on how to pass a list as the column names to select from the table.
Can someone please help me with this?
If you want to build the query as a string, in your case it is better to use the format method or f-strings (which require Python 3.6+).
Here is an example for your case, using only the built-in format function:
seg = ['seg1', 'seg2']
num_cols = ['acu_m1','acu_cnt_m1','acu_cnt_m2','acu_wifi_m2']
query = """
SELECT {} FROM public.{} order by random() limit 50000;
""".format(', '.join(num_cols), seg)
print(query)
If you want to use only one item from the seg list, pass seg[0] or seg[1] to the format function.
I hope this will help you!
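For the CSV part of the question, here is a minimal sketch under the same assumptions (cnxn is your open connection, seg[0] is the table you want, and the file name is the one from the question). Pandas can append with mode='a', so you do not need to reopen the file yourself:

import pandas as pd

num_query = "SELECT {} FROM public.{} order by random() limit 50000;".format(', '.join(num_cols), seg[0])
cat_query = "SELECT {} FROM public.{} order by random() limit 50000;".format(', '.join(cat_cols), seg[0])

# First pull: numeric columns, written with a header.
pd.read_sql(num_query, cnxn).to_csv("df_sample.csv", index=False)

# Second pull: categorical columns, appended to the same file.
# Note the two pulls select different columns, so you may prefer to keep
# the header here (or write a second file) depending on what you need downstream.
pd.read_sql(cat_query, cnxn).to_csv("df_sample.csv", mode="a", index=False)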
I am using SQLAlchemy 0.8 and I want to get the column names of the input only, not all the columns in the table.
Here is the code:
rec = raw_input("Enter keyword to search: ")
res = session.query(test.__table__).filter(test.fname == rec).first()
data = ','.join(map(str, res)) + ","
print data

# Saw this here on SO, but it's not what I wanted: it displays all of the columns.
columns = [m.key for m in data.columns]
print columns
You can just query for the columns you want. For example, if you had some model MyModel, you could do:
session.query(MyModel.wanted_column1, ...) ... # rest of the query
This would select only the columns mentioned there.
You can also use the select syntax.
Or, if you still want the model object to be returned but certain columns left unloaded, you can use deferred column loading.
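As a hedged sketch of the first suggestion applied to the code from the question (fname plus a placeholder second column called lname; swap in whichever columns you actually want), the result of such a query is a keyed tuple, so its keys() give you exactly the selected column names rather than every column on the table:

rec = raw_input("Enter keyword to search: ")

# Query only the wanted columns; `lname` is a placeholder, not a real column from the question.
res = session.query(test.fname, test.lname).filter(test.fname == rec).first()

if res is not None:
    data = ','.join(map(str, res)) + ","
    print data
    # Only the names of the columns that were selected:
    print list(res.keys())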