How can I enrich relations in ViUR with further data fields?
With SQLAlchemy I can do this with Association Objects.
Is this also possible in ViUR?
I have tried the following:
skeleton.relation.setBoneValue(skeleton, "relation", {"key": keyObj, "afield": "avalue"}, True)
But this does not work.
The third parameter of setBoneValue must be a tuple containing the keyObj as the first value and a RelSkel instance as the second value.
So the correct way looks like this:
class exampleRelSkel(RelSkel):
    afield = stringBone(descr="a Field Description")

myRelSkel = exampleRelSkel()
myRelSkel["afield"] = "avalue"  # set the value
skeleton.relation.setBoneValue(skeleton, "relation", (keyObj, myRelSkel), True)
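For this to work, the bone on the containing skeleton presumably has to reference the RelSkel via its using parameter so that the extra fields are stored alongside the relation. A minimal sketch (the skeleton class and the kind name "example" are placeholders, not taken from the question):

class exampleSkel(Skeleton):
    # the RelSkel above supplies the per-relation data ("afield")
    relation = relationalBone(kind="example", using=exampleRelSkel, descr="Relation")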
How can I use dict_to_model() to populate (insert data into) a table from dictionary values without changing the model attributes? I have a model, Video, with attributes:
class Video(Model):
    id = IntegerField()
    camera = CharField()
    channel = IntegerField()
    filename = CharField()
I also have a dictionary of data:
data = {'video_id': 1234, 'camera_name': "canon", 'channel_id': 5, 'video_name': "intro.mp4"}
Each key:value pair in the dict corresponds to a column and its data. From what I understand, I can use the dict_to_model function from playhouse.shortcuts to map the dict keys to table columns and act on the values inside the dict. How can I do this without changing the names of my class attributes? The only way I could get this to work was by changing Video.id to Video.video_id so it matches the dictionary and so on. Even then, a .save() statement does not push the data to a table. If I do not change my model attributes, I get:
AttributeError: Unrecognized attribute "video_id" for model class <Model: Video>
If I change my attributes to match the dictionary, it accepts the mapping but will not send the data to the table.
The dict keys and the model fields should match, so you have to rename either the dict keys or the model fields.
Also take a look at the "Storing Data" section in the peewee quickstart:
uncle_bob = Person(name='Bob', birthday=date(1960, 1, 15))
uncle_bob.save() # bob is now stored in the database
After converting the dict to a model, you probably still need to call the .save() method.
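If renaming the model attributes is not an option, one workaround is to rename the dict keys before handing them to dict_to_model. A rough sketch (the key_map below is an assumption, not part of the question):

from playhouse.shortcuts import dict_to_model

# Hypothetical mapping from the incoming dict keys to the Video field names
key_map = {'video_id': 'id', 'camera_name': 'camera',
           'channel_id': 'channel', 'video_name': 'filename'}

data = {'video_id': 1234, 'camera_name': "canon",
        'channel_id': 5, 'video_name': "intro.mp4"}

video = dict_to_model(Video, {key_map[k]: v for k, v in data.items()})
video.save(force_insert=True)  # force_insert is typically needed when the primary key is set manually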
I'm looking to generate (export) a CSV from a Flask-SQLAlchemy app I'm developing, but I'm getting some unexpected outcomes in my CSV: instead of the actual data from the MySQL DB table, the CSV file is populated with the declarative class model entries (placeholders??). The issue could be the way I structured the query, or even the entire function.
Oddly enough, judging from the CSV output (pic), it would seem I'm on the right track, since the row/column count matches the DB table, but the actual data is just not populated. I'm fairly new to SQLAlchemy ORM and Flask, so I'm looking for some guidance here. Constructive feedback appreciated.
# class declaration with DB object (divo)
class pearl(divo.Model):
    __tablename__ = 'users'
    work_id = divo.Column(divo.Integer, primary_key=True)
    user_fname = divo.Column(divo.String(length=255))
    user_lname = divo.Column(divo.String(length=255))
    user_category = divo.Column(divo.String(length=255))
    user_status = divo.Column(divo.String(length=1))
    login_id = divo.Column(divo.String(length=255))
    login_passwd = divo.Column(divo.String(length=255))

# user report function
@app.route("/reports/users")
def users_report():
    with open(r'C:\Users\Xxxxxxx\Projects\_repository\zzz.csv', 'w') as s_key:
        x15 = pearl.query.all()
        for i in x15:
            # x16 = tuple(x15)
            csv_out = csv.writer(s_key)
            csv_out.writerow(x15)
    flash("Report generated. Please check designated repository.", "green")
    return redirect(url_for('reports_landing'))  # return redirect(url_for('other_tasks'))
# CSV outcome (see attached pic)
instead of the actual data from the MySQL DB table, the CSV file is populated with the declarative class model entries (placeholders??)
Each object in the list
x15 = pearl.query.all()
represents a row in your users table.
What you're seeing in the spreadsheet are not placeholders, but the string representations of each row object (see object.__repr__).
You could get the value of a column for a particular row object by the column name attribute, for example:
x15[0].work_id # Assumes there is at least one row object in x15
What you could do instead is something like this:
with open(r'C:\Users\Xxxxxxx\Projects\_repository\zzz.csv', 'w') as s_key:
    x15 = divo.session.query(pearl.work_id, pearl.user_fname)  # Add columns to the query as needed
    for i in x15:
        csv_out = csv.writer(s_key)
        csv_out.writerow(i)
i in the code above is a tuple of the form:
('work_id value', 'user_fname value')
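If you want every column plus a header row, something along these lines should also work (a sketch building on the answer above, not tested against the original app; newline='' avoids blank lines on Windows):

import csv

columns = pearl.__table__.columns.keys()  # ['work_id', 'user_fname', ...]

with open(r'C:\Users\Xxxxxxx\Projects\_repository\zzz.csv', 'w', newline='') as s_key:
    csv_out = csv.writer(s_key)
    csv_out.writerow(columns)  # header row
    for row in divo.session.query(*[getattr(pearl, c) for c in columns]):
        csv_out.writerow(row)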
I am developing a web application with Flask, Python, SQLAlchemy, and MySQL.
I have 2 tables:
TaskUser:
- id
- id_task (foreign key of id column of table Task)
- message
Task
- id
- id_type_task
I need to extract all the taskusers (from TaskUser) where the id_task is in a specific list of Task ids.
For example, all the taskusers where id_task is in (1, 2, 3, 4, 5).
Once I get the result, I do some stuff with it and apply some conditions.
When I make this request:
all_tasksuser = TaskUser.query.filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()

for item in all_tasksuser:
    item.message = "something"
    if item.id_type_task == 2:
        # do some stuff
    if item.id_task == 7 or item.id_task == 7:
        # do some stuff
I get this output error:
if item.id_type_task == 2:
AttributeError: 'TaskUser' object has no attribute 'id_type_task'
That is normal, as my SQLAlchemy request queries only one table, so I can't access the columns of table Task.
BUT I CAN call the columns of TaskUser by their names (see item.id_task).
So I change my SQLAlchemy to this:
all_tasksuser = db_mysql.session.query(TaskUser, Task.id, Task.id_type_task) \
    .filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
This time I include the table Task in my query, BUT I CAN'T call the columns by their names; I have to use the columns' [index] instead.
I get this kind of error message:
AttributeError: 'result' object has no attribute 'message'
The problem is that I have many more columns (around 40) across both tables, and it is too complicated to handle the data by column index.
I need to have a list of rows with data from 2 different tables and I need to be able to call the data by column name in a loop.
Do you think it is possible?
The key point leading to the confusion is that when you query for a mapped class like TaskUser, SQLAlchemy returns instances of that class. For example:
q1 = TaskUser.query.filter(...).all() # returns a list of [TaskUser]
q2 = db_mysql.session.query(TaskUser).filter(...).all() # ditto
However, if you specify only specific columns, you will receive just a (special) list of tuples:
q3 = db_mysql.session.query(TaskUser.col1, TaskUser.col2, ...)...
If you switch your mindset to completely use the ORM paradigm, you will work mostly with objects. In your specific example, the workflow could be similar to below, assuming you have relationships defined on your models:
# model
class Task(...):
    id = ...
    id_type_task = ...

class TaskUser(...):
    id = ...
    id_task = Column(ForeignKey(Task.id))
    message = ...
    task = relationship(Task, backref="task_users")

# query
all_tasksuser = TaskUser.query ...

# work
for item in all_tasksuser:
    item.message = "something"
    if item.task.id_type_task == 2:  # <- CHANGED to navigate the relationship
        # do some stuff
    if item.task.id == 7:  # <- CHANGED (id_task is the FK, so compare against Task.id)
        # do some stuff
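One way the elided query above could look, reusing the in_ filter from the question (a sketch, not the answerer's exact code): with the relationship in place there is no need to join Task just to read its columns, since they are reached through item.task.

all_tasksuser = TaskUser.query.filter(TaskUser.id_task.in_(list_all_tasks)).all()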
The first error message comes from the fact that query and filter without a join cannot give you columns from both tables. You need to either put both tables into the session query, or join the two tables in order to gather column values from both, so this code:
all_tasksuser = TaskUser.query.filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
needs to look more like this:
all_tasksuser = TaskUser.query.join(Task) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
or like this:
all_tasksuser = session.query(TaskUser, Task).filter(TaskUser.id_task == Task.id) \
    .filter(TaskUser.id_task.in_(list_all_tasks), Task.id_type_task).all()
Another thing is that the data will be structured differently. In the first example each result is still a TaskUser object, so you reach the joined Task through a relationship attribute (assuming one is defined, e.g. TaskUser.task):
for taskuser in all_tasksuser:
    # to reference id_type_task: taskuser.task.id_type_task
and in the second example each result is a (TaskUser, Task) tuple, so the for loop should look like this:
for taskuser, task in all_tasksuser:
    # to reference id_type_task: task.id_type_task
NOTE: I haven't checked all these examples, so there may be errors, but the concepts are there. For more info, please refer to this page:
https://www.tutorialspoint.com/sqlalchemy/sqlalchemy_orm_working_with_joins.htm
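Putting the second variant together with the loop from the question, the whole thing could look roughly like this (a sketch under the same caveat as the NOTE above; list_all_tasks and db_mysql.session come from the question):

results = (
    db_mysql.session.query(TaskUser, Task)
    .filter(TaskUser.id_task == Task.id)
    .filter(TaskUser.id_task.in_(list_all_tasks))
    .all()
)

for taskuser, task in results:
    taskuser.message = "something"
    if task.id_type_task == 2:
        pass  # do some stuff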
Assume I have a SQLAlchemy table which looks like:
class Country:
    name = VARCHAR
    population = INTEGER
    continent = VARCHAR
    num_states = INTEGER
My application allows seeing name and population for all Countries, so I have a TextClause which looks like
"select name, population from Country"
I allow raw queries in my application, so I don't have the option of changing this to a selectable.
At runtime, I want to allow my users to choose a field name and a field value to filter on. E.g., a user can say they only want to see name and population for countries where continent is Asia. So I want to dynamically add the filter
.where(Country.c.continent == 'Asia')
But I can't add .where to a TextClause.
Similarly, my user may choose to see name and population for countries where num_states is greater than 10. So I want to dynamically add the filter
.where(Country.c.num_states > 10)
But again I can't add .where to a TextClause.
What are the options I have to solve this problem?
Could a subquery help here in any way?
Add a filter based on the conditions; filter() is how you add WHERE conditions in SQLAlchemy.
Country.query.filter(Country.num_states > 10).all()
You can also do this:
query = Country.query.filter(Country.continent == 'Asia')
if user_input == 'states':
    query = query.filter(Country.num_states > 10)
query = query.all()
This is not doable in a general sense without parsing the query. In relational algebra terms, the user applies projection and selection operations to a table, and you want to apply selection operations to it. Since the user can apply arbitrary projections (e.g. user supplies SELECT id FROM table), you are not guaranteed to be able to always apply your filters on top, so you have to apply your filters before the user does. That means you need to rewrite it to SELECT id FROM (some subquery), which requires parsing the user's query.
However, we can sort of cheat depending on the database that you are using, by having the database engine do the parsing for you. The way to do this is with CTEs, by basically shadowing the table name with a CTE.
Using your example, it looks like the following. User supplies query
SELECT name, population FROM country;
You shadow country with a CTE:
WITH country AS (
    SELECT * FROM country
    WHERE continent = 'Asia'
) SELECT name, population FROM country;
Unfortunately, because of the way SQLAlchemy's CTE support works, it is tough to get it to generate a CTE for a TextClause. The solution is to basically generate the string yourself, using a custom compilation extension, something like this:
from sqlalchemy import select, text
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ClauseElement, Executable

class WrappedQuery(Executable, ClauseElement):
    def __init__(self, name, outer, inner):
        self.name = name
        self.outer = outer
        self.inner = inner

@compiles(WrappedQuery)
def compile_wrapped_query(element, compiler, **kwargs):
    return "WITH {} AS ({}) {}".format(
        element.name,
        compiler.process(element.outer),
        compiler.process(element.inner))

c = Country.__table__
cte = select(["*"]).select_from(c).where(c.c.continent == "Asia")
query = WrappedQuery("country", cte, text("SELECT name, population FROM country"))

session.execute(query)
From my tests, this only works in PostgreSQL. SQLite and SQL Server both treat it as recursive instead of shadowing, and MySQL does not support CTEs.
I couldn't find anything nice for this in the documentation. I ended up resorting to pretty much just string processing... but at least it works!
from sqlalchemy.sql import text

query = """select name, population from Country"""

if continent is not None:
    additional_clause = """ WHERE continent = '{continent}';"""
    query = query + additional_clause
    text_clause = text(
        query.format(
            continent=continent,
        ),
    )
else:
    text_clause = text(query)

with sql_connection() as conn:
    results = conn.execute(text_clause)
You could also chain this logic with more clauses, although you'll have to create a boolean flag for the first WHERE clause and then use AND for the subsequent ones.
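A somewhat safer variation on the same idea is to append the clause with a bound parameter instead of interpolating the value into the string, which also sidesteps quoting issues (a sketch along the lines of the answer above, not the answerer's code; sql_connection comes from that answer):

from sqlalchemy.sql import text

query = "select name, population from Country"
params = {}

if continent is not None:
    query += " WHERE continent = :continent"
    params["continent"] = continent

with sql_connection() as conn:
    results = conn.execute(text(query), params)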
I have a model (declared using the declarative base) called DevicesGpsTelemetry. I make a query like this:
models = session.query(
    DevicesGps.ReceivedDateUtc,
    DevicesGps.ReceivedTimeUtc,
    DevicesGps.Latitude,
    DevicesGps.Longitude)
And it renders as:
SELECT
devices_gps."ReceivedDateUtc" AS "devices_gps_ReceivedDateUtc",
devices_gps."ReceivedTimeUtc" AS "devices_gps_ReceivedTimeUtc",
devices_gps."Latitude" AS "devices_gps_Latitude",
devices_gps."Longitude" AS "devices_gps_Longitude"
FROM devices_gps
My question: how do I change the names that appear after the AS keyword to something I want (like "gps_telemetry_ReceivedDateUtc")?
Background: these names matter to me because I feed this query to pandas.read_sql, and they become the DataFrame's column names.
Add .label('desired_name') after each column. In your case it would look like:
models = session.query(
    DevicesGps.ReceivedDateUtc.label("gps_telemetry_ReceivedDateUtc"),
    DevicesGps.ReceivedTimeUtc.label("gps_telemetry_ReceivedTimeUtc"),
    DevicesGps.Latitude.label("gps_telemetry_Latitude"),
    DevicesGps.Longitude.label("gps_telemetry_Longitude")
)
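Since the end goal is a pandas DataFrame, the labeled query can then be handed to read_sql roughly like this (a sketch; session.bind is assumed to be the engine or connection the session uses):

import pandas as pd

# models is the labeled query from above; .statement exposes the underlying SELECT
df = pd.read_sql(models.statement, session.bind)
print(df.columns)  # gps_telemetry_ReceivedDateUtc, gps_telemetry_ReceivedTimeUtc, ...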