I have this model:
class Type(models.Model):
    type = models.CharField(max_length=50)
    value = models.CharField(max_length=1)
And I have loaded some data into it from an SQL file:
INSERT INTO quest_type (type, value) VALUES ('Noun', '1');
INSERT INTO quest_type (type, value) VALUES ('Adjective', '2');
INSERT INTO quest_type (type, value) VALUES ('Duration', '3');
How do I access these values in the Python shell? For example, if I know the type, how do I get the value (and vice versa)? I'm not sure how the syntax works.
You should be able to get that with:
Type.objects.filter(type=typeImInterestedIn)
A couple of things to be leery of:
-you probably want to avoid manually writing to a DB that you're accessing through an ORM. It just creates potential for mismatches.
-naming an object Type is a little problematic since it's so close to Python's built-in type function.
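For example, a quick sketch in the Django shell covering both directions (assuming the model lives in an app called quest, which is only a guess based on the quest_type table name):

from quest.models import Type  # adjust to wherever your Type model actually lives

# type -> value
noun = Type.objects.get(type='Noun')
print(noun.value)       # '1'

# value -> type
adjective = Type.objects.get(value='2')
print(adjective.type)   # 'Adjective'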
It's unclear from your question how much about databases you understand, so I apologize if this answer is too basic for you (if so, please edit your question to include information about what actual database engine you're using and show some sample code trying to read from the database).
The SQL file you have is not the same as an SQL database. It is a series of commands that will create records in an SQL database. First you must install and configure a database engine on your machine, then "run" that .sql file so that the records are created in the database.
After you have an actual database, you will have to configure Django so that it knows what kind of SQL engine you're using and the name and location of the database.
Finally, once the database is created and Django configured to talk to the engine, you will write python code to instantiate an instance of the Type class, read a record from the database, and inspect the values.
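As a rough sketch of those last two steps (assuming SQLite as the engine and an app called quest, both of which are guesses):

# settings.py -- ENGINE and NAME depend on the database engine you actually installed
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',   # BASE_DIR is defined at the top of settings.py
    }
}

# then, in `python manage.py shell`
from quest.models import Type
row = Type.objects.first()        # read one record from the table
print(row.type, row.value)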
Also, let me point out that Type is a really, really bad name for a class in any programming language, and type and value are both bad names for columns in SQL databases.
If you are using the Python shell from Django (python manage.py shell), first you have to import your model into your namespace, so type: from my_app.models import Type.
Now, if you want to get only one object from the db, the syntax is:
result = Type.objects.get(type='your_query')
If you want to fetch more than one object, the syntax goes like this:
result = Type.objects.filter(type='your_query')
The second method returns a QuerySet (which you can treat like a list) instead of a single object.
To loop through the results after using filter, write:
for item in result:
    print(item.value)  # prints the value from each matched row
def update_inv_quant():
    new_quant = int(input("Enter the updated quantity in stock: "))
Hello! I'm wondering how to insert a user variable into an SQL statement so that a record is updated to said variable. Also, it'd be really helpful if you could help me figure out how to print records of the database into the actual Python console. Thank you!
I tried doing something like ("INSERT INTO Inv(ItemName) Value {user_iname)") but I'm not surprised it didn't work.
It would have been more helpful if you had specified an actual database.
First method (Bad)
The usual way (which is highly discouraged, as Graybeard said in the comments) is using Python's f-strings. You can google what they are and how to use them in more depth.
But basically, say you have two variables user_id = 1 and user_name = 'fish'; an f-string like f"INSERT INTO mytable(id, name) values({user_id},'{user_name}')" turns into the string INSERT INTO mytable(id,name) values(1,'fish').
As mentioned before, this leaves you open to something called SQL injection. There are many good YouTube videos that demonstrate what that is and why it's dangerous.
Second method
The second method depends on what database you are using. For example, in psycopg2 (a driver for the PostgreSQL database), the cursor.execute method uses the following syntax to pass variables: cur.execute('SELECT id FROM users WHERE cookie_id = %s', (cookieid,)). Notice that the variables are passed in a tuple as the second argument.
All database drivers use similar methods, with minor differences. For example, I believe sqlite3 uses ? instead of psycopg2's %s. That's why I said that specifying the actual database would have been more helpful.
Fetching records
I am most familiar with PostgreSQL and psycopg2, so you will have to read the docs of your database of choice.
To fetch records, you send the query with cursor.execute() as we said before, and then call cursor.fetchone(), which returns a single row, or cursor.fetchall(), which returns all rows in an iterable that you can print directly.
Execute didn't update the database?
Statements executed through drivers are transactional, which is a whole topic by itself that I'm sure people on the internet can explain better than I can. To keep things short: for the statement to actually change the database, you call connection.commit() after cursor.execute().
So finally to answer both of your questions, read the documentation of the database's driver and look for the execute method.
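For example, here is a sketch with sqlite3 (the table and column names Inv, ItemName and Quantity are assumptions based on your snippet):

import sqlite3

con = sqlite3.connect('inventory.db')   # hypothetical database file
cur = con.cursor()

def update_inv_quant():
    item_name = input("Enter the item name: ")
    new_quant = int(input("Enter the updated quantity in stock: "))
    # the ? placeholders let the driver do the quoting, which avoids SQL injection
    cur.execute("UPDATE Inv SET Quantity = ? WHERE ItemName = ?", (new_quant, item_name))
    con.commit()   # without commit() the change never reaches the database file

update_inv_quant()

# printing records to the console
cur.execute("SELECT ItemName, Quantity FROM Inv")
for row in cur.fetchall():
    print(row)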
This is what I do (it is for sqlite3 and would be similar for other SQL-type databases):
This assumes that you have connected to the database and that the table exists (otherwise you need to create the table). For the purposes of the example, I have used a table called trades.
new_quant = 1000
# insert one record (row)
command = f"""INSERT INTO trades VALUES (
'some_ticker', {new_quant}, other_values, ...
) """
cur.execute(command)
con.commit()
print('trade inserted !!')
You can then wrap the above into your function accordingly.
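If you want to avoid the f-string interpolation that the previous answer warns about, the same insert can be written with placeholders instead (a sketch; the column list for trades is assumed):

command = "INSERT INTO trades (ticker, quantity) VALUES (?, ?)"
cur.execute(command, ('some_ticker', new_quant))
con.commit()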
I have worked in Perl, where I am able to get the newly created data object's ID by passing the result back to a variable. For example:
my $data_obj = $schema->resultset('PersonTable')->create(\%psw_rec_hash);
Where the $data_obj contains the primary key's column value.
I want to be able to do the same thing using Python 3.7, Flask and flask-mysqldb,
but without having to do another query. I want to be able to use the specific
record's primary key column value for another method.
Python and flask-mysqldb inserts data like so:
query = "INSERT INTO PersonTable (fname, mname, lname) VALUES('Phil','','Vil')
cursor = db.connection.cursor()
cursor.execute(query)
db.connection.commit()
cursor.close()
The PersonTable has a primary key column called, id. So, the newly inserted data row would look
like:
23, 'Phil', 'Vil'
Because there are 22 rows of data before the last inserted row, I don't want to perform a search
for the data, because there could be more than one entry with the same data. However, all I want
is the most recent data row.
Can I do something similar to Perl with python 3.7 and flask-mysqldb?
You may want to consider the Flask-SQLAlchemy package to help you with this.
Although the syntax is going to be slightly different from Perl, what you can do is set the model object to a variable when you create it. Then, when you either flush or commit the database session, you can read the primary key attribute on that model object (whether it's "id" or something else) and use it as needed.
SQLAlchemy supports MySQL, as well as several other relational databases. In addition, it is able to help prevent SQL injection attacks so long as you use model objects and add/delete them to your database session, as opposed to straight SQL commands.
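A minimal sketch of that idea with Flask-SQLAlchemy (the Person model, column names and connection URI are assumptions based on your PersonTable example):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'mysql://user:password@localhost/mydb'  # placeholder
db = SQLAlchemy(app)

class Person(db.Model):
    __tablename__ = 'PersonTable'
    id = db.Column(db.Integer, primary_key=True)
    fname = db.Column(db.String(50))
    mname = db.Column(db.String(50))
    lname = db.Column(db.String(50))

person = Person(fname='Phil', mname='', lname='Vil')
with app.app_context():
    db.session.add(person)
    db.session.commit()   # the INSERT happens here
    print(person.id)      # primary key is populated after the commit, no extra query needed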
I'm working on an iOS application and I use Flask (a Python framework) to build my backend.
I store my data in a MySQL database.
Now I need to store a bunch of IDs in one attribute.
First I convert the array which stores the IDs to a JSON-formatted object.
Then I ran into a problem: how do I store this object?
The object can be rather large (I cannot know in advance how many IDs I will store), and SQLAlchemy requires the attribute to have an exact length when I create the table, so how do I determine the length of the attribute?
In case you use MySQL 5.7 or newer
you should look at the new JSON type.
You can use this MySQL feature through SQLAlchemy's types.JSON. This will greatly simplify column data management.
from sqlalchemy import Table, Column, Integer, MetaData, JSON

metadata = MetaData()

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('loosely_related_ids', JSON)
)

# 'engine' is assumed to be an Engine already configured for your MySQL database
with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        loosely_related_ids=[1, 54, 56, 99, 104]
    )
Later on accessing the loosely_related_ids field will return a python array that you access normally.
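Reading the column back is an ordinary select (a sketch continuing the example above):

with engine.connect() as conn:
    row = conn.execute(
        data_table.select().where(data_table.c.id == 1)
    ).fetchone()
    print(row.loosely_related_ids)   # e.g. [1, 54, 56, 99, 104]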
If you are using an older version of MySQL
you should use a TEXT field or a wrapper around a similar type.
SQLAlchemy provides the PickleType field which is implemented on top of a BLOB field and will handle pickling and unpickling the array for you. Keep in mind that all the caveats of pickling python objects and sharing them across interpreters still apply here.
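A sketch of that approach (the names are illustrative only):

from sqlalchemy import Table, Column, Integer, MetaData, PickleType

metadata = MetaData()
legacy_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('loosely_related_ids', PickleType)   # stored as a BLOB; pickled/unpickled for you
)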
I don't quite know the situation you're in, but it's not recommended to store multiple records in one column; it's more normalised to build a relation map between the ID owner and the ID.
For example, you could create a new table called IDs with a schema like this:
id INT AUTO_INCREMENT PRIMARY KEY,
idbla VARCHAR(<your ID length>),
owner INT NOT NULL
When you want to get all the idbla values of some user x, you can use:
SELECT idbla FROM IDs WHERE owner = x
Another choice:
You can use NoSQL (a non-relational database) to store your data. It is document-like and fits your situation pretty well.
I am trying to learn how to use peewee with mysql.
I have an existing database on a mysql server with an existing table. The table is currently empty (I am just testing right now).
>>> db = MySQLDatabase('nhl', user='root', passwd='blahblah')
>>> db.connect()
>>> class schedule(Model):
... date = DateField()
... team = CharField()
... class Meta:
... database = db
>>> test = schedule.select()
>>> test
<class '__main__.schedule'> SELECT t1.`id`, t1.`date`, t1.`team` FROM `nhl` AS t1 []
>>> test.get()
I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/site-packages/peewee.py", line 1408, in get
return clone.execute().next()
File "/usr/lib/python2.6/site-packages/peewee.py", line 1437, in execute
self._qr = QueryResultWrapper(self.model_class, self._execute(), query_meta)
File "/usr/lib/python2.6/site-packages/peewee.py", line 1232, in _execute
return self.database.execute_sql(sql, params, self.require_commit)
File "/usr/lib/python2.6/site-packages/peewee.py", line 1602, in execute_sql
res = cursor.execute(sql, params or ())
File "/usr/lib64/python2.6/site-packages/MySQLdb/cursors.py", line 201, in execute
self.errorhandler(self, exc, value)
File "/usr/lib64/python2.6/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1054, "Unknown column 't1.id' in 'field list'")
Why is peewee adding the 'id' column into the select query? I do not have an id column in the table that already exists in the database. I simply want to work with the existing table and not depend on peewee having to create one every time I want to interact with the database. This is where I believe the error is.
The result of the query should be empty since the table is empty but since I am learning I just wanted to try out the code. I appreciate your help.
EDIT
Based on the helpful responses by Wooble and Francis, I have come to wonder whether it even makes sense for me to use peewee or another ORM like SQLAlchemy. What are the benefits of using an ORM instead of just running direct queries in Python using MySQLdb?
This is what I expect to be doing:
-automatically downloading data from various web servers. Most of the data is in xls or csv format. I can convert the xls into csv using the xlrd package.
-parsing/processing the data in list objects before inserting/bulk-inserting into a mysql db table.
-running complex queries to export data from MySQL into Python in appropriate data structures (lists, for example) for various statistical computations that are easier to do in Python than in MySQL. Anything that can be done in MySQL will be done there, but I may run complex regressions in Python.
-running various graphical packages on the data retrieved from queries. Some of this may include using the ggplot2 package (from the R project), which is an advanced graphical package, so it will involve some R/Python integration.
Given the above - is it best that I spend the hours hacking away to learn ORM/Peewee/SQLAlchemy or stick to direct mysql queries using MySQLdb?
Most simple active-record pattern ORMs need an id column to track object identity. PeeWee appears to be one of them (or at least I am not aware of any way to not use an id). You probably can't use PeeWee without altering your tables.
Your existing table doesn't seem to be very well designed anyway, since it appears to lack a key or compound key. Every table should have a key attribute - otherwise it is impossible to distinguish one row from another.
If one of these columns is a primary key, try adding a primary_key=True argument, as explained in the docs concerning non-integer primary keys:
date = DateField(primary_key=True)
If your primary key is not named id, then you must set your table's actual primary key to a type of "PrimaryKeyField()" in your peewee Model for that table.
You should investigate SQLAlchemy, which uses a data-mapper pattern. It's much more complicated, but also much more powerful. It doesn't place any restrictions on your SQL table design, and in fact it can automatically reflect your table structure and interrelationships in most cases. (Maybe not as well in MySQL since foreign key relationships are not visible in the default table engine.) Most importantly for you, it can handle tables which lack a key.
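For instance, reflecting an existing table so you don't have to declare its columns by hand looks roughly like this (a sketch; the connection URL is a placeholder, and on older SQLAlchemy versions the reflection argument is spelled autoload=True instead):

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine('mysql://user:password@localhost/nhl')   # placeholder URL
metadata = MetaData()
# pull the column definitions straight from the existing table
schedule = Table('schedule', metadata, autoload_with=engine)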
If your primary key column is named something other than 'id', you should add an additional field to that table's model class:
class Table(BaseModel):
    id_field = PrimaryKeyField()
That will tell your script that your table's primary key is stored in the column named 'id_field', and that the column is an INT type with auto-increment enabled.
Here is the documentation describing field types in peewee.
If you want more control over your primary key field, then, as already pointed out by Francis Avila, you should use the primary_key=True argument when creating the field:
class Table(BaseModel):
    id_field = CharField(primary_key=True)
See this link on non-integer primary keys documentation
You have to provide a primary_key field for this model.
If your table doesn't have a single primary_key field (just like mine), a CompositeKey defined in Meta will help:
primary_key = peewee.CompositeKey('date', 'team')
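For example, a sketch based on the schedule model from the question:

import peewee

class Schedule(peewee.Model):
    date = peewee.DateField()
    team = peewee.CharField()

    class Meta:
        database = db
        # no automatic 'id' column; the (date, team) pair identifies a row
        primary_key = peewee.CompositeKey('date', 'team')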
You need to use peewee's create_table() method to create the actual database table before you can call select(); that will create an id column in the table.
python 2.7
pyramid 1.3a4
sqlalchemy 7.3
sqlite 3.7.9
from sqlite prompt > I can do:
insert into risk(travel_dt) values ('')
also
insert into risk(travel_dt) values(Null)
Both result in a new row with a null value for risk.travel_dt, but when I try those travel_dt values from Pyramid, SQLAlchemy gives me an error.
In the first case, I get sqlalchemy.exc.StatementError:
SQLite Date type only accepts python date objects as input
In the second case, I get "Null is not defined". When I use "Null", I get the first-case error.
I apologize for another question on nulls: I have read a lot of material but must have missed something simple. Thanks for any help
Clemens Herschel
While you didn't provide any insight into the table definition you're using or any example code, I am guessing the issue is due to confusing NULL (the database reserved word) with None (the Python reserved word).
The error message is telling you that you need to call your SQLAlchemy methods with valid Python date objects, rather than strings such as "Null" or ''.
Assuming you have a Table called risk containing a Column called travel_dt, you should be able to create a row in that table with something like:
risk.insert().values(travel_dt=None)
Note that this is just a snippet, you would need to execute such a call within an engine context like that defined in the SA Docs SQL Expression Language Tutorial.
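A minimal sketch of that, assuming a Table named risk has already been defined with travel_dt as a Date column:

from sqlalchemy import create_engine

engine = create_engine('sqlite:///risk.db')   # placeholder URL
with engine.begin() as conn:                  # begin() commits the transaction on exit
    # None on the Python side is stored as NULL on the SQLite side
    conn.execute(risk.insert().values(travel_dt=None))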