I am creating my first database project with SQLAlchemy and SQLite. I want to connect two entities as in a relational model. Here is the source:
class Models(Base):
    __tablename__ = "models"

    id_model = Column(Integer, primary_key=True)
    name_of_model = Column(String, nullable=False)
    price = Column(Integer, nullable=False)

    def __init__(self, name_of_model):
        self.name_of_model = name_of_model

class Cars(Base):
    __tablename__ = "cars"

    id_car = Column(Integer, primary_key=True)
    id_equipment = Column(Integer, nullable=False)
    id_package = Column(Integer, nullable=False)
    id_model = Column(Integer, ForeignKey('Models'))
    model = relationship("Models", backref=backref('cars', order_by=id_model))
I want to achieve a relationship like this:
https://imgur.com/af62zli
The error which occurs is:
The foreign key associated with column 'cars.id_model' could not find table 'Models' with which to generate a foreign key to target column 'None'.
Any ideas how to solve this problem?
From the docs:
The argument to ForeignKey is most commonly a string of the form
<tablename>.<columnname>, or for a table in a remote schema or “owner”
of the form <schemaname>.<tablename>.<columnname>. It may also be an
actual Column object...
In defining your ForeignKey on Cars.id_model you pass the string form of a class name ('Models') which is not an accepted form.
However, you can successfully define your foreign key using one of the below options:
ForeignKey(Models.id_model)
This uses the actual Column object to specify the foreign key. The disadvantage of this method is that you need the column object in your namespace, which adds complexity: you may need to import the model into the module if it isn't defined there, and you may need to care about the order in which your models are defined. This is why it's more common to use one of the string-based options, such as:
ForeignKey('models.id_model')
Notice that this example doesn't include the string version of the class name (not Models.id_model) but rather the string version of the table name. The string version means that table objects required are only resolved when needed and as such avoid the complexities of dealing with Column objects themselves.
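Applied to the example in the question, the corrected Cars definition could look like this (a sketch only, keeping the same Base, imports and column names as in the question):
class Cars(Base):
    __tablename__ = "cars"

    id_car = Column(Integer, primary_key=True)
    id_equipment = Column(Integer, nullable=False)
    id_package = Column(Integer, nullable=False)
    # reference the table name "models", not the class name "Models"
    id_model = Column(Integer, ForeignKey('models.id_model'))
    model = relationship("Models", backref=backref('cars', order_by=id_model))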
Another interesting example that works in this case:
ForeignKey('models')
If the two columns are named the same on both tables, SQLAlchemy seems to infer the column from the table. If you alter the name of either of the id_model column definitions in your example so that they are named differently, this will cease to work. I haven't found this to be well documented and it is less explicit, so I'm not sure it's really worth using; I mention it for completeness and because I found it interesting. A comment in the source code of ForeignKey._column_tokens() seemed to be more explicit than the docs with respect to the acceptable formats of the column argument:
# A FK between column 'bar' and table 'foo' can be
# specified as 'foo', 'foo.bar', 'dbo.foo.bar',
# 'otherdb.dbo.foo.bar'. Once we have the column name and
# the table name, treat everything else as the schema
# name.
I am defining a table with Flask-SQLAlchemy. One particular field name starts with a number, as below:
class Foo(db.Model):
    6F78 = db.Column(db.String(10))
The field name 6F78 causes SyntaxError: invalid syntax. But the field name can't be changed to another name, as it is fixed.
So, what should I do? Thanks!
Python identifiers can't start with a number, which is why you can't create a member on an object named 6F78.
You can, however, point your in-code representation of that column at a database column with a different name. Try this:
class Foo(db.Model):
    my_column_name_6F78 = db.Column("6F78", db.String())
Then in code you refer to my_column_name_6F78 instead of 6F78. Of course you could choose a more concise name for the column in your code, like c6F78.
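As a quick illustration (a hypothetical sketch; it assumes Foo also defines a primary key column, which SQLAlchemy requires in order to map the class), you use the Python attribute name in code while the database column is still created as 6F78:
class Foo(db.Model):
    id = db.Column(db.Integer, primary_key=True)            # assumed primary key
    my_column_name_6F78 = db.Column("6F78", db.String(10))

foo = Foo(my_column_name_6F78="abc")                         # Python attribute name
print([c.name for c in Foo.__table__.columns])               # ['id', '6F78']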
I have no experience with sqlalchemy, and I have the following code:
class ForecastBedSetting(Base):
    __tablename__ = 'forecast_bed_settings'
    __table_args__ = (
        Index('idx_forecast_bed_settings_forecast_id_property_id_beds',
              'forecast_id', 'property_id', 'beds'),
    )

    forecast_id = Column(ForeignKey(u'forecasts.id'), nullable=False)
    property_id = Column(ForeignKey(u'properties.id'), nullable=False, index=True)
    # more definition of columns
Although I have checked this, I cannot understand what the purpose of __table_args__ is, so I have no clue what this part is doing:
__table_args__ = (
    Index('idx_forecast_bed_settings_forecast_id_property_id_beds',
          'forecast_id', 'property_id', 'beds'),
)
Could somebody please explain me what is the purpose of __table_args__, and what the previous piece of code is doing.
This attribute accommodates both positional as well as keyword arguments that are normally sent to the Table constructor.
During construction of a declarative model class – ForecastBedSetting in your case – the metaclass that comes with Base creates an instance of Table. The __table_args__ attribute allows passing extra arguments to that Table. The created table is accessible through
ForecastBedSetting.__table__
The code defines an index inline, passing the Index instance to the created Table. The index uses string names to identify columns, so without being passed to a table SQLAlchemy could not know what table it belongs to.
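In other words, the declarative class above ends up roughly equivalent to building the Table by hand and passing the Index positionally (a sketch only; the column types and the abbreviated column list are assumptions):
from sqlalchemy import Table, Column, Integer, ForeignKey, Index, MetaData

metadata = MetaData()

forecast_bed_settings = Table(
    'forecast_bed_settings', metadata,
    Column('forecast_id', Integer, ForeignKey('forecasts.id'), nullable=False),
    Column('property_id', Integer, ForeignKey('properties.id'), nullable=False, index=True),
    # ... more columns ...
    Index('idx_forecast_bed_settings_forecast_id_property_id_beds',
          'forecast_id', 'property_id', 'beds'),
)
Note that __table_args__ can also be a plain dict of keyword arguments (for example {'mysql_engine': 'InnoDB'}), or a tuple whose last element is such a dict.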
I'm defining a sqlalchemy model like this:
class Dog(Model):
    __tablename__ = 'dog'

    id = db.Column(db.Integer, primary_key=True)
    owner = db.Column(db.Integer, db.ForeignKey('owner.id'))
I want to use the inspector to figure out a type for each attribute, however I'm having trouble figuring out how to access the things I want, which include (a) a type for each attribute and (b) all of the other properties that I've passed into Column.
I tried the following:
for column in inspect(target_class).columns:
    print(column.type)
This returns:
INTEGER
INTEGER
but really I'd like something like
INTEGER
FOREIGN_KEY
or at least some way to recognize that I'm using the second Integer to identify another table.
What is the most correct way to do this? If possible, I'm also interested in grabbing all of the kwargs that I passed into db.Column.
You can look at the foreign_keys property on Column:
for column in inspect(Dog).columns:
    print(column.foreign_keys)

# set([])
# set([ForeignKey('owner.id')])
There are other properties set from kwargs, like primary_key:
for column in inspect(Dog).columns:
    print(column.primary_key)

# True
# False
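Putting the two together, a minimal sketch (assuming the Dog model above) that flags foreign key columns the way the question asks:
from sqlalchemy import inspect

for column in inspect(Dog).columns:
    if column.foreign_keys:
        kind = "FOREIGN_KEY"
    else:
        kind = str(column.type)
    print(column.name, kind, column.primary_key)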
This is my declarative model:
import datetime
from sqlalchemy import Column, Integer, DateTime
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Test(Base):
    __tablename__ = 'test'

    id = Column(Integer, primary_key=True)
    created_date = DateTime(default=datetime.datetime.utcnow)
However, when I try to import this module, I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "orm/models2.py", line 37, in <module>
class Test(Base):
File "orm/models2.py", line 41, in Test
created_date = sqlalchemy.DateTime(default=datetime.datetime.utcnow)
TypeError: __init__() got an unexpected keyword argument 'default'
If I use an Integer type, I can set a default value. What's going on?
Calculate timestamps within your DB, not your client
For sanity, you probably want to have all datetimes calculated by your DB server, rather than the application server. Calculating the timestamp in the application can lead to problems because network latency is variable, clients experience slightly different clock drift, and different programming languages occasionally calculate time slightly differently.
SQLAlchemy allows you to do this by passing func.now() or func.current_timestamp() (they are aliases of each other) which tells the DB to calculate the timestamp itself.
Use SQLAlchemy's server_default
Additionally, for a default where you're already telling the DB to calculate the value, it's generally better to use server_default instead of default. This tells SQLAlchemy to pass the default value as part of the CREATE TABLE statement.
For example, if you write an ad hoc script against this table, using server_default means you won't need to worry about manually adding a timestamp call to your script--the database will set it automatically.
Understanding SQLAlchemy's onupdate/server_onupdate
SQLAlchemy also supports onupdate so that anytime the row is updated it inserts a new timestamp. Again, best to tell the DB to calculate the timestamp itself:
from sqlalchemy.sql import func
time_created = Column(DateTime(timezone=True), server_default=func.now())
time_updated = Column(DateTime(timezone=True), onupdate=func.now())
There is a server_onupdate parameter, but unlike server_default, it doesn't actually set anything server-side. It just tells SQLAlchemy that your database will change the column when an update happens (perhaps you created a trigger on the column), so SQLAlchemy will ask for the returned value so that it can update the corresponding object.
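For completeness, a hedged sketch of what server_onupdate might look like when a database trigger maintains the column (FetchedValue is SQLAlchemy's marker for "the server computes this value"; the trigger itself is assumed to already exist):
from sqlalchemy import Column, DateTime, FetchedValue
from sqlalchemy.sql import func

time_updated = Column(
    DateTime(timezone=True),
    server_default=func.now(),
    server_onupdate=FetchedValue(),  # value is set by a database trigger, not by SQLAlchemy
)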
One other potential gotcha:
You might be surprised to notice that if you make a bunch of changes within a single transaction, they all have the same timestamp. That's because the SQL standard specifies that CURRENT_TIMESTAMP returns values based on the start of the transaction.
PostgreSQL provides the non-SQL-standard statement_timestamp() and clock_timestamp() which do change within a transaction. Docs here: https://www.postgresql.org/docs/current/static/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT
UTC timestamp
If you want to use UTC timestamps, a stub implementation of func.utcnow() is provided in the SQLAlchemy documentation. You need to provide the appropriate driver-specific functions yourself, though.
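For reference, the documented recipe looks roughly like this (a sketch; the PostgreSQL branch is the one shown in the docs, and other dialects need their own @compiles functions):
from sqlalchemy.sql import expression
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.types import DateTime

class utcnow(expression.FunctionElement):
    type = DateTime()
    inherit_cache = True

@compiles(utcnow, 'postgresql')
def pg_utcnow(element, compiler, **kw):
    return "TIMEZONE('utc', CURRENT_TIMESTAMP)"

# usage: created_at = Column(DateTime, server_default=utcnow())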
DateTime doesn't take a default keyword argument. The default should be passed to the Column function instead. Try this:
import datetime
from sqlalchemy import Column, Integer, DateTime
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Test(Base):
    __tablename__ = 'test'

    id = Column(Integer, primary_key=True)
    created_date = Column(DateTime, default=datetime.datetime.utcnow)
You can also use SQLAlchemy's built-in func for the default DateTime:
from sqlalchemy.sql import func
DT = Column(DateTime(timezone=True), default=func.now())
You likely want to use onupdate=datetime.now so that UPDATEs also change the last_updated field.
SQLAlchemy has two defaults for Python-executed functions:
default sets the value on INSERT, only once
onupdate sets the value to the callable result on UPDATE as well.
Using the default parameter with datetime.now:
from sqlalchemy import Column, Integer, DateTime
from datetime import datetime
class Test(Base):
    __tablename__ = 'test'

    id = Column(Integer, primary_key=True)
    created_at = Column(DateTime, default=datetime.now)
    updated_at = Column(DateTime, default=datetime.now, onupdate=datetime.now)
The default keyword parameter should be given to the Column object.
Example:
Column(u'timestamp', TIMESTAMP(timezone=True), primary_key=False, nullable=False, default=time_now),
The default value can be a callable, which I defined here as follows:
from pytz import timezone
from datetime import datetime

UTC = timezone('UTC')

def time_now():
    return datetime.now(UTC)
For MariaDB, this is what worked for me:
from sqlalchemy import Column, Integer, String, DateTime, TIMESTAMP, text
from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Test(Base):
    __tablename__ = "test"

    id = Column(Integer, primary_key=True, autoincrement=True)
    name = Column(String(255), nullable=False)
    email = Column(String(255), nullable=False)
    created_at = Column(TIMESTAMP, nullable=False, server_default=func.now())
    updated_at = Column(DateTime, server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"))
In the SQLAlchemy documentation for MariaDB, it is recommended to import text from sqlalchemy itself and set the server_default with text(), inserting the custom command:
updated_at = Column(DateTime, server_default=text("CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"))
To understand func.now, you can read the SQLAlchemy documentation.
Hope I helped in some way.
As per PostgreSQL documentation:
now, CURRENT_TIMESTAMP, LOCALTIMESTAMP return the time of transaction
This is considered a feature: the intent is to allow a single
transaction to have a consistent notion of the "current" time, so that
multiple modifications within the same transaction bear the same time stamp.
You might want to use statement_timestamp or clock_timestamp if you don't want the transaction timestamp.
statement_timestamp()
returns the start time of the current statement (more specifically,
the time of receipt of the latest command message from the client).
clock_timestamp()
returns the actual current time, and therefore its value changes even
within a single SQL command.
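In SQLAlchemy you can reach these through func like any other SQL function, e.g. (a sketch, assuming PostgreSQL, since clock_timestamp() is PostgreSQL-specific):
from sqlalchemy import Column, DateTime
from sqlalchemy.sql import func

# changes even within a single transaction, unlike CURRENT_TIMESTAMP
created_at = Column(DateTime(timezone=True), server_default=func.clock_timestamp())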
Jeff Widman said in his answer that you need to create your own implementation of UTC timestamps for func.utcnow().
As I didn't want to implement it myself, I searched for and found a Python package which already does the job and is maintained by many people.
The package is spoqa/sqlalchemy-utc.
A summary of what the package does is:
Long story short, UtcDateTime does:
take only aware datetime.datetime,
return only aware datetime.datetime,
never take or return naive datetime.datetime,
ensure timestamps in database always to be encoded in UTC, and
work as you’d expect.
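A hypothetical usage sketch, assuming the package exposes UtcDateTime and utcnow as its README describes, and reusing the declarative Base from the question:
from sqlalchemy import Column, Integer
from sqlalchemy_utc import UtcDateTime, utcnow

class Event(Base):
    __tablename__ = 'event'

    id = Column(Integer, primary_key=True)
    # stored as UTC in the database, returned as an aware datetime
    created_at = Column(UtcDateTime(), nullable=False, default=utcnow())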
Note that for server_default=func.now() and onupdate=func.now() to work:
Local_modified = Column(DateTime, server_default=func.now(), onupdate=func.now())
you need to set DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP in your table DDL.
For example
create table test
(
id int auto_increment
primary key,
source varchar(50) null,
Local_modified datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
)
collate=utf8mb4_bin;
Otherwise, server_default=func.now(), onupdate=func.now() has no effect.
You can use TIMESTAMP with SQLAlchemy.
from sqlalchemy import TIMESTAMP, Table, MetaData, Column, ...
... ellipsis ...
def function_name(self) -> Table:
    return Table(
        "table_name",
        self._metadata,
        ...,
        Column("date_time", TIMESTAMP),
    )
... ellipsis ...