I'm working in the terminal in the interactive Python shell, following the SQLAlchemy tutorial on Declare a Mapping (http://docs.sqlalchemy.org/en/latest/orm/tutorial.html). I needed to type in:
>>> from sqlalchemy import Column, Integer, String
>>> class User(Base):
...     __tablename__ = 'users'
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     fullname = Column(String)
...     password = Column(String)
...     def __repr__(self):
...         return "<User(name='%s', fullname='%s', password='%s')>" % (
...             self.name, self.fullname, self.password)
The issue is that after I typed password = Column(String) I hit enter twice, and the ... prompt changed back to >>>. I then retyped everything, but an error was thrown because the class already exists. I'm not totally sure how to fix this. How do I reopen that class in the shell and edit it (i.e. add the def __repr__)?
The error thrown is below:
/Users/GaryPeters/TFsqlAlc001/lib/python2.7/site-packages/sqlalchemy/ext/declarative/clsregistry.py:160: SAWarning: This declarative base already contains a class with the same class name and module name as __main__.User, and will be replaced in the string-lookup table.
existing.add_item(cls)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/GaryPeters/TFsqlAlc001/lib/python2.7/site-packages/sqlalchemy/ext/declarative/api.py", line 53, in __init__
_as_declarative(cls, classname, cls.__dict__)
File "/Users/GaryPeters/TFsqlAlc001/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 251, in _as_declarative
**table_kw)
File "/Users/GaryPeters/TFsqlAlc001/lib/python2.7/site-packages/sqlalchemy/sql/schema.py", line 339, in __new__
"existing Table object." % key)
sqlalchemy.exc.InvalidRequestError: Table 'users' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object.
Just close and reopen the shell, and type everything in again, this time making sure to hit enter only once, not twice.
Alternatively, keep the indentation going even where you want a blank line -- if you hit enter and then indent back to the right level with tab or spaces, you can hit enter again without the shell ending your definition and showing >>> again.
You should also be able to redefine the class in the shell, so I'm not quite sure what you mean by "an error was thrown" -- it might be helpful if you were to edit your post to include the specific stack trace.
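If you would rather redefine the class in the same shell session instead of restarting, the error message itself points at one route. Here is a minimal sketch, assuming the same Base declarative base from earlier in the tutorial: adding __table_args__ = {'extend_existing': True} lets the new definition reuse the 'users' table already registered on Base.metadata (you may still see the SAWarning about the class registry, but the class should be created).
>>> from sqlalchemy import Column, Integer, String
>>> class User(Base):
...     __tablename__ = 'users'
...     # reuse the 'users' Table already present in Base.metadata
...     __table_args__ = {'extend_existing': True}
...     id = Column(Integer, primary_key=True)
...     name = Column(String)
...     fullname = Column(String)
...     password = Column(String)
...     def __repr__(self):
...         return "<User(name='%s', fullname='%s', password='%s')>" % (
...             self.name, self.fullname, self.password)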
When working with modular imports with FastAPI and SQLModel, I am getting the following error if I open /docs:
TypeError: issubclass() arg 1 must be a class
Python 3.10.6
pydantic 1.10.2
fastapi 0.85.2
sqlmodel 0.0.8
macOS 12.6
Here is a reproducible example.
user.py
from typing import List, TYPE_CHECKING, Optional
from sqlmodel import SQLModel, Field

if TYPE_CHECKING:
    from item import Item


class User(SQLModel):
    id: int = Field(default=None, primary_key=True)
    age: Optional[int]
    bought_items: List["Item"] = []
item.py
from sqlmodel import SQLModel, Field


class Item(SQLModel):
    id: int = Field(default=None, primary_key=True)
    price: float
    name: str
main.py
from fastapi import FastAPI
from user import User

app = FastAPI()


@app.get("/", response_model=User)
def main():
    return {"message": "working just fine"}
I followed along with the SQLModel tutorial: https://sqlmodel.tiangolo.com/tutorial/code-structure/#make-circular-imports-work.
If I put the models in the same file, it all works fine. Since my actual models are quite complex, though, I need to rely on modular imports.
Traceback:
Traceback (most recent call last):
File "/Users/felix/opt/anaconda3/envs/fastapi_test/lib/python3.10/site-packages/fastapi/utils.py", line 45, in get_model_definitions
m_schema, m_definitions, m_nested_models = model_process_schema(
File "pydantic/schema.py", line 580, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 621, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 254, in pydantic.schema.field_schema
File "pydantic/schema.py", line 461, in pydantic.schema.field_type_schema
File "pydantic/schema.py", line 847, in pydantic.schema.field_singleton_schema
File "pydantic/schema.py", line 698, in pydantic.schema.field_singleton_sub_fields_schema
File "pydantic/schema.py", line 526, in pydantic.schema.field_type_schema
File "pydantic/schema.py", line 921, in pydantic.schema.field_singleton_schema
File "/Users/felix/opt/anaconda3/envs/fastapi_test/lib/python3.10/abc.py", line 123, in __subclasscheck__
return _abc_subclasscheck(cls, subclass)
TypeError: issubclass() arg 1 must be a class
TL;DR
You need to call User.update_forward_refs(Item=Item) before the OpenAPI setup.
Explanation
So, this is actually quite a bit trickier, and I am not quite sure yet why this is not mentioned in the docs. Maybe I am missing something. Anyway...
If you follow the traceback, you'll see that the error occurs in line 921 of pydantic.schema, in the field_singleton_schema function, where a check is performed to see if issubclass(field_type, BaseModel), and at that point field_type is not in fact a class.
A bit of debugging reveals that this occurs when the schema for the User model is being generated and the bought_items field is being processed. At that point the annotation is processed, and the type argument for List is still a forward reference to Item, meaning it is not the actual Item class itself. And that is what is passed to issubclass and causes the error.
This is a fairly common problem when dealing with recursive or circular relationships between Pydantic models, which is why they were so kind to provide a special method just for that. It is explained in the Postponed annotations section of the documentation. The method is update_forward_refs and, as the name suggests, it is there to resolve forward references.
What is tricky in this case is that you need to provide it with an updated namespace to resolve the Item reference. To do that, you need to actually have the real Item class in scope, because that is what needs to be in that namespace. Where you do it does not really matter. You could, for example, import the User model into your item module and call it there (obviously below the definition of Item):
from sqlmodel import SQLModel, Field
from .user import User


class Item(SQLModel):
    id: int = Field(default=None, primary_key=True)
    price: float
    name: str


User.update_forward_refs(Item=Item)
But that call needs to happen before an attempt is made to set up that schema. Thus you'll at least need to import the item module in your main module:
from fastapi import FastAPI
from .user import User
from . import item

api = FastAPI()


@api.get("/", response_model=User)
def main():
    return {"message": "working just fine"}
At that point it is probably simpler to have a sub-package with just the model modules and import all of them in the __init__.py of that sub-package.
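As a rough sketch of what that can look like (the models package and module names here are hypothetical, and it assumes item.py still ends with the User.update_forward_refs(Item=Item) call from above), the sub-package's __init__.py just imports every model module, so any import from the package resolves the forward references as a side effect:
# models/__init__.py
from . import user  # defines User
from . import item  # defines Item and calls User.update_forward_refs(Item=Item)
main.py can then simply use from models.user import User without importing item explicitly, because importing anything from the models package runs __init__.py first.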
The reason I gave the example of putting the User.update_forward_refs call below your Item definition is that these situations typically occur when you actually have a circular relationship, i.e. if your Item class had a users field, for example, which was typed as list[User]. Then you'd have to import User there anyway and might as well just update the references there.
In your specific example, you don't actually have any circular dependencies, so there is strictly speaking no need for the TYPE_CHECKING escape. You can simply do from .item import Item inside user.py and put the actual class in your annotation as bought_items: list[Item]. But I assume you simplified the actual use case and simply forgot to include the circular dependency.
Maybe I am missing something and someone else here can find a way to call update_forward_refs without the need to provide Item explicitly, but this way should definitely work.
For anyone ending up here who (just like me) got the same error but couldn't resolve it using the solution above: my script looked like this. It seems that SQLModel relies on pydantic.BaseModel, so this solution also applies here.
from pydantic import BaseModel


class Model(BaseModel):
    values: list[int, ...]


class SubModel(Model):
    values = list[int, int, int]
It took me a long time to realize what my mistake was, but in SubModel I used = (assignment) whereas I should have used : (type hint).
The strangest thing was that it did work in a docker container (Linux) but not locally (Windows). Also, mypy did not pick up on this.
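For comparison, a minimal sketch of the corrected version (assuming the intent was simply a plain list of ints restated on the subclass); the only change is using an annotation instead of an assignment:
from pydantic import BaseModel


class Model(BaseModel):
    values: list[int]


class SubModel(Model):
    values: list[int]  # ':' declares a pydantic field; '=' creates a plain class attribute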
I am using short-lived SQLAlchemy sessions to add objects to an SQLite database. A few objects outlive the session in a read-only, detached state. Unfortunately, accessing attributes of a detached object throws an exception if the session has been closed. Here is a simplified code example:
from sqlalchemy import Column, String, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


engine = create_engine('sqlite:///foo.db', echo=False)
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

f = Foo(name='foo1')
print('state=transient : name=', f.name)
session.add(f)
print('state=pending : name=', f.name)
session.commit()
session.close()
print('state=detached : name=', f.name)
# output
state=transient : name= foo1
state=pending : name= foo1
Traceback (most recent call last):
File "scratch_46.py", line 23, in <module>
print('state=detached : name=', f.name)
File "lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 282, in __get__
return self.impl.get(instance_state(instance), dict_)
File "lib/python3.7/site-packages/sqlalchemy/orm/attributes.py", line 705, in get
value = state._load_expired(state, passive)
File "lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 660, in _load_expired
self.manager.deferred_scalar_loader(self, toload)
File "lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 913, in load_scalar_attributes
"attribute refresh operation cannot proceed" % (state_str(state))
sqlalchemy.orm.exc.DetachedInstanceError: Instance <Foo at 0x7f266c758ac8> is not bound to a Session; attribute refresh operation cannot proceed (Background on this error at: http://sqlalche.me/e/bhk3)
Oddly enough, the error is not thrown if I do either of these:
Read the name attribute between the commit and close calls while the object is in the persistent state
Expunge the object from the session and leave the session open. Still detached state, but I can access the attribute.
Should I be able to read attributes of a detached object? Is there a way to have objects safely outlive transient sessions? I read the suggested link in the output, but it talks mostly about parent/child relationships.
I created an issue on the SQLAlchemy repo because I thought this was a bug at first, but now I am not so certain.
Dug into this more and found that I was getting bitten by the default value of expire_on_commit being True on the session. When that is on, the commit call expires the object, which forces SQLAlchemy to reload it the next time an attribute is read. If that read is delayed until after the session is closed, then the attributes can't be fetched.
I don't really want this auto-expiration, since I know my objects are in a good state and are read-only after being saved. Setting expire_on_commit to False in the sessionmaker resolves the issue.
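For reference, a minimal sketch of that change against the example above; only the sessionmaker line differs:
# objects are no longer expired on commit, so their attributes stay loaded after close()
Session = sessionmaker(bind=engine, expire_on_commit=False)
session = Session()

f = Foo(name='foo1')
session.add(f)
session.commit()
session.close()
print('state=detached : name=', f.name)  # works, no DetachedInstanceError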
I created models for MySQL, but the foreign key constraints always return an error.
The model is
class AirPort(models.Model):
    code = models.CharField(max_length=3)
    city = models.CharField(max_length=100)

    def __str__(self):
        return f"{self.id} - CODE =>{self.code} :: CITY=> {self.city}"


class Flight(models.Model):
    orgin_id = models.ForeignKey(AirPort, on_delete=models.CASCADE, related_name="dep")
    dest_id = models.ForeignKey(AirPort, on_delete=models.CASCADE, related_name="arrival")
    duration = models.IntegerField()

    def __str__(self):
        return f"{self.id} - {self.orgin} TO {self.dest} will take {self.duration} minutes"
and the shell output is
a=Flight(orgin_id=1,dest_id=2,duration=120)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/kid/PycharmProjects/hardward/venv/lib/python3.6/site-packages/django/db/models/base.py", line 467, in __init__
_setattr(self, field.name, rel_obj)
File "/home/kid/PycharmProjects/hardward/venv/lib/python3.6/site-packages/django/db/models/fields/related_descriptors.py", line 210, in __set__
self.field.remote_field.model._meta.object_name,
ValueError: Cannot assign "1": "Flight.orgin_id" must be a "AirPort" instance.
Try:
a = Flight(orgin_id=AirPort.objects.get(id=1), dest_id=AirPort.objects.get(id=2), duration=120)
You may try this:
flight_result = Flight()
flight_result.orgin_id = AirPort.objects.first()
flight_result.dest_id = AirPort.objects.last()
flight_result.duration = 1000
flight_result.save()
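A hedged side note on an alternative that keeps the plain integer ids: because the ForeignKey fields are named orgin_id and dest_id, Django exposes the raw database columns as orgin_id_id and dest_id_id, so something like the following should also work, assuming airports with ids 1 and 2 exist:
# pass the raw primary keys via the *_id attribute names instead of AirPort instances
a = Flight(orgin_id_id=1, dest_id_id=2, duration=120)
a.save()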
Have you run python manage.py makemigrations and migrated the data with python manage.py migrate?
I received this error because I did not notice the comma at the end:
order.employee=Employee.objects.get(employee_id=x),
Its origin was that I had used Order.objects.create() before, which takes comma-separated keyword arguments, and I did not immediately delete the commas when switching to attribute assignments. May it help someone who has also sat too long in front of the computer :)
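To spell out why that comma matters (a hypothetical illustration reusing the Order/Employee names from above): with a trailing comma the right-hand side is a one-element tuple rather than an Employee instance, which is exactly what produces the "must be a ... instance" ValueError.
order.employee = Employee.objects.get(employee_id=x),  # assigns a tuple (Employee,) -> ValueError
order.employee = Employee.objects.get(employee_id=x)   # assigns the Employee instance -> OK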
In my Flask-SQLAlchemy based app, I have a model like this:
class Foo(object):
    start_date = db.Column(db.DateTime())
It works fine, but when I use the template engine to print the date onto the HTML page with {{ foo.start_date }}, it prints an ugly line like 2015-8-8 00:00:00. Trying to come up with my own format, I thought of something clever (at least before I hit the problem):
class Foo(object):
    start_date = db.Column(db.DateTime())

    def __getattr__(self, name):
        return "{0}-{1}-{2}".format(self.start_date.year....)
However, something internal to SQLAlchemy doesn't like this. When a new object is created, it raises the following exception:
File "<string>", line 6, in __init__
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/ext/declarative/base.py", line 526, in _declarative_constructor
setattr(self, k, kwargs[k])
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 226, in __set__
instance_dict(instance), value, None)
File "/home/wenliang/work/template/flask/local/lib/python2.7/site-packages/sqlalchemy/orm/attributes.py", line 694, in set
state._modified_event(dict_, self, old)
AttributeError: 'NoneType' object has no attribute '_modified_event'
Has anyone seen this before?
SQLAlchemy uses __getattr__ internally. If you overload it, the whole alchemy magic will not work anymore.
The field you are accessing is a DateTime object. If you want to format it into a string in any way other than the default rendering, you should use strftime().
https://stackoverflow.com/a/4830620/3929826 gives an example.
https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior has the documentation on the format string.
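A minimal sketch of that approach, assuming Foo is a Flask-SQLAlchemy db.Model as in the question; the property name formatted_start_date is made up for the example and leaves __getattr__ alone:
class Foo(db.Model):
    start_date = db.Column(db.DateTime())

    @property
    def formatted_start_date(self):
        # format the stored datetime explicitly instead of hooking attribute access
        return self.start_date.strftime("%Y-%m-%d")
In the template you would then write {{ foo.formatted_start_date }}.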
I'm trying to learn more about the .egg concept and overriding methods in Python. Here's the error message I'm receiving:
Traceback (most recent call last):
File "C:/local/work/scripts/plmr/plmr_db.py", line 42, in <module>
insp.reflecttable(reo_daily_table, column_list)
File "build\bdist.win32\egg\sqlalchemy\engine\reflection.py", line 370, in reflecttable
File "build\bdist.win32\egg\sqlalchemy\engine\reflection.py", line 223, in get_columns
File "build\bdist.win32\egg\sqlalchemy\engine\base.py", line 260, in get_columns
NotImplementedError
Here's the specific function from base.py:
def get_columns(self, connection, table_name, schema=None, **kw):
    """Return information about columns in `table_name`.

    Given a :class:`.Connection`, a string
    `table_name`, and an optional string `schema`, return column
    information as a list of dictionaries with these keys:

    name
      the column's name

    type
      [sqlalchemy.types#TypeEngine]

    nullable
      boolean

    default
      the column's default value

    autoincrement
      boolean

    sequence
      a dictionary of the form
          {'name' : str, 'start' :int, 'increment': int}

    Additional column attributes may be present.
    """

    raise NotImplementedError()
So my question is - do I override this function by writing a new method in my main module? Or am I missing a step somewhere along the way with my imports? Or am I just completely off track here?
Any and all help is appreciated :)
edit: adding my code
import sys
from sqlalchemy import create_engine, select, Table, MetaData
from sqlalchemy.engine import reflection

dbPath = 'connection_string'
engine = create_engine(dbPath, echo=True)
connection = engine.connect()

# reflect tables into memory
meta = MetaData()
reo_daily_table = Table('reo_daily', meta)
insp = reflection.Inspector.from_engine(engine)

column_list = [...]
insp.reflecttable(reo_daily_table, column_list)
connection.close()
EDIT:
The Sybase dialect currently lacks the ability to reflect tables.
You have misunderstood completely. You do not need to subclass anything and this problem has nothing to do with eggs and .ini files at all.
You are not supposed to instantiate Inspector this way. If you read the SQLAlchemy docs carefully, you will notice that you are not supposed to use the Inspector constructor directly; instead you should write:
insp = reflection.Inspector.from_engine(engine)