The docs say to put this somewhere:
from sqlalchemy import event
from colanderalchemy import setup_schema
event.listen(mapper, 'mapper_configured', setup_schema)
Where should this go in Pyramid? Should I be using Pyramid events instead of SQLAlchemy's?
When I tried putting it at the top of the models.py file, it complained about mapper not existing; should I still be using that?
You need to use the SQLAlchemy events, as they report what is happening inside SQLAlchemy itself (they do not relate to the Pyramid events at all).
The ColanderAlchemy documentation is confusing: what it calls a mapper here is your model class (it is not a mapper).
Thus in your models you should be doing something like:
from sqlalchemy import event
from colanderalchemy import setup_schema

class MyModelClass(Base):
    ...

event.listen(
    MyModelClass,
    "mapper_configured",
    setup_schema)
The test suite shows it working like this:
from sqlalchemy import event
from colanderalchemy import setup_schema
from sqlalchemy.orm import mapper
event.listen(mapper, 'mapper_configured', setup_schema)
Please let me know if that fixes it for you and I can go update the documentation accordingly.
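Putting the pieces together, here is a minimal models.py sketch (the model class and its columns are hypothetical; per the ColanderAlchemy docs, setup_schema attaches the generated Colander schema to the class as __colanderalchemy__):

from sqlalchemy import Column, Integer, String, event
from sqlalchemy.ext.declarative import declarative_base
from colanderalchemy import setup_schema

Base = declarative_base()

class MyModelClass(Base):
    __tablename__ = 'my_model'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# Listen on the class itself rather than on an undefined global `mapper` name.
event.listen(MyModelClass, 'mapper_configured', setup_schema)

# Once the mappers have been configured, the schema is available as
# MyModelClass.__colanderalchemy__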
I am trying to mock a sqlalchemy create_engine method called during import of a python script. But I am not able to properly do so. Here is my file structure:
File main.py:

from database_wrapper import update_database

def do_something():
    # do something and
    update_database()
File database_wrapper.py:

from sqlalchemy import create_engine

engine = create_engine("connection_uri")

def update_database() -> bool:
    con = engine.connect()
    # do something with the connection, and so on...
In my unit test I do something like:

from unittest import TestCase
from unittest.mock import patch

class TestMicroservice(TestCase):
    @patch('database_wrapper.create_engine')
    @patch('main.update_database')
    def test_1(self, update_database_mock, create_engine_mock):
        create_engine_mock.return_value = "Fake engine, babe"
        update_database_mock.return_value = True
        from main import do_something
        do_something()
I have also tried @patch('main.database_wrapper.create_engine'), @patch('main.database_wrapper.sqlalchemy.create_engine'), @patch('sqlalchemy.create_engine'), and many other variations, but with no luck. I have read a few similar posts but couldn't find this exact case.
I would rather avoid changing database_wrapper.py, hence I cannot move engine = create_engine("connection_uri"). Also because I am happy to create the engine when the program starts, without having to pass it around.
I found a solution. Thanks to MrBean Bremen for setting me on the right route. In fact, engine = create_engine("connection_uri") is already lazily initialized, as explained here. The only thing I needed to change was to pass a reasonable string as the connection URI, something like:
engine = create_engine("postgresql://user:pass@localhost:5432/db?sslmode=disable")
This way the engine is initialized, but because I never actually use it in the update_database() method, it never causes a problem.
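For reference, a minimal sketch of the resulting test (module and test names follow the snippets above; since the engine never actually connects, only update_database() needs to be patched):

# test_main.py
from unittest import TestCase
from unittest.mock import patch

class TestMicroservice(TestCase):
    @patch('main.update_database')
    def test_do_something(self, update_database_mock):
        update_database_mock.return_value = True
        from main import do_something
        do_something()
        update_database_mock.assert_called_once()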
I have a module models.py with some data logic:
db = PostgresqlDatabase(database='database', user='user')
# models logic
and a Flask app which actually interacts with the database:
from models import db, User, ...
But I want to move all of the initialization settings into one config file in the Flask app,
so that I can separate importing db from everything else (I need this to access the module-level variable db in models):
import models
from models import User, ...
app.config.from_object(os.environ['APP_SETTINGS'])
models.db = PostgresqlDatabase(database=app.config['db'],
                               user=app.config['db'])
and then use db as models.db from there on.
But this seems kind of ugly: duplicated imports and inconsistent ways of accessing the module's contents.
Is there any better way to handle this situation?
I'd recommend 1 level of indirection, so that your code becomes like this:

import const
import runtime

def foo():
    runtime.db.execute("SELECT ... LIMIT ?", (..., const.MAX_ROWS))
You get:
clear separation of leaf module const
mocking and/or reloading is possible
uniform and concise access in all user modules
To get a rich API on runtime, use the "replace module with an object at import" trick (see __getattr__ on a module).
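A minimal sketch of such a runtime module, assuming the peewee PostgresqlDatabase from the question (the init() helper and the config keys are hypothetical):

# runtime.py
from peewee import PostgresqlDatabase

db = None  # rebound once at startup; user modules access it as runtime.db

def init(config):
    # call once from the Flask app after its config is loaded
    global db
    db = PostgresqlDatabase(config['db_name'], user=config['db_user'])

Since user modules only ever write import runtime and then runtime.db, they always see the live instance, no matter when init() runs.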
In my app, I have a model folder / package.
In that model folder, I generally have a whole bunch of import statements (most of them from SQLAlchemy, such as String, Integer, Session, etc.).
I find myself writing the same import statements again and again for stuff like User, Blog, Article, etc., and have become pretty sloppy at it.
Understanding that explicit is better than implicit and then going ahead and disregarding that, is there a way to do this once?
This probably isn't what you want, but who knows:
common_imports.py:

from sqlalchemy import Column, String, ...

Then in other scripts, just do:

from common_imports import *
[edit]
Actually, I came up with a really awful and hacky way to do what you want. It might look something like this:
root_of_package/
    main.py
    models/
        __init__.py
        model1.py
        model2.py
then in models/__init__.py:

from sqlalchemy import String, Integer, ...
from glob import glob
from os.path import dirname

Base = ...

for fname in glob("%s/*.py" % dirname(__file__)):
    if fname.endswith("__init__.py"):
        continue  # don't re-exec this file
    with open(fname) as f:
        exec(f.read())  # exec, not eval: the model files contain statements

__all__ = locals()
then in models/model1.py:

# just use all the imports/locals from __init__.py as if they already existed
# this will be hellish if you use an IDE that checks for errors for you
class Model1(Base):
    id = Column(Integer, primary_key=True)
    ...
then in main.py
from models import Model1
I do this in one of my applications.
# approot/m.py
from models.model import String, Integer, Session  # (etc)
# Otherwise blank!
Then in everything else:
import m
Then you can do m.String, m.Integer, m.Session, etc. This also means I can further encapsulate my code, so that later, when I write up user models, I can just add to m.py:
from models.user import User, Group, PermissionSet
and continue to access m.User, m.Group and etc.
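For illustration, a hedged usage sketch of that pattern (the User model and its name field are assumptions):

import m

user = m.User(name="example")
session = m.Session()
session.add(user)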
I'm somewhat new to Python (but not at all to programming, already fluent in Perl, PHP, & Ruby).
I'm running into a major difference between all my previous languages and Python: namely, how to easily read in data from another file and make it available to an entire project.
There are two major examples of this that I would like to solve:
1) I would like to have a settings/config file (in YAML) that gets read in and parsed in a config.py file. I then want the resulting dict() to be accessible to all my other files in the project. I have figured out how to do this by from lib.project.config import cfg, but that means that each file importing the configs makes the system re-parse the YAML. That seems just silly to me. Is there no way to have that file parsed just once and then have the results accessible to any other file in my project?
2) I would like to import a database.py file that then looks at my configs to see if we need to import a sqlite3, mysql, or postgresql version of a database class. Again, I can manage this by putting the logic directly in each page that I need the database class for. But I hate having to paste code like
if cfg.get('db_type') == 'sqlite':
    from lib.project.databases.sqlite3 import database
elif cfg.get('db_type') == 'mysql':
    from lib.project.databases.mysql import database
at the top of each file that needs the database class. I'd much rather just add:
import lib.project.database
Any help would be very much appreciated.
I've done a good deal of Googling and SO searching and not found my answers. Hopefully one of you wizzes out there can help.
Thanks.
UPDATE:
The reason I'm doing things this way (for #2) is that I'm also trying to make other classes inherit from the database class.
So lib/project/databases/sqlite.py is a definition of a database class.
And so is lib/project/databases/mysql.py.
The idea being that after this import is complete I can import classes like the users class and define it like so:
class user(database):
    ...
And thus inherit all of the structure and methods of the database class.
Your suggestion to simply create an instance based on the sqlite/mysql decision and then pass it where it's needed is a good solution for that. But I need a bit more...
Ideas?
Thank you all for your help in understanding more about Python and how to get what I'm looking for done.
Here is what I ended up doing:
1) The config file issue:
This apparently was mostly solved to begin with. I confirmed what everyone was saying: you can "import" a file as many times as you like, but it is only ever parsed/processed/compiled once. As for making it reachable from any file that needs it:
Config class:
import os
import yaml

class Config:
    def __init__(self):
        self.path = os.getcwd()
        stream = open(self.path + "/conf/config.yaml", 'r')
        data = yaml.safe_load(stream)  # safe_load avoids executing arbitrary YAML tags
        config_keys = data.keys()
        for k in config_keys:
            setattr(self, k, data.get(k))
        if os.path.isfile(self.path + "/conf/config-override.yaml"):
            stream = open(self.path + "/conf/config-override.yaml", 'r')
            data = yaml.safe_load(stream)
            config_keys = data.keys()
            for k in config_keys:
                setattr(self, k, data.get(k))

config = Config()
And then any file that wants to use it:
from lib.project.Config import config
This is all working swimmingly so far.
2) Dynamic database type for Database class:
I just altered my overall design a tiny bit: I made a (mostly empty) Database class inherit from either the Sqlite or the Mysql class (both custom wrappers around the existing sqlite3 and mysql-connector libraries). This way there is always a solid Database class to inherit from, I only load the files I need, and it is all driven by my config file. Example:
Database class:
from lib.project.Config import config

if config.db_type == 'sqlite':
    from lib.project.databases.Sqlite import Sqlite
elif config.db_type == 'mysql':
    from lib.project.databases.Mysql import Mysql

class Database(Sqlite if config.db_type == 'sqlite' else Mysql):
    ''' documentation '''
I'd still love to hear people's feedback on this code/method.
As I said, I'm still newish to Python and could still be missing something.
Thanks again everyone.
1) You're all set: the file will only be read (and the module body only executed) once.
2) Agreed, you don't want to copy and paste code like that. If you want to use one instance of the database throughout your project, you'd do something like:
import lib.project
db = lib.project.database
where db is just a local variable used to access your already-created database. You would instantiate your database as database in lib/project/databases.py (resolving whether to use sqlite3 or mysql with the if/elif code, as you've done), and then in your lib/project/__init__.py file you would do from .databases import database.
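A minimal sketch of that arrangement (module paths follow this answer; the wrapper class names and their locations are assumptions carried over from the question):

# lib/project/databases.py
from lib.project.Config import config

if config.db_type == 'sqlite':
    from lib.project.sqlite_wrapper import Sqlite as _Db  # hypothetical module paths
else:
    from lib.project.mysql_wrapper import Mysql as _Db

database = _Db()  # the single shared instance for the whole project

# lib/project/__init__.py
from .databases import database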
If you want to use multiple databases (of the same sqlite3/mysql type) throughout your project, you would resolve which class Database is bound to (or inherits from) in databases.py:
from lib.project.Config import config

# import the class, not the module, so Database can inherit from it
if config.db_type == 'sqlite':
    from lib.project.databases.Sqlite import Sqlite as Db
elif config.db_type == 'mysql':
    from lib.project.databases.Mysql import Mysql as Db

class Database(Db):
    '''docs'''
I'm creating a website based on Django (I know it's pure Python, so maybe this can also be answered by people who know Python well), and I need to call some methods dynamically.
For example, I have a few applications (modules) in my website, each with a method do_search() in its views.py. Then I have one module called, for example, "search", and there I want an action which can call all the existing do_search() methods in the other applications. Of course I don't want to import each application and call its method directly; I need some better way to do it dynamically.
Can I read the INSTALLED_APPS variable from settings and somehow run through all of the installed apps, looking for the specific method? A piece of code would help a lot here :)
Thanks in advance!
Ignas
I'm not sure if I truly understand the question, but please clarify in a comment to my answer if I'm off.
# search.py
searchables = []

def search(search_string):
    return [s.do_search(search_string) for s in searchables]

def register_search_engine(searchable):
    if hasattr(searchable, 'do_search'):
        # you want to see if this is callable also
        searchables.append(searchable)
    else:
        # raise some error perhaps
        pass
# views.py
def do_search(search_string):
    # search somehow, and return the result
    ...

# models.py
# You need to ensure this code runs before any attempt at searching can begin,
# e.g. in models.py if this app is in INSTALLED_APPS, because this module
# may not have been imported before the call to search.
import search
import views

# register the views module itself, since it is what carries the do_search attribute
search.register_search_engine(views)
As for where to register your search engine, there is some helpful documentation in the signals docs for Django which relates to this:
You can put signal handling and registration code anywhere you like. However, you'll need to make sure that the module it's in gets imported early on so that the signal handling gets registered before any signals need to be sent. This makes your app's models.py a good place to put registration of signal handlers.
So your models.py file should be a good place to register your search engine.
Alternative answer that I just thought of:
In your settings.py, you can have a setting that declares all your search functions. Like so:
# settings.py
SEARCH_ENGINES = ('app1.views.do_search', 'app2.views.do_search')
# search.py
from django.conf import settings
import importlib  # stdlib importlib; django.utils.importlib is a deprecated copy

def search(search_string):
    search_results = []
    for engine in settings.SEARCH_ENGINES:
        i = engine.rfind('.')
        module, attr = engine[:i], engine[i+1:]
        mod = importlib.import_module(module)
        do_search = getattr(mod, attr)
        search_results.append(do_search(search_string))
    return search_results
This works somewhat like the way MIDDLEWARE_CLASSES and TEMPLATE_CONTEXT_PROCESSORS are registered. The above is all untested code, but if you look around the Django source, you should be able to flesh this out and remove any errors.
If you can import the other applications through
import other_app
then it should be possible to perform
method = getattr(other_app, 'do_' + method_name)
result = method()
However your approach is questionable.
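For completeness, here is a hedged sketch of the INSTALLED_APPS iteration the question asked about (it assumes each participating app exposes a module-level do_search() in its views module; apps without one are simply skipped):

import importlib

from django.conf import settings

def search_all(search_string):
    results = []
    for app in settings.INSTALLED_APPS:
        try:
            views = importlib.import_module(app + '.views')
        except ImportError:
            continue  # app has no views module
        do_search = getattr(views, 'do_search', None)
        if callable(do_search):
            results.append(do_search(search_string))
    return results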