Getting settings and config from an INI file for Pyramid functional testing - python

In a real Pyramid app this does not work as the docs (http://docs.pylonsproject.org/projects/pyramid//en/latest/narr/testing.html) suggest:
class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from myapp import main
        app = main({})
Exception:
Traceback (most recent call last):
  File "C:\projects\myapp\tests\model\task_dispatcher_integration_test.py", line 35, in setUp
    app = main({})
  File "C:\projects\myapp\myapp\__init__.py", line 207, in main
    engine = engine_from_config(settings, 'sqlalchemy.')
  File "C:\projects\myapp\ve\lib\site-packages\sqlalchemy\engine\__init__.py", line 407, in engine_from_config
    url = options.pop('url')
KeyError: 'url'
The reason is trivial: an empty dictionary is passed to main, whereas when the real app runs (from __init__.py) it gets settings pre-filled with values from the [app:main] section of development.ini / production.ini:
settings {'ldap_port': '4032', 'sqlalchemy.url': 'postgresql://.....}
Is there some way of reconstructing settings easily from an .ini file for functional testing?

pyramid.paster.get_appsettings is the only thing you need:
from pyramid.paster import get_appsettings
settings = get_appsettings('test.ini', name='main')
app = main(settings)
That test.ini can include all the settings of another .ini file easily like this:
[app:main]
use = config:development.ini#main
and then you only need to override the keys that change (I guess you'd rather test against a separate DB):
[app:main]
use = config:development.ini#main
sqlalchemy.url = postgresql://....

In case anyone else doesn't get Antti Haapala's answer right away:
Create a test.ini filled with:
[app:main]
use = config:development.ini#main
(Actually this step is not necessary. You could also keep your development.ini and use it instead of test.ini in the following code. A separate test.ini might however be useful if you want separate settings for testing)
In your tests.py add:
from pyramid.paster import get_appsettings
settings = get_appsettings('test.ini', name='main')
and replace
app = TestApp(main({}))
with
app = TestApp(main(global_config=None, **settings))
Relevant for this answer was the following comment: Pyramid fails to start when webtest and sqlalchemy are used together
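Putting those steps together, a minimal tests.py might look like the sketch below. The route and assertion in test_root are placeholders for illustration, not taken from the question:

import unittest

from pyramid.paster import get_appsettings
from webtest import TestApp

from myapp import main


class FunctionalTests(unittest.TestCase):
    def setUp(self):
        settings = get_appsettings('test.ini', name='main')
        self.testapp = TestApp(main(global_config=None, **settings))

    def test_root(self):
        # Adjust the URL and the expected content to match your application.
        res = self.testapp.get('/', status=200)
        self.assertIn(b'<html', res.body)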

Actually, you don't need to import get_appsettings; just pass the parameters like this:
class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from myapp import main
        settings = {'sqlalchemy.url': 'sqlite://'}
        app = main({}, **settings)
Here is the source: functional test; it is in the second code block, line 31.

Yes there is, though how easy it is is open to debate.
I am using the following py.test fixture to pass an --ini option to the tests. However, this approach is limited to the py.test runner, as other test runners do not have this flexibility.
My test.ini also has special settings, like disabling outgoing mail and instead printing it to the terminal and to a test-accessible backlog.
import os

import pytest
from pyramid.paster import get_appsettings, setup_logging


@pytest.fixture(scope='session')
def ini_settings(request):
    """Load INI settings for test run from py.test command line.

    Example:

        py.test yourpackage -s --ini=test.ini

    :return: A dictionary representing the key/value pairs in an ``app``
        section within the file represented by ``config_uri``
    """
    if not getattr(request.config.option, "ini", None):
        raise RuntimeError("You need to give --ini test.ini command line option to py.test to find our test settings")

    # Unrelated, but if you need to poke standard Python ConfigParser do it here
    # from websauna.utils.configincluder import monkey_patch_paster_config_parser
    # monkey_patch_paster_config_parser()

    config_uri = os.path.abspath(request.config.option.ini)
    setup_logging(config_uri)
    config = get_appsettings(config_uri)

    # To pass the config filename itself forward
    config["_ini_file"] = config_uri

    return config
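The --ini option itself is not built into py.test; it has to be registered in a conftest.py hook. A minimal sketch (the option name simply mirrors what the fixture above reads from request.config.option.ini):

# conftest.py
def pytest_addoption(parser):
    parser.addoption("--ini", action="store", metavar="INI_FILE",
                     help="Path to the INI file used to configure the test run")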
Then I can set up the app (note that pyramid.paster.bootstrap parses the config file again):
@pytest.fixture(scope='session')
def app(request, ini_settings, **settings_overrides):
    """Initialize WSGI application from INI file given on the command line.

    TODO: This can be run only once per testing session, as SQLAlchemy does some stupid shit on import, leaks globals and if you run it again it doesn't work. E.g. trying to manually call ``app()`` twice::

        Class <class 'websauna.referral.models.ReferralProgram'> already has been instrumented declaratively

    :param settings_overrides: Override specific settings for the test case.

    :return: WSGI application instance as created by ``Initializer.make_wsgi_app()``.
    """
    if not getattr(request.config.option, "ini", None):
        raise RuntimeError("You need to give --ini test.ini command line option to py.test to find our test settings")

    data = bootstrap(ini_settings["_ini_file"])
    return data["app"]
Furthermore, setting up a functional test server:
import threading
import time
from wsgiref.simple_server import make_server
from urllib.parse import urlparse

from pyramid.paster import bootstrap
import pytest
from webtest import TestApp
from backports import typing

#: The URL where WSGI server is run from where Selenium browser loads the pages
HOST_BASE = "http://localhost:8521"


class ServerThread(threading.Thread):
    """Run WSGI server on a background thread.

    Pass in WSGI app object and serve pages from it for Selenium browser.
    """

    def __init__(self, app, hostbase=HOST_BASE):
        threading.Thread.__init__(self)
        self.app = app
        self.srv = None
        self.daemon = True
        self.hostbase = hostbase

    def run(self):
        """Open WSGI server to listen to HOST_BASE address."""
        parts = urlparse(self.hostbase)
        domain, port = parts.netloc.split(":")
        self.srv = make_server(domain, int(port), self.app)
        try:
            self.srv.serve_forever()
        except Exception as e:
            # We are a background thread so we have problems to interrupt tests in the case of error
            import traceback
            traceback.print_exc()
            # Failed to start
            self.srv = None

    def quit(self):
        """Stop test webserver."""
        if self.srv:
            self.srv.shutdown()


@pytest.fixture(scope='session')
def web_server(request, app) -> str:
    """py.test fixture to create a WSGI web server for functional tests.

    :param app: py.test fixture for constructing a WSGI application

    :return: localhost URL where the web server is running.
    """
    server = ServerThread(app)
    server.start()

    # Wait a short time to allow SocketServer to initialize itself.
    # TODO: Replace this with a proper event telling the server is up.
    time.sleep(0.1)
    assert server.srv is not None, "Could not start the test web server"

    host_base = HOST_BASE

    def teardown():
        server.quit()

    request.addfinalizer(teardown)

    return host_base
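A functional test would then request the web_server fixture and drive a browser against the returned URL. As a bare-bones sketch, fetching the front page over HTTP instead of starting Selenium:

from urllib.request import urlopen


def test_server_is_up(web_server):
    with urlopen(web_server + "/") as response:
        assert response.status == 200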

Related

Unit testing Flask app running under uwsgi

I’m relatively new to python and am looking for a pythonic way to handle this.
I’ve inherited a fairly trivial Python 2.7 Flask app that runs under uwsgi that I want to add some unit tests to. It does some initialization at indentation level 0 that is required when it’s running in uwsgi but needs to be skipped when under test.
I’m given to understand that often python apps use the
if __name__ == '__main__':
pattern to isolate code that should run when the script is run on its own and should not run when it's imported. In this case, however, both when the script is run under uwsgi and when the script is imported into the unit tests, __name__ is the same (the name of the script), so I can't use that to differentiate between the uwsgi and unit-testing environments.
This sample code illustrates what I'm working with.
In the Flask application (flask_app.py):
import logging
import bcrypt
from flask import Flask, jsonify, abort, make_response, request
from ConfigParser import SafeConfigParser

# some initialization that only makes sense when running from the uwsgi context on the server
PARSER = SafeConfigParser()
PARSER.read('config.ini')
LOG_FILE = PARSER.get('General', 'logfile')

APP = Flask(__name__)


@APP.route('/', methods=['GET'])
def index():
    ...


@APP.route('/<p1>/<p2>', methods=['PUT'])
def put(p1, p2):
    ...


if __name__ == '__main__':
    APP.run(debug=True, host='0.0.0.0')
In the unit tests (tests.py):
import os
import unittest

from flask import json
from flask_app import APP


class FlaskAppTestCase(unittest.TestCase):
    def setUp(self):
        self.APP = APP.test_client()

    def test_GET(self):
        resp = self.APP.get('/')
        assert 'Some Html' in resp.data

    def test_PUT(self):
        resp = self.APP.put('/1/2')
        assert 'Got 1, 2' in resp.data


if __name__ == '__main__':
    unittest.main()
What I was thinking of doing was to move the initialization so that it only runs when flask_app is executed by uwsgi and not when it is run via tests.py, perhaps by checking __name__ and choosing which path to execute based on that. But when I examine the output of print(__name__), both when running flask_app under uwsgi and when executing tests.py, the output is "flask_app", so I can't use that as a discriminator.
Is there an idiomatic way in python to handle this?
As it turns out, the Python module for uWSGI actually offers a mechanism to determine whether the app is being run under uWSGI. The uwsgi module is available for import only in a uWSGI context, so I ended up checking whether I could import that module and executing the initialization code only if I could.
# detect if we're running under uWSGI; only init if we are, not if we're testing
try:
    import uwsgi
    IN_UWSGI = True
except ImportError:
    IN_UWSGI = False
Then wrap the init code with
if IN_UWSGI:
This seems much more reliable than checking the module name of the module that's doing the import, which was the only other thing I could think of to do.
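Applied to the flask_app.py from the question, the guarded initialization might look like this sketch (the config keys are the ones from the question):

from ConfigParser import SafeConfigParser

# detect if we're running under uWSGI; only init if we are, not if we're testing
try:
    import uwsgi
    IN_UWSGI = True
except ImportError:
    IN_UWSGI = False

if IN_UWSGI:
    # initialization that only makes sense when running under uwsgi on the server
    PARSER = SafeConfigParser()
    PARSER.read('config.ini')
    LOG_FILE = PARSER.get('General', 'logfile')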

Flask doesn't work with debug mode

I am working on my little package, and I came across a weird werkzeug problem.
Problem:
When I try to run my application in debug mode (I start it with the command python3 -m my_app --start), I get the error ModuleNotFoundError: No module named '__main__.helpers'; '__main__' is not a package.
The files are very simple.
app.py
from flask import Flask


class Application:
    """Main class"""

    def __init__(self):
        # ApplicationReader comes from elsewhere in the package (not shown in the question)
        self.reader = ApplicationReader()
        self.app = Flask(self.reader.name)

    def __start_server(self):
        """Custom method, for starting flask application"""
        self.app.run(debug=True)

    def run(self):
        """Run the server"""
        return self.__start_server()


# Instance of Application class
application = Application()


def start_app():
    """Start the mock server"""
    return application.run()
__main__.py
def cli(run, init, start):
    if run:
        print('Your API is running...')
    if init:
        create_init_file()
    if start:
        start_app()


if __name__ == '__main__':
    cli()
And it seems like the bug can't be fixed; see the GitHub issue.
So, maybe someone has a solution for this case? Thanks.
I created a separate file for the start_app function, because I found that in debug mode the import could not be resolved.
start_app.py (next to app.py)
import sys, os
sys.path.append(os.path.join(os.path.dirname(__file__)))
print("sys.path", sys.path) # use this to check if the last entry has the right folder
from app import start_app
start_app()
Now, you can run it with python -m start_app in debug mode.

Flask app.config during unit testing

What is the best way to handle unit tests that rely on calling code that in turn relies on the current app's configuration? E.g.:
code.py
from flask import current_app


def some_method():
    app = current_app._get_current_object()
    value = (app.config['APP_STATIC_VAR']) * 10
    return value
test_code.py
class TestCode(unittest.TestCase):
    def test_some_method(self):
        app = create_app('app.settings.TestConfig')
        value = some_method()
        self.assertEqual(10, value)
Running the test above, I get a 'RuntimeError: working outside of application context' error when the app = create_app('app.settings.TestConfig') line is executed.
Calling app = create_app during the test doesn't do the trick. What is the best way to unit test in this case, where I need the config to be read in by the application?
You are not accessing the app within an app context when you call some_method(). To fix it, replace your call with:
with app.app_context():
    value = some_method()
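Putting it together, the test might look like this sketch; the import locations for create_app and some_method are assumptions, since the question only shows the file names:

import unittest

from app import create_app    # assumed location of the application factory
from code import some_method  # code.py from the question


class TestCode(unittest.TestCase):
    def test_some_method(self):
        app = create_app('app.settings.TestConfig')
        with app.app_context():
            value = some_method()
        self.assertEqual(10, value)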

How to use correctly importlib in a flask controller?

I am trying to load a module according to some settings. I have found a working solution, but I need confirmation from an advanced Python developer that this solution is the best performance-wise, as the API endpoint that will use it will be under heavy load.
The idea is to change the behaviour of an endpoint based on parameters from the user and other system configuration. I am loading the correct handler class based on these settings. The goal is to be able to easily create new handlers without having to modify the code calling them.
This is a working example:
./run.py:
from flask import Flask, abort
import importlib

import handlers

app = Flask(__name__)


@app.route('/')
def api_endpoint():
    try:
        endpoint = "simple"  # Custom logic to choose the right handler
        handlerClass = getattr(importlib.import_module('.' + str(endpoint), 'handlers'), 'Handler')
        handler = handlerClass()
    except Exception as e:
        print(e)
        abort(404)

    print(handlerClass, handler, handler.value, handler.name())

    # Handler processing. Not yet implemented
    return "Hello World"


if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080, debug=True)
One "simple" handler example. A handler is a module which needs to define an Handler class :
./handlers/simple.py :
import os


class Handler:
    def __init__(self):
        self.value = os.urandom(5)

    def name(self):
        return "simple"
If I understand correctly, the import is done on each query to the endpoint. That means filesystem I/O to look up the modules, ...
Is this the correct/"pythonic" way to implement this strategy?
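For reference, a common variant of this pattern is to resolve the handler classes once at import time and keep them in a dict, so each request is just a dictionary lookup. A sketch (the list of handler names is an assumption; only "simple" appears in the question):

import importlib

HANDLER_NAMES = ['simple']

# Resolve each handler module once at startup instead of on every request
HANDLERS = {
    name: getattr(importlib.import_module('.' + name, 'handlers'), 'Handler')
    for name in HANDLER_NAMES
}

# In the endpoint, instead of importing per request:
#     handler = HANDLERS[endpoint]()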
Question moved to codereview. Thanks all for your help: https://codereview.stackexchange.com/questions/96533/extension-pattern-in-a-flask-controller-using-importlib
I am closing this thread.

Flask, WSGI, and Global Variables to Hold Database Pool

I'm using WSGI/Apache2 and am trying to declare my database pool on init, to be accessible via a global var from my endpoints. I'm using Redis and Cassandra (DSE, specifically). It's my understanding that both the Redis and DSE libs offer pool management so this shouldn't be an issue.
My folder structure for my WSGI app looks something like this:
folder/
    tp.wsgi
    app/
        __init__.py
        decorators/
            cooldec.py
        mod_api/
            controllers.py
tp.wsgi looks like the following
#! /usr/bin/env python2.7
import sys
import logging

logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, "/opt/tp")

from app import app


def application(environ, start_response):
    return app(environ, start_response)
__init__.py looks like the following
#! /usr/bin/env python2.7
from flask import Flask
from cassandra.cluster import Cluster

# Import our handlers
from app.mod_api.files import mod_files


# Setup routine
def setup():
    # Instantiate Flask
    app = Flask('app')

    # Set up a connection to Cassandra
    cassandraSession = Cluster(['an ip address', 'an ip address']).connect('keyspace')
    cassandraSession.default_timeout = None

    # Register our blueprints
    app.register_blueprint(mod_files)
    ...

    return app, cassandraSession


app, cassandraSession = setup()
I'm calling a decorator defined in cooldec.py that handles authentication (I use that term loosely, for a reason. I ask that we not go down the path of using Flask extensions for authentication; that's out of scope for this question and isn't applicable in my use case [see: loose usage of the term 'authentication']).
In cooldec.py and controllers.py I'm trying to access the cassandraSession global, but I keep getting "global name 'cassandraSession' is not defined". I know what the error means, but I'm not sure why I'm seeing it. It's my understanding that the way I've set up my WSGI app allows cassandraSession to be accessible within the scope of the app, no?
I found Preserving state in mod_wsgi Flask application, but it hasn't really shed any light on what I'm doing wrong.
My issue was the location of my imports. I made a few changes to tp.wsgi and __init__.py and got what I need working; that is, calling from app import cassandraSession from within cooldec.py and controllers.py.
Below is how I've set up the aforementioned.
tp.wsgi
#! /usr/bin/env python2.7
import sys
import logging
logging.basicConfig(stream=sys.stderr)
sys.path.insert(0, "/opt/tp")
from app import app as application
__init__.py
#! /usr/bin/env python2.7
# API Module
from flask import Flask, jsonify
from cassandra.cluster import Cluster

# Create our API
app = Flask('app')

# Define a cassandra cluster/session we can use
cassandraSession = Cluster(['an ip address', 'an ip address']).connect('keyspace')
cassandraSession.default_timeout = None

# ... Register blueprints
These are overly simplified edits, but they give the idea of what I was doing wrong (e.g. declaring in the wrong file and trying to import improperly).
In both cooldec.py and controllers.py we can now do:
from app import cassandraSession
rows = cassandraSession.execute('select * from table')
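For completeness, cooldec.py can now build its decorator on top of the shared session. The sketch below is illustrative only; the token table and header name are assumptions, not the original authentication logic:

from functools import wraps

from flask import abort, request

from app import cassandraSession


def require_token(view):
    """Reject the request unless it carries a token known to Cassandra."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        token = request.headers.get('X-Auth-Token')
        if token is None:
            abort(401)
        rows = cassandraSession.execute(
            'SELECT user_id FROM tokens WHERE token = %s', [token])
        if not rows:
            abort(401)
        return view(*args, **kwargs)
    return wrapped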
Tip for new WSGI developers: Continue to think "in python".
+ WARNING +
I have yet to find an absolute answer on whether or not this is safe to do. Doing this using sqlalchemy is perfectly OK due to how sqlalchemy handles connection pooling. I am, as of yet, unaware if this is safe to do with Cassandra/DSE, so proceed with caution if you utilize this post.
