First of all, I am aware of the flask-testing library with its LiveServerTestCase class, but it hasn't been updated since 2017, GitHub is full of issues about it not working on Windows or macOS, and I haven't found any other solutions.
I am trying to write some tests for a Flask app using Selenium to validate the FlaskForms inside this app.
A simple test like this:
def test_start(app):
    driver.get("http://127.0.0.1:5000/endpoint")
    authenticate(driver)
fails with selenium.common.exceptions.WebDriverException: Message: unknown error: net::ERR_CONNECTION_REFUSED. (As far as I understand, in my case the app is created in a @pytest.fixture and immediately shuts down, so I need to find a way to keep it running for the whole test duration.)
My question is: is it possible to create some live server in each test that keeps running, so I can call API endpoints via Selenium?
The fixtures, in case it helps:
@pytest.fixture
def app():
    app = create_app()
    ...
    with app.app_context():
        # creating db
        ...
        yield app
also:
@pytest.fixture
def client(app):
    """Test client"""
    return app.test_client()
Finally got it all working. My conftest.py
import multiprocessing

import pytest

from app import create_app


@pytest.fixture(scope="session")
def app():
    app = create_app()
    multiprocessing.set_start_method("fork")
    return app


@pytest.fixture
def client(app):
    return app.test_client()
Important note: with Python < 3.8 the multiprocessing.set_start_method("fork") line is not necessary (as far as I understood, in 3.8 the multiprocessing module changed its default start method on macOS to spawn, so from that version on, without this line, you get a pickling error there; "fork" is not available on Windows at all).
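A minimal sketch of how that call could be guarded (the platform check and the force flag are my additions and not part of the conftest.py above; pytest-flask's live_server runs the app in a separate process, which is what triggers the pickling):

import multiprocessing
import sys

import pytest

from app import create_app


@pytest.fixture(scope="session")
def app():
    app = create_app()
    # Python 3.8 switched the default start method on macOS to "spawn";
    # spawning pickles the app and fails, so force "fork" where it exists.
    if sys.platform != "win32":
        multiprocessing.set_start_method("fork", force=True)
    return app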
And one simple test looks like this:
from urllib.request import urlopen

from flask import url_for


def test_add_endpoint_to_live_server(live_server):
    @live_server.app.route('/tests-endpoint')
    def test_endpoint():
        return 'got it', 200

    live_server.start()

    res = urlopen(url_for('.te', _external=True))  # ".te" is the endpoint I am calling
    assert url_for('.te', _external=True) == "some url"
    assert res.code == 200
    assert b'got it' in res.read()
Also, I am using url_for. The point is that the live server starts on a random port every time, and url_for generates the URL with the correct port internally. So now the live server is running and it is possible to implement Selenium tests.
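For example, once the live server is up, a Selenium test could look roughly like this (a sketch; the driver fixture and the "auth.login" endpoint name are assumptions on my part, not part of my app):

from flask import url_for


def test_login_page(live_server, driver):  # "driver" is an assumed Selenium WebDriver fixture
    live_server.start()  # not needed if the live server is started automatically
    # url_for picks up the random port the live server was started on
    driver.get(url_for("auth.login", _external=True))  # hypothetical endpoint name
    assert "Login" in driver.page_source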
I'm using an application factory pattern, and when I tried to run my test, I got "Attempted to generate a URL without the application context being pushed". I created a fixture to create the application:
@pytest.fixture
def app():
    yield create_app()
but when I run my test
def test_get_activation_link(self, app):
    user = User()
    user.set_password(self.VALID_PASS)
    generated_link = user.get_activation_link()
I get the above error (from the line of code url = url_for("auth.activate")). I'm also trying to figure out how to have the app creation run for every test without having to import it into every test, but I can't seem to find out whether that's possible.
This works for my app
import pytest

from xxx import create_app


@pytest.fixture
def client():
    app = create_app()
    app.config['TESTING'] = True

    with app.app_context():
        with app.test_client() as client:
            yield client


def smoke_test_homepage(client):
    """basic tests to make sure test setup works"""
    rv = client.get("/")
    assert b"Login" in rv.data
So, you missed the application context.
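If you only need url_for outside of a request (as in the activation-link test above), a sketch of a fixture that pushes the application context could look like this; the SERVER_NAME value is an assumption on my part, but url_for with no active request needs it to build the URL:

import pytest

from xxx import create_app  # assuming the same factory module as above


@pytest.fixture
def app():
    app = create_app()
    # needed for url_for() outside of a request
    app.config['SERVER_NAME'] = 'localhost.test'  # hypothetical value
    with app.app_context():
        yield app

With the context pushed by this fixture, user.get_activation_link() can call url_for("auth.activate") without hitting that error.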
At this year's Flaskcon there was an excellent talk about the Flask context - I highly recommend this video.
https://www.youtube.com/watch?v=fq8y-9UHjyk
I am trying to test a Flask web app within a docker container, which is new for me. My stack is the following:
firefox
selenium
pytest-selenium
pytest-flask
Here is my Flask app file:
from flask import Flask, render_template


def create_app():
    app = Flask(__name__)
    return app

app = create_app()


@app.route('/')
def index():
    return render_template('index.html')
Now, my test file which verifies the title of my index page:
import pytest

from app import create_app


# from https://github.com/pytest-dev/pytest-selenium/issues/135
@pytest.fixture
def firefox_options(request, firefox_options):
    firefox_options.add_argument('--headless')
    return firefox_options


# from https://pytest-flask.readthedocs.io/en/latest/tutorial.html#step-2-configure
@pytest.fixture
def app():
    app = create_app()
    return app


# from https://pytest-flask.readthedocs.io/en/latest/features.html#start-live-server-start-live-server-automatically-default
@pytest.mark.usefixtures('live_server')
class TestLiveServer:

    def test_homepage(self, selenium):
        selenium.get('http://0.0.0.0:5000')
        h1 = selenium.find_element_by_tag_name('h1')
        assert h1.text == 'title'
When I run my tests with:
pytest --driver Firefox --driver-path /usr/local/bin/firefox test_app.py
I get the following error (which seems to be due to Firefox not running in headless mode):
selenium.common.exceptions.WebDriverException: Message: Service /usr/local/bin/firefox unexpectedly exited. Status code was: 1
Error: no DISPLAY environment variable specified
I am able to run firefox --headless but it seems my pytest fixture didn't manage to do the setup. Is there a better way to do this?
Now, if I replace selenium.get() with urlopen, just to check the initialization of the app and its connection:
def test_homepage(self):
    res = urlopen('http://0.0.0.0:5000')
    assert b'OK' in res.read()
    assert res.code == 200
I get the error:
urllib.error.URLError:
Do I need to boot the live server differently? Or should I change my host + port config somewhere?
Regarding the problem with a direct call with urllib:
pytest-flask's live server uses a random port by default. You can add this parameter to the pytest invocation:
--live-server-port 5000
Or, without this parameter, you can make direct calls to the live server like this:
import pytest
import requests
from flask import url_for


@pytest.mark.usefixtures('live_server')
def test_something():
    r = requests.get(url_for('index', _external=True))
    assert r.status_code == 200
I suppose you have a view function called index; url_for would add the correct port number automatically.
But this doesn't have anything to do with Docker. How do you run it?
Regarding the problem with Selenium itself: I can imagine a Docker-networking-related problem. How do you use it? Do you have e.g. a docker-compose configuration? Can you share it?
The referenced pytest-selenium issue has:
@pytest.fixture
def firefox_options(firefox_options, pytestconfig):
    if pytestconfig.getoption('headless'):
        firefox_options.add_argument('-headless')
    return firefox_options
Note the - (single dash) preceding headless in add_argument()
For latecomers, it might be worthwhile to take a look at Xvfb; even more helpful can be this tutorial.
Then (in a Linux shell) you can enter:
Xvfb :99 &
export DISPLAY=:99
pytest --driver Firefox --driver-path /usr/local/bin/firefox test_app.py
This provides a virtual framebuffer (a fake screen) for the application, and all of its graphical output goes there.
Note that I did not encounter this problem myself; I'm just providing a solution that helped me overcome the mentioned error with another app.
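If you would rather keep this inside the test suite than in the shell, a rough sketch using the pyvirtualdisplay package (my suggestion, not part of the setup above; it simply wraps Xvfb) could be:

import pytest
from pyvirtualdisplay import Display  # pip install pyvirtualdisplay


@pytest.fixture(scope="session", autouse=True)
def virtual_display():
    # start a fake X display so Firefox can run without a real screen
    display = Display(visible=0, size=(1280, 720))
    display.start()
    yield display
    display.stop()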
I've minimized my problem to a self-contained Flask app + unit test. When this is run with pytest app.py, it fails roughly half the time (29 out of 50 runs) with this error:
E werkzeug.routing.BuildError: Could not build url for endpoint 'thing' with values ['_sa_instance_state']. Did you forget to specify values ['id']?
The frustrating part about this is that adding debugging statements in the post() method makes it always pass (see the comment below).
This feels like a race condition somewhere in the framework. Is SQLAlchemy spawning a thread to perform the commit and update t.id?
I can force it to fail by doing a del t.id at the site of the comment (confirming that the error is coming from a missing t.id). I can force it to pass by doing a t.id = 999 at the same spot.
Am I doing something obviously wrong here or is this a bug in one of the packages?
I'm running python 3.5.2 and my requirements.txt is:
Flask==1.0.2
Flask-RESTful==0.3.6
Flask-SQLAlchemy==2.3.2
Jinja2==2.10
pytest==3.2.2
pytest-repeat==0.4.1
SQLAlchemy==1.2.8
Werkzeug==0.14.1
It may be worth noting that this also failed with earlier versions of most of these packages (flask 0.12, sqlalchemy 1.1.14, etc).
It may also be worth noting that when run with pytest --count=20 app.py it will always pass or fail the entire count, i.e. 20 passes or 20 failures. But about half the overall runs will still fail.
Here's the app:
#!/usr/bin/env python3
import json

from flask import Flask
from flask_restful import Api, Resource, fields, marshal, reqparse
from flask_sqlalchemy import SQLAlchemy
import pytest

app = Flask(__name__)
api = Api(app)
db = SQLAlchemy(app)


class Thing(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), unique=True)


thing_fields = {
    'name': fields.String,
    'uri': fields.Url('thing'),
}


class ThingListAPI(Resource):
    def __init__(self):
        self.reqparse = reqparse.RequestParser()
        self.reqparse.add_argument('name', type=str, location='json')
        super().__init__()

    def post(self):
        args = self.reqparse.parse_args()
        t = Thing(name=args['name'])
        db.session.add(t)
        db.session.commit()
        ### <<< at this point inserting pretty much any statement
        ### will make the test pass >>>
        return {'thing': marshal(t, thing_fields)}, 201


class ThingAPI(Resource):
    def get(self, id):
        pass


api.add_resource(ThingListAPI, '/things', endpoint='things')
api.add_resource(ThingAPI, '/things/<int:id>', endpoint='thing')


@pytest.fixture
def stub_app():
    app.config['TESTING'] = True
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///:memory:'
    client = app.test_client()
    db.create_all()
    yield client
    db.drop_all()


def test_thing_post(stub_app):
    resp = stub_app.post('/things', data=json.dumps({'name': 'stuff'}),
                         content_type='application/json')
    assert(resp.status_code == 201)
If you add db.session.refresh(t) in post() after the commit, it solves the problem. I don't know whether it's the right thing to do (SQLAlchemy is quite complicated and I have little experience with it), but it shows that the t object's state sometimes isn't refreshed after the commit. There probably is a race condition: sometimes SQLAlchemy gets enough machine time and pulls the id just in time, and sometimes it doesn't, so the id attribute still doesn't exist as far as Flask is concerned (for SQLite it does exist, but the new state hasn't been pulled from the DB).
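A minimal sketch of the changed post() method (this refresh call is the only change to the app above):

    def post(self):
        args = self.reqparse.parse_args()
        t = Thing(name=args['name'])
        db.session.add(t)
        db.session.commit()
        # re-fetch the row so t.id is populated before marshal() builds the 'thing' URI
        db.session.refresh(t)
        return {'thing': marshal(t, thing_fields)}, 201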
I need some help setting up unit tests for Google Cloud Endpoints. Using WebTest, all requests answer with AppError: Bad response: 404 Not Found. I'm not really sure if endpoints is compatible with WebTest.
This is how the application is generated:
application = endpoints.api_server([TestEndpoint], restricted=False)
Then I use WebTest this way:
client = webtest.TestApp(application)
client.post('/_ah/api/test/v1/test', params)
Testing with curl works fine.
Should I write tests for endpoints different? What is the suggestion from GAE Endpoints team?
After much experimenting and looking at the SDK code I've come up with two ways to test endpoints within python:
1. Using webtest + testbed to test the SPI side
You are on the right track with webtest, but just need to make sure you correctly transform your requests for the SPI endpoint.
The Cloud Endpoints API front-end and the EndpointsDispatcher in dev_appserver transform calls to /_ah/api/* into corresponding "backend" calls to /_ah/spi/*. The transformation seems to be:
All calls are application/json HTTP POSTs (even if the REST endpoint is something else).
The request parameters (path, query and JSON body) are all merged together into a single JSON body message.
The "backend" endpoint uses the actual python class and method names in the URL, e.g. POST /_ah/spi/TestEndpoint.insert_message will call TestEndpoint.insert_message() in your code.
The JSON response is only reformatted before being returned to the original client.
This means you can test the endpoint with the following setup:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    tb.setup_env(current_version_id='testbed.version')  # needed because endpoints expects a . in this value
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
The thing here is that you can easily set up appropriate fixtures in the datastore or other GAE services prior to calling the endpoint, so you can more fully assert the expected side effects of the call.
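For instance, a datastore fixture can be written directly into the stub before the post (a sketch; the SomeModel kind, its property, and the request body are hypothetical and only for illustration):

from google.appengine.ext import ndb


class SomeModel(ndb.Model):  # hypothetical kind, only for illustration
    name = ndb.StringProperty()


def test_endpoint_insert_with_fixture(self):
    # the datastore stub activated in setUp() receives this entity,
    # so the endpoint under test can read it
    SomeModel(name='existing thing').put()
    msg = {'name': 'new thing'}  # hypothetical request body
    app = endpoints.api_server([TestEndpoint], restricted=False)
    testapp = webtest.TestApp(app)
    resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg)
    self.assertEqual(resp.status_int, 200)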
2. Starting the development server for a full integration test
You can start the dev server within the same python environment using something like the following:
import sys
import os

import dev_appserver
sys.path[1:1] = dev_appserver._DEVAPPSERVER2_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime
# ...

def setUp(self):
    APP_CONFIGS = ['/path/to/app.yaml']
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args([
        '--admin_port', '0',
        '--port', '8123',
        '--datastore_path', ':memory:',
        '--logs_path', ':memory:',
        '--skip_sdk_update_check',
        '--',
    ] + APP_CONFIGS)
    server = devappserver2.DevelopmentServer()
    server.start(options)
    self.server = server

def tearDown(self):
    self.server.stop()
Now you need to issue actual HTTP requests to localhost:8123 to run tests against the API, but again you can interact with the GAE APIs to set up fixtures, etc. This is obviously slow, as you're creating and destroying a new dev server for every test run.
At this point I use the Google API Python client to consume the API instead of building the HTTP requests myself:
import apiclient.discovery
# ...

def test_something(self):
    apiurl = 'http://%s/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest' \
        % self.server.module_to_address('default')
    service = apiclient.discovery.build('testendpoint', 'v1', apiurl)
    res = service.testresource().insert({... message ...}).execute()
    self.assertEquals(res, {... expected response as dict ...})
This is an improvement over testing with CURL as it gives you direct access to the GAE APIs to easily set up fixtures and inspect internal state. I suspect there is an even better way to do integration testing that bypasses HTTP by stitching together the minimal components in the dev server that implement the endpoint dispatch mechanism, but that requires more research time than I have right now.
The webtest setup can be simplified to reduce naming bugs. For the following TestApi:
import endpoints
import protorpc
import logging


class ResponseMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)


class RequestMessageClass(protorpc.messages.Message):
    message = protorpc.messages.StringField(1)


@endpoints.api(name='testApi', version='v1',
               description='Test API',
               allowed_client_ids=[endpoints.API_EXPLORER_CLIENT_ID])
class TestApi(protorpc.remote.Service):

    @endpoints.method(RequestMessageClass,
                      ResponseMessageClass,
                      name='test',
                      path='test',
                      http_method='POST')
    def test(self, request):
        logging.info(request.message)
        return ResponseMessageClass(message="response message")
the tests.py should look like this:
import webtest
import logging
import unittest

from google.appengine.ext import testbed
from protorpc.remote import protojson
import endpoints

from api.test_api import TestApi, RequestMessageClass, ResponseMessageClass


class AppTest(unittest.TestCase):
    def setUp(self):
        logging.getLogger().setLevel(logging.DEBUG)
        tb = testbed.Testbed()
        tb.setup_env(current_version_id='testbed.version')
        tb.activate()
        tb.init_all_stubs()
        self.testbed = tb

    def tearDown(self):
        self.testbed.deactivate()

    def test_endpoint_testApi(self):
        application = endpoints.api_server([TestApi], restricted=False)
        testapp = webtest.TestApp(application)
        req = RequestMessageClass(message="request message")
        response = testapp.post('/_ah/spi/' + TestApi.__name__ + '.' + TestApi.test.__name__,
                                protojson.encode_message(req),
                                content_type='application/json')
        res = protojson.decode_message(ResponseMessageClass, response.body)
        self.assertEqual(res.message, 'response message')


if __name__ == '__main__':
    unittest.main()
I tried everything I could think of to allow these to be tested in the normal way. I tried hitting the /_ah/spi methods directly, as well as trying to create a new protorpc app using service_mappings, to no avail. I'm not a Googler on the endpoints team, so maybe they have something clever to allow this to work, but it doesn't appear that simply using webtest will work (unless I missed something obvious).
In the meantime you can write a test script that starts the App Engine test server with an isolated environment and just issues HTTP requests to it.
Example of running the server with an isolated environment (bash, but you can easily run this from Python):
DATA_PATH=/tmp/appengine_data

if [ ! -d "$DATA_PATH" ]; then
    mkdir -p $DATA_PATH
fi

dev_appserver.py --storage_path=$DATA_PATH/storage --blobstore_path=$DATA_PATH/blobstore --datastore_path=$DATA_PATH/datastore --search_indexes_path=$DATA_PATH/searchindexes --show_mail_body=yes --clear_search_indexes --clear_datastore .
You can then just use requests to test, à la curl:
requests.get('http://localhost:8080/_ah/...')
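For example, exercising the endpoint from the question could look roughly like this (a sketch; the path mirrors the curl-tested URL above, and the payload is an assumption on my part):

import requests

BASE = 'http://localhost:8080'


def test_insert_via_dev_appserver():
    # the dev_appserver started above must be listening on port 8080
    resp = requests.post(BASE + '/_ah/api/test/v1/test',
                         json={'message': 'hello'})  # hypothetical payload
    assert resp.status_code == 200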
If you don't want to test the full HTTP stack as described by Ezequiel Muns, you can also just mock out endpoints.method and test your API definition directly:
import unittest

from google.appengine.api.users import User
import endpoints


def null_decorator(*args, **kwargs):
    def decorator(method):
        def wrapper(*args, **kwargs):
            return method(*args, **kwargs)
        return wrapper
    return decorator


endpoints.method = null_decorator
# the decorator needs to be mocked out before you load your endpoint api definitions
from mymodule import api


class FooTest(unittest.TestCase):
    def setUp(self):
        self.api = api.FooService()

    def test_bar(self):
        # pass protorpc messages directly
        self.api.foo_bar(api.MyRequestMessage(some='field'))
My solution uses one dev_appserver instance for the entire test module, which is faster than restarting the dev_appserver for each test method.
By using Google's Python API client library, I also get the simplest and at the same time most powerful way of interacting with my API.
import unittest
import sys
import os

from apiclient.discovery import build

import dev_appserver
sys.path[1:1] = dev_appserver.EXTRA_PATHS

from google.appengine.tools.devappserver2 import devappserver2
from google.appengine.tools.devappserver2 import python_runtime

server = None


def setUpModule():
    # starting a dev_appserver instance for testing
    path_to_app_yaml = os.path.normpath('path_to_app_yaml')
    app_configs = [path_to_app_yaml]
    python_runtime._RUNTIME_ARGS = [
        sys.executable,
        os.path.join(os.path.dirname(dev_appserver.__file__),
                     '_python_runtime.py')
    ]
    options = devappserver2.PARSER.parse_args(['--port', '8080',
                                               '--datastore_path', ':memory:',
                                               '--logs_path', ':memory:',
                                               '--skip_sdk_update_check',
                                               '--',
                                               ] + app_configs)
    global server
    server = devappserver2.DevelopmentServer()
    server.start(options)


def tearDownModule():
    # shutting down dev_appserver instance after testing
    server.stop()


class MyTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # build a service object for interacting with the api
        # dev_appserver must be running and listening on port 8080
        api_root = 'http://127.0.0.1:8080/_ah/api'
        api = 'my_api'
        version = 'v0.1'
        discovery_url = '%s/discovery/v1/apis/%s/%s/rest' % (api_root, api, version)
        cls.service = build(api, version, discoveryServiceUrl=discovery_url)

    def setUp(self):
        # create a parent entity and store its key for each test run
        body = {'name': 'test parent'}
        response = self.service.parent().post(body=body).execute()
        self.parent_key = response['parent_key']

    def test_post(self):
        # test my post method
        # the tested method also requires a path argument "parent_key"
        # .../_ah/api/my_api/sub_api/post/{parent_key}
        body = {'SomeProjectEntity': {'SomeId': 'abcdefgh'}}
        parent_key = self.parent_key
        req = self.service.sub_api().post(body=body, parent_key=parent_key)
        response = req.execute()
etc..
After digging through the sources, I believe things have changed in endpoints since Ezequiel Muns's (excellent) answer in 2014. For method 1 you now need to request from /_ah/api/* directly and use the correct HTTP method instead of using the /_ah/spi/* transformation. This makes the test file look like this:
from google.appengine.ext import testbed
import webtest
# ...

def setUp(self):
    tb = testbed.Testbed()
    # Setting current_version_id doesn't seem necessary anymore
    tb.activate()
    tb.init_all_stubs()
    self.testbed = tb

def tearDown(self):
    self.testbed.deactivate()

def test_endpoint_insert(self):
    app = endpoints.api_server([TestEndpoint])  # restricted is no longer required
    testapp = webtest.TestApp(app)
    msg = {...}  # a dict representing the message object expected by insert,
                 # to be serialised to JSON by webtest
    resp = testapp.post_json('/_ah/api/test/v1/insert', msg)
    self.assertEqual(resp.json, {'expected': 'json response msg as dict'})
For searching's sake, the symptom of using the old method is endpoints raising a ValueError with Invalid request path: /_ah/spi/whatever. Hope that saves someone some time!