Can FastAPI test client be called by something external? - python

I am writing tests for a FastAPI application, and for the first time in my life I need to do load testing (with Locust).
For the load test I've made a fixture that launches the application with uvicorn in a separate process.
But it causes some issues.
I thought: maybe I could use the FastAPI test client for that, but then discovered that I don't understand how the test client works, because apparently I cannot call it from outside.
Can anyone explain why, and whether I can make TestClient reachable for external calls?
Setting the base URL to localhost does not help.
from fastapi import FastAPI
from fastapi.testclient import TestClient
import requests

app = FastAPI()

@app.get("/")
def index():
    return "ok"

if __name__ == "__main__":
    test_client = TestClient(app)
    print(f"{test_client.base_url=}")  # http://testserver

    r_client = test_client.get("/")
    r_requests = requests.get(f"{test_client.base_url}/")

    assert r_client.status_code == 200  # True
    assert r_requests.status_code == 200  # False, ConnectionError. Why?

The TestClient isn't a webserver - it's a client that simulates regular http requests. It does this by emulating the ASGI interface and setting up the request context without actually having the overhead of making an HTTP request.
Since there is no server running, you can't make any requests against it from outside itself - it just lets you interact with the ASGI application as any regular outside client would do, without the extra overhead of going through a full http stack. This makes testing more efficient and lets you test your applications without having an active http server running while the tests run.
If you're going to load test an application, use the same HTTP stack as you would in production (uvicorn, gunicorn, etc.), otherwise the test won't really reflect how your application and setup behave under load. If you're doing performance regression testing, the TestClient will probably be sufficient (since your application is the component where performance varies).
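For the load-testing case from the question, a fixture that serves the real app with uvicorn in a separate process is a reasonable approach; below is a minimal sketch of such a fixture. The module path "myproject.main:app", the port, and the fixture name are assumptions, not taken from the question.

import multiprocessing
import time

import pytest
import uvicorn


def _run_server():
    # Serve the actual ASGI app over a real socket so Locust/requests can reach it.
    uvicorn.run("myproject.main:app", host="127.0.0.1", port=8001, log_level="warning")


@pytest.fixture(scope="session")
def live_server_url():
    proc = multiprocessing.Process(target=_run_server, daemon=True)
    proc.start()
    time.sleep(1)  # crude startup wait; poll the port in real code
    yield "http://127.0.0.1:8001"
    proc.terminate()
    proc.join()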

Related

Use Multiple Azure Application Insights in one Flask app

Hi, I have a Flask application that is built as a Docker image to serve as an API.
This image is deployed to multiple environments (DEV/QA/PROD),
and I want to use a separate Application Insights resource for each environment.
Using a single Application Insights resource works fine.
Here is a code snippet:
app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = APPINSIGHTS_INSTRUMENTATIONKEY
appinsights = AppInsights(app)

@app.after_request
def after_request(response):
    appinsights.flush()
    return response
But to support multiple environments I need to set app.config with the instrumentation key of the matching Application Insights resource.
I thought of this solution, which throws errors.
Here is a snippet:
app = Flask(__name__)

def monitor(key):
    app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = key
    appinsights = AppInsights(app)

    @app.after_request
    def after_request(response):
        appinsights.flush()
        return response

@app.route("/")
def hello():
    hostname = urlparse(request.base_url).hostname
    print(hostname)
    if hostname == "dev url":
        print('Dev')
        monitor('3ed57a90-********')
    if hostname == "prod url":
        print('prod')
        monitor('941caeca-********-******')
    return "hello"
This example contains the function monitor, which reads the URL and decides which instrumentation key to use so metrics are sent to the right place, but apparently I can't do that kind of setup once the first request has been handled. (Is there a way a config variable can be changed based on the URL?)
Error message:
AssertionError: The setup method 'errorhandler' can no longer be called on the application. It has already handled its first request, any changes will not be applied consistently. Make sure all imports, decorators, functions, etc. needed to set up the application are done before running it.
I hope someone can guide me to a better solution.
Thanks in advance.
AFAIK, the Application Insights SDK collects telemetry data and sends it to Azure in batches, so you should keep a single Application Insights resource for an application. Use separate resources per environment (e.g. staging slots) if different deployments of the same application need to report separately.
Application Insights tracks the life cycle of the service from the moment a request starts until its response completes, and it does so from application start to application stop. Because of that, you can't use more than one Application Insights resource in a single running application.
The SDK starts gathering telemetry when the application starts and stops when the application stops; flush() is only there to push buffered telemetry out before the application stops.
I tried what you have used, and the log confirms the same.
With a single Application Insights resource I was able to collect all the telemetry information.
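One common way to get per-environment resources without reconfiguring anything at request time (a sketch, not taken from the answer above) is to pick the instrumentation key from an environment variable at container startup, so each DEV/QA/PROD deployment sets its own key. The environment variable name and the import path are assumptions about the setup.

import os

from flask import Flask
from applicationinsights.flask.ext import AppInsights

app = Flask(__name__)
# Each environment's container sets its own key, e.g. via docker run -e or K8s env.
app.config['APPINSIGHTS_INSTRUMENTATIONKEY'] = os.environ['APPINSIGHTS_INSTRUMENTATIONKEY']
appinsights = AppInsights(app)


@app.after_request
def after_request(response):
    appinsights.flush()
    return response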

How to run flask along side my tests in PyTest?

The usual flags: I'm new to Python, I'm new to PyTest, I'm new to Flask.
I need to create some server independent tests to test an api which calls a third-party.
I cannot access that api directly, but I can tell it what url to use for each third-party.
So what I want to do is to have a fake api running on the side (localhost) while I'm running my tests, so when the api that I'm testing needs to consume the third-parties, it uses my fake-api instead.
So I created the following app.py:
from flask import Flask
from src.fakeapi.routes import configure_routes

app = Flask(__name__)
configure_routes(app)

def start_fake_api():
    app.run(debug=True)
And my_test.py:
from src.fakeapi.app import start_fake_api

@start_fake_api()
def test_slack_call():
    send_request_to_api_to_configure_which_url_to_use_to_call_third_party("http://127.0.0.1:5000/")
    send_request_to_api_to_populate_table_using_third_party()
Now, this might be an oversimplified example, but that's the idea. My problem, obviously, is that once I run Flask the process just stays on standby and doesn't continue with the tests.
I want to avoid having to depend on manually running the server before running the tests, and I want to avoid running my tests in parallel.
What's the best way to do this?
Can I somehow execute app.py when I execute pytest? Maybe by altering pytest.ini somehow?
Can I force a new thread just for the server to run?
Thanks in advance!
I don't see a good reason to run a fake server when you can instead use mocking libraries such as requests-mock or responses to stub the third-party responses.
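For instance, with the responses library a third-party call can be stubbed roughly like this; the URL and the test name are made-up placeholders, so treat it as a sketch rather than the asker's actual API:

import requests
import responses


@responses.activate
def test_populates_table_from_third_party():
    # Register a canned reply for the third-party endpoint (URL is hypothetical).
    responses.add(
        responses.GET,
        "https://third-party.example.com/data",
        json={"ok": True},
        status=200,
    )
    # Any requests.get() to that URL inside the code under test now hits the stub.
    resp = requests.get("https://third-party.example.com/data")
    assert resp.json() == {"ok": True}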
That said, if you really do need to run a real server, you could set up a session scoped fixture with a cleanup.
Adding autouse will make the tests automagically start the server, but you can leave that out and just invoke the fixture in your test, à la test_foo(fake_api).
Implementing the TODOed bit can be a little tricky; you'd probably need to set up the Werkzeug server in a way that lets you signal it to stop, e.g. by having it wait on a threading.Event you can then set.
import random
import threading

import pytest


@pytest.fixture(scope="session", autouse=True)
def fake_api():
    app = ...
    port = random.randint(1025, 65535)  # here's hoping no one is on that port
    # use_reloader=False: the reloader cannot run outside the main thread
    t = threading.Thread(target=lambda: app.run(debug=True, port=port, use_reloader=False))
    t.start()
    yield port
    # TODO: implement cleanly terminating the thread here :)
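One way to fill in that TODO, sketched under the assumption that Werkzeug's make_server is acceptable in place of app.run(): make_server returns a server object exposing serve_forever() and shutdown(), so the fixture can stop it cleanly after the tests.

import threading

import pytest
from werkzeug.serving import make_server


@pytest.fixture(scope="session", autouse=True)
def fake_api():
    app = ...  # the fake Flask app from the question
    server = make_server("127.0.0.1", 0, app)  # port 0 lets the OS pick a free one
    port = server.server_port
    t = threading.Thread(target=server.serve_forever)
    t.start()
    yield port
    server.shutdown()  # unblocks serve_forever()
    t.join()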

Flask restful GET doesn't respond within app

I have a flask restful api with an endpoint
api.add_resource(TestGet, '/api/1/test')
and I want to use the data from that endpoint to populate my Jinja template. But every time I try to call it in a sample route like this
@app.route('/mytest')
def mytest():
    t = get('http://localhost:5000/api/1/test')
It never returns anything and just hangs, meaning it is doing something with the request but never returning. Is there a reason I am not able to call it from within the same Flask app? I am able to reach the endpoint from the browser and from another Python REPL. I'm thoroughly confused why this happens and why it never returns anything; I'd at least expect an error.
Here is the entire sample of what I am trying to run
from flask import Flask
from requests import get

app = Flask('test')

from flask_restful import Api, Resource
api = Api(app)

class TestGet(Resource):
    def get(self):
        return {'test': 'message'}

api.add_resource(TestGet, '/test')

@app.route('/something')
def something():
    resp = get('http://localhost:5000/test').json()
    print(resp)

from gevent.wsgi import WSGIServer
WSGIServer(('', 5000), app).serve_forever()
Use app.run(threaded=True) if you just want to debug your program. This will start a new thread for every request.
Please see this SO thread with a nice explanation of Flask's limitations: https://stackoverflow.com/a/20862119/5167302
Specifically, in your case you are hitting this one:
The main issue you would probably run into is that the server is single-threaded. This means that it will handle each request one at a time, serially. This means that if you are trying to serve more than one request (including favicons, static items like images, CSS and Javascript files, etc.) the requests will take longer. If any given requests happens to take a long time (say, 20 seconds) then your entire application is unresponsive for that time (20 seconds).
Hence, by making a request from within a request you are putting your application into a deadlock.
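Applied to the snippet in the question, the suggested fix would look something like the sketch below; it swaps the gevent server for Flask's built-in threaded development server, which is only meant for debugging.

# Debug-only sketch: the threaded dev server gives each request its own thread,
# so the inner requests.get() call can be served while /something is still waiting.
if __name__ == '__main__':
    app.run(port=5000, threaded=True)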

Multithreaded Flask application on Apache server

I have a python script.
Main thread (under if __name__ == '__main__', etc.): when the main thread starts, it launches several threads to listen to data streams and events and to process them. The main thread then starts the Flask application (app.run()). Processed data is sent to the front-end Flask app (no issues here).
The Apache server and mod_wsgi require me to import the app directly, meaning that my other threads won't run.
My dilemma: in the examples I've seen, the .wsgi script does from someapp import app as application. This would only run the Flask application. If I somehow managed to run the Python script as main instead, the Flask application would run on localhost:5000 by default, and changing that or using .run() is not recommended in production.
First of all, is it possible to get this application onto a server in its current structure? How would I get the whole application to work on a server? Would I need to completely restructure it? Is it not possible to specify host 0.0.0.0 and port 80 and then run the Python script instead of just importing the app? Any help is appreciated, as are pointers to other documentation.
Edit: for the sake of testing, I will be using AWS Ubuntu (any other Linux distro can be used/switched to if needed).
The short (and misleading) answer is yes, it is possible (make sure no other program, such as Apache, is already using port 80):
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
However, you should not do that. Not recommended as it states in the documentation:
You can use the builtin server during development, but you should use
a full deployment option for production applications. (Do not use the
builtin development server in production.)
Proxying HTTP traffic through apache2 to Flask is much better.
This way, apache2 can handle all your static files and act as a reverse proxy for your dynamic content, passing those requests to Flask.
To get threads, check the documentation of WSGIDaemonProcess.
An example Apache/mod_wsgi configuration looks like this:
WSGIDaemonProcess mysite processes=3 threads=2 display-name=mod_wsgi
WSGIProcessGroup mysite
WSGIScriptAlias / /some/path/wsgi.py
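The /some/path/wsgi.py referenced above then only needs to expose the Flask app under the name mod_wsgi looks for; a minimal sketch (the project path and the module name someapp are placeholders carried over from the config, not real values):

# Sketch of /some/path/wsgi.py: mod_wsgi imports this file and expects a
# module-level variable called "application".
import sys

# Make the project importable; the path is a placeholder.
sys.path.insert(0, '/some/path')

from someapp import app as application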
I managed to find an answer to this without diverging too far from guides on how to get a Flask application working with Python3 and Apache2.
In short, when you initialise Flask, you most likely do something like this:
from flask import Flask
app = Flask(__name__)
The proposed solution:
import atexit  # for detecting flask exit
import threading
from flask import Flask

shareddata = 0
running = False

def init_app():
    global shareddata
    global running
    running = True

    app = Flask(__name__)

    # some threading goes here, e.g.
    def jointhread():
        global running
        running = False
        t1.join()

    def MyThread1():
        while running:
            pass  # do something

    t1 = threading.Thread(target=MyThread1, args=[])
    t1.start()
    atexit.register(jointhread)
    return app

app = init_app()
Threading might not work under every WSGI setup; use whichever approach is applicable.
I had a similar issue where I had a thread that I wanted to constantly monitor data using an API. I ended up importing the function(s) I wanted threaded into my WSGI file and kicking them off there.
Example
import threading
from main import <threaded_function>
my_thread = threading.Thread(target=<threaded_function>)
my_thread.start()

How do you use tornado.testing for creating WebSocket unit tests?

I'm working on a project that uses Tornado's WebSocket functionality. I see a decent amount of documentation for working with asynchronous code, but nothing on how this can be used to create unit tests that work with their WebSocket implementation.
Does tornado.testing provide the functionality to do this? If so, could someone provide a brief example of how to make it happen?
Thanks in advance.
As @Vladimir said, you can still use AsyncHTTPTestCase to create/manage the test web server instance, and you can test WebSockets in much the same way as you would normal HTTP requests; there's just no syntactic sugar to help you.
Tornado also has its own WebSocket client, so there's no need (as far as I've seen) to use a third-party client; perhaps it's a recent addition, though. So try something like:
import tornado.testing
import tornado.web
import tornado.websocket


class TestWebSockets(tornado.testing.AsyncHTTPTestCase):
    def get_app(self):
        # Required override for AsyncHTTPTestCase, sets up a dummy
        # webserver for this test.
        app = tornado.web.Application([
            (r'/path/to/websocket', MyWebSocketHandler)
        ])
        return app

    @tornado.testing.gen_test
    def test_websocket(self):
        # self.get_http_port() gives us the port of the running test server.
        ws_url = "ws://localhost:" + str(self.get_http_port()) + "/path/to/websocket"
        # We need ws_url so we can feed it into our WebSocket client.
        # ws_url will read (e.g.) "ws://localhost:56436/path/to/websocket".
        ws_client = yield tornado.websocket.websocket_connect(ws_url)

        # Now we can run a test on the WebSocket.
        ws_client.write_message("Hi, I'm sending a message to the server.")
        response = yield ws_client.read_message()
        self.assertEqual(response, "Hi client! This is a response from the server.")
        # ...etc
Hopefully that's a good starting point anyway.
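For completeness, MyWebSocketHandler isn't shown in the answer; a minimal echo-style handler that would make the test above pass might look like this sketch (the handler name comes from the answer, the reply text simply mirrors the assertion):

import tornado.websocket


class MyWebSocketHandler(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        # Reply with the canned string the test asserts on.
        self.write_message("Hi client! This is a response from the server.")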
I've attempted to implement some unit tests on tornado.websocket.WebSocketHandler-based handlers and got the following results:
First of all, AsyncHTTPTestCase definitely lacks WebSocket support.
Still, one can use it at least to manage the IOLoop and application setup, which is significant. Unfortunately, there was no WebSocket client shipped with Tornado at the time, so a separately developed library has to step in.
Here is a unit test of WebSockets using Jef Balog's tornado websocket client.
This answer (and the question) may be of interest; I use ws4py for the client and Tornado's AsyncTestCase, which simplifies the whole thing.
