My project has two apps: api (for handling API endpoints) and api-content (for handling file uploads), each listening on a different port. Does anyone know how I can run my test cases with both servers launched? E.g., file uploads need authentication on the api server. My test cases derive from TestCase and have create_app implemented, but that seems to be designed for testing with only one app.
Create two test clients, one for each app:

# assuming apiapp and apicontentapp are your two Flask application objects
apiapp.config['TESTING'] = True
apitest = apiapp.test_client()

apicontentapp.config['TESTING'] = True
contenttest = apicontentapp.test_client()

and use these two test clients to access either app. There is no need to actually run a server.
Note that this goes well beyond unit testing; you are in the realm of integration or even functional testing here.
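For illustration, a sketch of how the two clients could combine in one test; the /login and /upload endpoints, the token field, and the Bearer header are assumptions, not details from the question:

import io

def test_authenticated_upload():
    # hypothetical: authenticate against the api app first
    resp = apitest.post('/login', data={'username': 'u', 'password': 'p'})
    token = resp.get_json()['token']

    # then upload to the api-content app, reusing the credential
    resp = contenttest.post(
        '/upload',
        data={'file': (io.BytesIO(b'hello'), 'hello.txt')},
        headers={'Authorization': f'Bearer {token}'},
    )
    assert resp.status_code == 200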
I wrote a Flask web application for a system that our company uses. However, we have another web application running on Node.js. The "problem" is that my colleague writes everything in Node, while I write everything in Python.
We want to implement both applications on one webpage - for example:
My application will run on example.com/assistant
His application will run on example.com/app1 and example.com/app2
How can we do this? Can we somehow share the templates I use with his application, and vice versa?
Thank you in advance!
Serving different apps from the same domain
You can use HAProxy to direct requests to a specific service based on ACL rules.
You can use a path_beg rule to direct any request beginning with a specific path to the corresponding backend. See the example below.
/etc/haproxy/haproxy.cfg
# only relevant part of the config file
# assumes all apps are on one machine
frontend http-in
    bind *:80

    acl py_app1   path_beg /assistant
    acl node_app1 path_beg /app1
    acl node_app2 path_beg /app2

    # without these, the ACLs above would have no effect
    use_backend py_app1 if py_app1
    use_backend node_app1 if node_app1
    use_backend node_app2 if node_app2

    default_backend main_servers

backend py_app1
    server flask_app 127.0.0.1:5000

backend node_app1
    server nodejs1 127.0.0.1:4001

backend node_app2
    server nodejs2 127.0.0.1:4002

backend main_servers
    server other1 127.0.0.1:3000  # nginx, apache, or whatever
Sharing template code between apps
This would be harder, as you would need to agree on some template format that is language- and framework-agnostic, and probably logic-less.
Mustache claims to be a "framework-agnostic way to render logic-free views". I used it sparingly a few years ago, so it is the first that came to mind; however, you should do more research on this, as there may be a better fit.
Python implementation
JS implementation
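For illustration, here is the idea from the Python side, using pystache as one of the available implementations; the Node app would render the identical template file with mustache.js:

import pystache  # one Python implementation of Mustache

# the shared, logic-less template would normally live in a file both apps read;
# inlined here for brevity
template = '<div class="user"><h2>{{name}}</h2><p>{{email}}</p></div>'

html = pystache.render(template, {'name': 'Alice', 'email': 'alice@example.com'})
print(html)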
The problem would be keeping the templates in sync with the apps without breaking the views. If a template changes, you would need to test all apps that use that template file. You will also probably block one another from updating your apps at different times: if one of you changes a template file, you must come to a consensus, update all relevant apps, and deploy them at the same time.
I am wondering if there is a way to obtain the hostname of a Django application when running tests. That is, I would like the tests to pass both locally and when run on the staging server, so I need to know whether the base URL is http://localhost:<port> or http://staging.example.com, because some tests query particular URLs.
I found answers on how to do this inside templates, but that does not help, since there is no response object whose hostname I could check.
How can one find out the hostname outside the views/templates? Is it stored somewhere in the Django settings?
Why do you need to know the hostname? Tests can run just fine without it if you use the test client; you do not need to know anything about the system they're running on.
You can also mark tests with a tag and then have the CI system run the tests including that tag.
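For instance, a minimal sketch with Django's tag decorator; the tag name is arbitrary:

from django.test import TestCase, tag

@tag('staging')
class HostnameDependentTests(TestCase):
    def test_external_url(self):
        ...

# then have CI run only these:
#   python manage.py test --tag staging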
And finally there is the LiveServerTestCase:
LiveServerTestCase does basically the same as TransactionTestCase with one extra feature: it launches a live Django server in the background on setup, and shuts it down on teardown. This allows the use of automated test clients other than the Django dummy client such as, for example, the Selenium client, to execute a series of functional tests inside a browser and simulate a real user’s actions.
The live server listens on localhost and binds to port 0 which uses a free port assigned by the operating system. The server’s URL can be accessed with self.live_server_url during the tests.
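A minimal sketch of that; the test class and the / route are placeholders:

from urllib.request import urlopen

from django.test import LiveServerTestCase

class HomePageTests(LiveServerTestCase):
    def test_homepage_responds(self):
        # self.live_server_url looks like http://localhost:<some free port>
        with urlopen(self.live_server_url + '/') as resp:
            self.assertEqual(resp.status, 200)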
Additional information from comments:
You can test whether the URL of an image file is present in your response by testing for the MEDIA_URL:

# note: MEDIA_URL already ends with a trailing slash
self.assertContains(response, f'{settings.MEDIA_URL}default-avatar.svg')
You can test for the existence of an upload in various ways, but the easiest one is to check whether there is a file object associated with the FileField; accessing it raises ValueError if there is none.
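Inside a test, that check could look like this; the Profile model and its avatar field are hypothetical:

profile = Profile.objects.get(pk=1)
try:
    profile.avatar.file  # accessing .file raises ValueError if no file is attached
except ValueError:
    self.fail('expected an uploaded avatar')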
I am currently learning to build a SOAP web service with Django and spyne. I have successfully tested my model using unit tests. However, when I tried to test all those @rpc functions, I had no luck at all.
What I have tried in testing those @rpc functions:
1. Put dummy data into the model database
2. Start a server at localhost:8000
3. Create a suds.Client object that can communicate with localhost:8000
4. Invoke @rpc functions from the suds.Client object and test whether the output matches what I expect
However, when I run the test, I believe the test gets blocked by the server running at localhost:8000, so no test code can run while the server is up.
I tried to make the server run on a different thread, but that messed up my test even more.
I have searched as much as I could online and found no materials that can answer this question.
TL;DR: how do you test @rpc functions in a unit test?
I believe that if a test exercises a running service, it should not be called a unit test.
You might want to consider using factory_boy or mock, both Python modules for faking or mocking objects; for instance, you can fake the object that provides the response to your @rpc call.
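A sketch of the mock approach; fetch_user and get_user are hypothetical stand-ins for your code and your @rpc method:

import unittest
from unittest import mock

def fetch_user(client, user_id):
    # code under test: calls a SOAP @rpc method through a suds-style client
    return client.service.get_user(user_id)

class RpcFunctionTest(unittest.TestCase):
    def test_get_user_without_a_server(self):
        # the fake client answers the call in-process, no localhost:8000 needed
        fake_client = mock.Mock()
        fake_client.service.get_user.return_value = {'id': 1, 'name': 'alice'}

        result = fetch_user(fake_client, 1)

        self.assertEqual(result['name'], 'alice')
        fake_client.service.get_user.assert_called_once_with(1)

if __name__ == '__main__':
    unittest.main()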
I'm currently running a t2.micro instance on EC2. I have the HTML/web interface side of it working, along with a MySQL database.
The site allows users to register and stores them in the DB via a PHP script.
I want there to be an actual Python application that queries the MySQL database and returns user data, to then be executed in a Python script.
What I cannot find is whether I should host this Python application as a totally separate instance or whether it can exist on the same instance, in a different directory. I ultimately just need to query the database, which makes me think it must exist on the same instance.
Could someone please provide some guidance?
Let me just be clear: this is not a Python web app. The Python backend is entirely separate except for making queries against the database.
Either approach is possible, but there are pros & cons to each.
Running a separate Python app on the same server:
Pros:
Setting up local access to the database is fairly simple
Only need to handle backups or making snapshots, etc. for a single instance
Cons:
Harder to scale up individual pieces if you need more memory, processing power, etc. in the future
Running the Python app on a separate server:
Pros:
Separate pieces means you can scale up & down the hardware each piece is running on, according to their individual needs
If you're using all micro instances, you get more resources to work with, without any extra costs (assuming you're still meeting all the other 'free tier eligible' criteria)
Cons:
In general, more pieces == more time spent on configuration, administration tasks, etc.
You have to open up the database to non-local access
Simplest: open up the database to access from anywhere (i.e., all remote IP addresses) and have the Python app log in over the internet
Somewhat safer, more complex: set the Python app server up with an elastic IP, open up the database to access only from that address
Much safer, more complex: set up your own virtual private cloud (VPC), and allow connections to the database only from within the VPC. You'd have to configure public access for each of the servers for whatever public traffic you'll have, presumably ports 80 and/or 443.
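For the database-access piece itself, a sketch of the separate Python process querying MySQL, with PyMySQL as one driver option and all credentials as placeholders:

import pymysql  # pip install pymysql

# connects over localhost when both pieces share an instance;
# swap host for the DB server's address in the split setup
conn = pymysql.connect(host='127.0.0.1', user='appuser',
                       password='change-me', database='mydb')
try:
    with conn.cursor() as cur:
        cur.execute('SELECT id, username FROM users WHERE id = %s', (1,))
        print(cur.fetchone())
finally:
    conn.close()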
I'm building a Django app with an API built on Piston. For the sake of keeping everything as DRY as possible and the API complete, I'd like my internal applications to call the API rather than the models (kind of a proxy-view-controller approach, a la https://github.com/raganwald/homoiconic/blob/master/2010/10/vc_without_m.md, but all on one Django install for now). So the basic setup is:
Model -> API -> Application -> User Client
I can overload some core Piston classes to create an internal client interface for the application, but I'm wondering if I could just use the Django Test Client to accomplish the same thing. So to create an article, rather than calling the model I would run:
from django.test.client import Client

c = Client()
article = c.post('/api/articles', {
    'title': 'My Title',
    'content': 'My Content',
})
Is there a reason I shouldn't use the test client to do this? (performance, for instance) Is there a better tool that's more tailored for this specific purpose?
After reviewing the code for TestClient, it doesn't appear to have any additional testing-related overhead; rather, it just functions as a basic client for internal requests. I'll be using the test client as the internal client, and using Piston's DjangoEmitter to get model objects back from the API.
Only testing will tell whether the internal request mechanism is too much of a performance hit.
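For reference, a sketch of the internal-client pattern described above; create_article is a hypothetical wrapper, and the JSON decoding assumes the API is configured to emit JSON:

import json

from django.test.client import Client

def create_article(title, content):
    # create an article through the internal API rather than the model layer
    client = Client()
    response = client.post('/api/articles', {'title': title, 'content': content})
    if response.status_code not in (200, 201):
        raise RuntimeError(f'internal API call failed: {response.status_code}')
    return json.loads(response.content)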