I am wondering if there is a way to obtain the hostname of a Django application when running tests. That is, I would like the tests to pass both locally and when run at the staging server. Hence I need to know whether the base URL is http://localhost:<port> or http://staging.example.com, because some tests query particular URLs.
I found answers on how to do it inside templates, but that does not help since there is no response object to check the hostname.
How can one find out the hostname outside the views/templates? Is it stored in Django settings somewhere?
Why do you need to know the hostname? Tests can run just fine without it, if you use the test client. You do not need to know anything about the system they're running on.
You can also mark tests with a tag and then have the CI system run only the tests with that tag.
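For example, a minimal sketch using Django's tag decorator (the tag name staging_only is an assumption):

from django.test import TestCase, tag

class ExternalURLTests(TestCase):
    @tag('staging_only')  # hypothetical tag name; use whatever your CI filters on
    def test_staging_endpoint(self):
        ...  # a test that should only run on the staging server

The CI system can then select those tests with python manage.py test --tag staging_only, or skip them locally with --exclude-tag staging_only.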
And finally there is the LiveServerTestCase:
LiveServerTestCase does basically the same as TransactionTestCase with one extra feature: it launches a live Django server in the background on setup, and shuts it down on teardown. This allows the use of automated test clients other than the Django dummy client such as, for example, the Selenium client, to execute a series of functional tests inside a browser and simulate a real user’s actions.
The live server listens on localhost and binds to port 0 which uses a free port assigned by the operating system. The server’s URL can be accessed with self.live_server_url during the tests.
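A minimal sketch of using it (the /health/ endpoint is an assumption):

from urllib.request import urlopen

from django.test import LiveServerTestCase

class LiveServerSmokeTest(LiveServerTestCase):
    def test_server_is_reachable(self):
        # self.live_server_url looks like "http://localhost:54321"
        response = urlopen(self.live_server_url + '/health/')  # hypothetical endpoint
        self.assertEqual(response.status, 200)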
Additional information from comments:
You can test if the URL of an image file is present in your response by testing for the MEDIA_URL:
self.assertContains(response, f'{settings.MEDIA_URL}default-avatar.svg')
You can test for the existence of an upload in various ways, but the easiest is to check whether there's a file object associated with the FileField: accessing it raises ValueError if there is not.
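A minimal sketch, assuming an instance with a FileField named upload (the field name is an assumption):

def has_file(instance):
    # Accessing .file on an empty FileField raises ValueError.
    try:
        instance.upload.file
        return True
    except ValueError:
        return False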
So, I am running a Flask web app as the frontend to a small SQLite inventory database. I am using Flask-SQLAlchemy for all of the database interaction. I have a search feature built in, so that you could (for example) find all pieces of hardware assigned to Person X. Or, find all pieces of hardware from Project ABC that have the status Available. The search feature supports optional criteria.
For security reasons, I can't post executable code here, but this is my search function:
from sqlalchemy import and_  # needed for the conjunction below

for system in db.session.query(System).filter(and_(
        System.assignee_id.like("%{}%".format(assignee_id)),
        System.serial.like("%{}%".format(serial)),
        System.user_id.like("%{}%".format(user_id)),
        System.project_id.like("%{}%".format(project_id)),
        System.status_id.like("%{}%".format(status_id)))):
    results.append(system)
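One thing worth knowing about this pattern: an empty criterion produces LIKE '%%', which matches any non-NULL value but silently drops rows where that column is NULL, and LIKE's case handling can differ between databases. A minimal sketch of building the filters dynamically so empty criteria are skipped entirely (names follow the question's code):

criteria = {
    "assignee_id": assignee_id,
    "serial": serial,
    "user_id": user_id,
    "project_id": project_id,
    "status_id": status_id,
}

# Apply a LIKE filter only for criteria the user actually filled in.
filters = [
    getattr(System, column).like("%{}%".format(value))
    for column, value in criteria.items()
    if value  # skip empty/None criteria instead of matching "%%"
]

# .filter(*filters) ANDs the clauses; with no filters it returns everything.
results = db.session.query(System).filter(*filters).all()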
The weird thing is, this works perfectly when I run the app in the Development config. But, in my Production config, if I try to use the 'Assignee' or 'User' criteria (both of which are strings of an email address), I get bad results. It shows me a bunch of systems, some of which have the correct assignee, but most of which don't. The only differences between the Production and Development configs are the file path to the SQLite database and the authentication mechanism to sign into the application.
Why would this return different results between the two? It works EXACTLY like it is supposed to in the Dev config.
UPDATE
After some additional debugging as suggested by the comments, it looks like the problem is in the actual data of the ProductionConfig. However, when I browse the .sqlite file in DB Browser, initially everything looks correct. Where is it getting these random assignees?
import json
import logging

import redis
from flask import Flask, request

# decode_responses=True makes GET return str, which json.dumps can serialize
POOL = redis.ConnectionPool(host='localhost', port=6379, db=0, decode_responses=True)

app = Flask(__name__)

@app.route('/get_cohort_curve/', methods=['GET'])
def get_cohort_curve():
    curve = str(request.args.get('curve'))
    cohort = str(request.args.get('cohort'))
    key = curve + cohort
    return get_from_redis(key)

def get_from_redis(key):
    try:
        my_server = redis.Redis(connection_pool=POOL)
        return json.dumps(my_server.get(key))
    except Exception as e:
        logging.error(e)

if __name__ == '__main__':
    app.run()
I need to write unit-tests for this.
How do I test just the route, i.e. a get request goes to the right place?
Do I need to create and destroy instances of the app in the test for each function?
Do I need to create a mock redis connection?
If you are interested in running something in Flask, you could create a virtual environment and test the whole shebang, but in my opinion that is THE HARDEST way to do it.
When I built my site, the essential first steps were installing Redis locally, setting the port, and depositing some data in it under an appropriate key. I did all of my development in iPython (Jupyter) notebooks so that I could test the functions and interactions with Redis before adding the confounding layer of Flask.
Then you set up a flawless template, with solid HTML around it and CSS to style the site. If it works without data as a plain HTML page, then you move on to the Python part of Flask.
Solidify the Flask directories. Make sure that you house them in a virtual environment so that you can call the pages from your browser, with your virtual environment acting as a server.
Then you create your app.py application. I created each of the page functions one at a time and tested to see that it was properly pushing the variables to the page and calling the right template. After you get one page up and running right, with photos and data, move on to the next page's template and its @app.route handler.
Take it very slow, one piece at a time, with debugging on so that you can see where, when, and how you are going wrong. You will only get the site to run with the Redis server on and your virtual environment running.
Then you will have to shut down the VE to edit and reboot to test. At first it is awful, but over time it becomes rote.
EDIT:
If you really want to test just the route, then make an app.py with just the @app.route definition and return just the page (the template you are calling). You can break testing into pieces as small as you like, but you have to be sure that the quanta you pick are executable as a Python script in a notebook, a command-line execution, or a compact, functional, self-contained website... unless you use the package I mentioned in the comment: Flask Unit Testing Applications
And you need to create REAL Redis connections or you will error out.
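To make that concrete, here is a minimal sketch of testing just the route with Flask's built-in test client (this assumes the question's module is importable as app, with app.run() behind an if __name__ == '__main__' guard, and a local Redis running as noted above):

import unittest

from app import app  # the question's module; the module name is an assumption

class RouteTests(unittest.TestCase):
    def setUp(self):
        app.config['TESTING'] = True
        self.client = app.test_client()  # exercises routing without running a server

    def test_get_cohort_curve_route(self):
        # Requires a local Redis, per the note above.
        response = self.client.get('/get_cohort_curve/',
                                   query_string={'curve': 'c1', 'cohort': '2020'})
        self.assertEqual(response.status_code, 200)

if __name__ == '__main__':
    unittest.main()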
Newbie on App Engine here, and I really don't know how to phrase the question, which sadly results in me not knowing what keywords to google; I hope that I really do get help rather than the bashing that a lot of people do.
I'm confused about the difference in behavior between App Engine in production and App Engine on the local development server.
Background info:
Btw this is in Python
Initially I assumed that, when needed or as authored, an instance of the app or module will be created, and that instance will be the one serving multiple requests from different clients. In this behavior, any initialization code will only be run once.
But on the local development server, every time I add something new, especially in main.py, the server catches the new changes and runs them on browser refresh. This made me think: wait... does it run the entire script over and over again on every request?
Question:
Does an instance/module run the entire code on every request, or is this just an added behavior of the dev server to make development easier?
Both your assumptions - about behaviour in production and development - are wrong.
In production, GAE spins up instances as required. This may be in response to increased load, or the host may simply decide after a certain amount of time to recycle an instance by killing it and starting a new one. Initialization code will always be run whenever a new instance is started.
In development, you only get a single instance. However, the server watches your file system for changes. If it detects a change to the code itself, it will restart itself, and therefore re-run the initialization code. But if you don't make any code changes between requests, the existing process continues indefinitely, and init code will not be re-run.
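A minimal sketch to make the distinction visible (webapp2 here is just one common choice for GAE Python; the timestamp is purely illustrative):

import time

import webapp2

# Module-level init: runs once per instance start in production, and on the
# dev server only when a code change triggers a restart.
INSTANCE_STARTED = time.time()

class MainPage(webapp2.RequestHandler):
    def get(self):
        # Constant across all requests served by the same instance.
        self.response.write('Instance started at %f' % INSTANCE_STARTED)

app = webapp2.WSGIApplication([('/', MainPage)])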
My project has 2 apps: api (for handling API endpoints) and api-content (for handling file uploads), each listening on a different port. Does anyone know how I can run my test cases with both servers launched? E.g., file uploads need authentication on the api server. My test cases derive from TestCase and have create_app implemented, but that seems to be designed for testing with only one app.
Create two apps, one for each port:
# apiapp and apicontentapp are the two Flask application objects
apiapp.config['TESTING'] = True
apitest = apiapp.test_client()

apicontentapp.config['TESTING'] = True
contenttest = apicontentapp.test_client()
and use these two test clients to access either app. There is no need to actually run a server.
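For example, a sketch of one test exercising both apps (the /login and /upload endpoints and their form fields are assumptions):

import io
import unittest

class UploadTests(unittest.TestCase):
    def test_authenticated_upload(self):
        # Authenticate against the api app; /login is a hypothetical endpoint.
        response = apitest.post('/login', data={'user': 'u', 'password': 'p'})
        self.assertEqual(response.status_code, 200)

        # Then hit the api-content app; /upload is a hypothetical endpoint.
        response = contenttest.post(
            '/upload', data={'file': (io.BytesIO(b'data'), 'test.txt')})
        self.assertEqual(response.status_code, 200)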
Note that this would go well beyond unit testing, you are in the realm of integration or even functional testing here.
In my Django project I need to be able to check whether a host on the LAN is up using an ICMP ping. I found this SO question which answers how to ping something in Python, and this SO question which links to resources explaining how to use the sudoers file.
The Setting
A Device model stores an IP address for a host on the LAN, and after adding a new Device instance to the DB (via a custom view, not the admin) I envisage checking to see if the device responds to a ping using an AJAX call to an API which exposes the capability.
The Problem
However (from the docstring of a library suggested in the first SO question), "Note that ICMP messages can only be sent from processes running as root."
I don't want to run Django as the root user, since that is bad practice. However, this part of the process (sending an ICMP ping) needs to run as root. If I wish to send off a ping packet from within a Django view to test the liveness of a host, then Django itself has to run as root, since that is the process invoking the ping.
Solutions
These are the solutions I can think of, and my question is: are there any better ways to execute only select parts of a Django project as root, other than these:
Run Django as root (please no!)
Put a "ping request" in a queue that another processes -- run as root -- can periodically check and fulfil. Maybe something like celery.
Is there not a simpler way?
I want something like a "Django run as root" library, is this possible?
Absolutely no way, do not run the Django code as root!
I would run a daemon as root (written in Python, why not) and then use IPC between the Django instance and your daemon. As long as you're sure to validate the content and handle it properly (e.g. use subprocess.call with an array, etc.) and only pass in data (not commands to execute), it should be fine.
Here is an example client and server, using web.py
Server: http://gist.github.com/788639
Client: http://gist.github.com/788658
You'll need to install web.py (webpy.org), but it's worth having around anyway. If you can hard-wire the IP (or hostname) into the server and remove the argument, all the better.
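If you would rather avoid web.py, the same idea works over a plain Unix domain socket; here is a minimal sketch of the Django-side client (the socket path and the line-based protocol are assumptions, and the root daemon on the other end must still validate everything it receives):

import socket

SOCKET_PATH = '/run/pingd.sock'  # hypothetical path owned by the root daemon

def request_ping(host):
    # Only data crosses the boundary; the daemon decides what command to run.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCKET_PATH)
        sock.sendall(host.encode('ascii') + b'\n')
        return sock.recv(64).decode('ascii').strip()  # e.g. 'up' or 'down'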
What's your OS here? You might be able to write a little program that does what you want given a parameter, and stick that in the sudoers file, and give your django user permission to run it as root.
/etc/sudoers
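For example, a sketch of such an entry (the username django and the script path are assumptions; edit with visudo):

# /etc/sudoers.d/ping-host: let the django user run one specific script as root
django ALL=(root) NOPASSWD: /usr/local/bin/ping_host

Django can then invoke it with subprocess.call(['sudo', '/usr/local/bin/ping_host', ip]), passing the IP as data rather than building a shell string.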
I don't know what kind of system you're on, but on any box I've encountered, one does not have to be root to run the command-line ping program (it has the suid bit set, so it becomes root as necessary). So you could just invoke that. It's a bit more overhead, but probably negligible compared to network latency.
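In other words, something like this minimal sketch (the flags are for Linux iputils ping; adjust for your OS):

import subprocess

def host_is_up(ip_address):
    # One packet, one-second timeout; the suid ping binary handles the ICMP part.
    result = subprocess.run(
        ['ping', '-c', '1', '-W', '1', ip_address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0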