Running selenium tests in python and chrome. Between each test, it logs that it's creating a new http connection.
Is it possible to create a connection pool so that a new one isn't created for each test?
Given the limited detail in your question, and the fact that this happens only between tests, I suspect you have something in your setUp or tearDown methods.
Check what gets executed there and see whether that is the source of your problem.
I've been using node-red to trigger communication to a Philips Hue gateway, and I have succeeded in triggering it the way I want. The issue is that I need the action to take place sooner than it does with my current implementation; the only delay comes from establishing the connection. I've looked online, but there doesn't seem to be a simple way to share this sort of connection descriptor across Python scripts. I want to share the descriptor because I could have one script that connects to the gateway and runs an empty while loop, and a second script that just takes over the connection whenever I run it and performs its actions. Apologies if this has been answered before, but I'm not well versed in Python and a lot of the solutions didn't make sense to me. For example, it doesn't seem that Redis would solve my issue.
Thanks
As per @hardillb's comment, the easiest way to control the Philips Hue is to use one of the existing Node-Red Hue nodes:
https://flows.nodered.org/node/node-red-contrib-node-hue
https://flows.nodered.org/node/node-red-contrib-huemagic
If you have special requirements that require use of the Hue Python SDK ... It is possible to use the node-red-contrib-pythonshell node to run a python script that stays alive (using the node's "Continuous" option) and have Node-Red send messages to the script (using the Stdin option). There are some simple examples in the node's test directory: https://github.com/namgk/node-red-contrib-pythonshell/tree/master/test.
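A minimal long-lived script for that setup could look like the sketch below: the (slow) gateway connection is made once at startup, and each Node-Red message then reuses it. Note connect() is a placeholder here; with the real Hue SDK it might be something like phue.Bridge('192.168.1.x'), which is an assumption about your setup:

```python
import sys

def connect():
    """One-time (slow) gateway connection. Placeholder object here;
    with the phue SDK this might be phue.Bridge('192.168.1.x') (assumption)."""
    return object()

def handle(bridge, command):
    # Act on the already-open connection; here we just echo the command.
    return f"handled: {command}"

if __name__ == "__main__":
    bridge = connect()  # paid once at startup, not per message
    for line in sys.stdin:  # each Node-Red message arrives as a line on stdin
        cmd = line.strip()
        if cmd:
            print(handle(bridge, cmd), flush=True)  # goes back to Node-Red
```

With the node's "Continuous" option the loop keeps running between messages, so the connection-setup delay is only paid once.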
I am setting up the test platform for a multi-tenant system.
For each written test, I want to create a test for each tenant and wrap the entire test run to change the database connection and some thread-local variables before the test, without breaking the database teardown where it flushes.
During my trial-and-error process, I've resorted to climbing higher and higher in the pytest hook chain: I started with pytest_generate_tests to create a test for each tenant with an associated fixture, but the teardown failed, and I've ended up with the following idea:
def pytest_runtestloop(session):
    for tenant in settings.TENANTS.keys():
        with set_current_tenant(tenant):
            with environ({'DJANGO_CONFIGURATION': f'Test{tenant.capitalize()}Config'}):
                session.config.pluginmanager.getplugin("main").pytest_runtestloop(session)
    return True
Although this does not work (since django-configurations loads the settings during the earlier pytest_load_initial_conftests phase), this example should give an idea of what I am trying to achieve.
The big roadblock: The default database connection needs to point to the current tenant's database before any fixtures are loaded and after flush has run.
I have disabled pytest-django's default session fixture mechanism and plan on using an external database for tests:
@pytest.fixture(scope='session')
def django_db_setup():
    pass
I could have a wrapper python script that calls pytest multiple times with the right config, but I would lose a lot of nice tooling.
I want to implement a pool, like a database connection pool, in my web application. My application is written in Django.
The problem is that, as I understand it, every time an HTTP request comes in, my code is loaded and run through. So if I write some code to initialize a pool, that code will run per HTTP request, and the pool will be re-initialized on every request, which makes it pointless.
So how should I write this?
Your understanding of how things work is wrong, unfortunately. The way Django runs depends very much on how you deploy it, but in almost all circumstances it does not load code or initialize globals on every request. Certainly, uWSGI does not behave that way; it runs a set of long-lived workers that persist across many requests.
In effect, uWSGI is already a connection pool. In other words, you are trying to solve a problem that does not exist.
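If you still want an application-level pool for some other resource, the standard pattern follows from the above: a module-level object is created once per worker process at import time and then reused across every request that worker handles. A minimal sketch (the factory here is a placeholder for whatever expensive resource you would actually pool):

```python
# myapp/pool.py -- module-level state is created once per worker process
# (assumption: a long-lived WSGI deployment such as uWSGI or gunicorn),
# NOT once per request.
import queue

class ConnectionPool:
    """A tiny fixed-size pool backed by a thread-safe queue."""
    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        # Blocks until a pooled object is available.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Created once when the module is first imported by a worker; every
# request handled by that worker reuses the same pool instance.
pool = ConnectionPool(factory=lambda: object(), size=2)
```

Any view can then do `conn = pool.acquire()` / `pool.release(conn)` without paying the construction cost per request.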
I have created a testsuite which has 2 testcases that are recorded using selenium in firefox. Both of those test cases are in separate classes with their own setup and teardown functions, because of which each test case opens the browser and closes it during its execution.
I am not able to use the same web browser instance for every testcase called from my test suite. Is there a way to achieve this?
This is how it is supposed to work.
Tests should be independent, otherwise they can influence each other.
I think you would want a clean browser each time rather than having to clear the session/cookies yourself; maybe not now, but once you have a larger suite you will for sure.
Each scenario will start the browser and close it at the end; you would have to research which methods do this and override them, which is not recommended at all.
I'm trying to debug an error where python import statements randomly fail, at other times they run cleanly.
This is an example of the exceptions I see. Sometimes I'll see this one, sometimes I'll see another one in a different module, though it seems to always hit in one of 4 modules.
ERROR:root:/home/user/projecteat/django/contrib/auth/management/__init__.py:25: RuntimeWarning: Parent module 'django.contrib.auth.management' not found while handling absolute import
from django.contrib.contenttypes.models import ContentType
Because of the random nature, I'm almost certain it's a threading issue, but I don't understand why I would get import errors, so I'm not sure what to look for in debugging. Can this be caused by filesystem contention if different threads are trying to load the same modules?
I'm trying to get Django 1.4's LiveServerTestCase working on Google App Engine's development server. The main thread runs django's test framework. When it loads up a LiveServerTestCase based test class, it spawns a child thread which launches the App Engine dev_appserver, which is a local webserver. The main thread continues to run the test, using the Selenium driver to make HTTP requests, which are handled by dev_appserver on the child thread.
The test framework may run a few tests in the LiveServerTestCase based class before tearing down the testcase class. At teardown, the child thread is ended.
It looks like the exceptions are happening in the child (HTTP server) thread, mostly between tests within a single testcase class.
The code for the App Engine LiveServerTestCase class is here: https://github.com/dragonx/djangoappengine/blob/django-1.4/test.py
It's pretty hard to provide all the debugging info required for this question. I'm mostly looking for suggestions as to why python import statements would give RuntimeWarning errors.
I have a partial answer to my own question. What's going on is that I have two threads running.
Thread 1 is running the main internal function inside dev_appserver (dev_appserver_main) which is handling HTTP requests.
Thread 2 is running the Selenium based testcases. This thread will send commands to the browser to do something (which then indirectly generates an HTTP request and re-enters in thread 1). It then either issues more requests to Selenium to check status, or makes a datastore query to check for a result.
I think the problem is that while handling every HTTP request, Thread 1 (dev_appserver) changes the environment so that certain folders are not accessible (folders excluded in app.yaml, as well as parts of the environment that are not part of App Engine). If Thread 2 happens to run some code during this window, certain imports may fail if the modules live in those folders.
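If that's what is happening, one mitigation is to serialize the two racy sections with a shared lock, so the environment is never restricted while the test thread may be importing. This is only a sketch, under the assumption that you can hook both threads; handle_request and run_test_step are hypothetical names for those hook points:

```python
import threading

# One lock shared by both threads (assumption: you control both hook points).
env_lock = threading.Lock()

def handle_request(mutate_env, dispatch):
    """Thread 1 (dev_appserver side): hold the lock while the request
    environment is altered and the request is dispatched."""
    with env_lock:
        mutate_env()
        return dispatch()

def run_test_step(step):
    """Thread 2 (test runner side): hold the lock around any step that may
    trigger imports, so it never runs against a restricted environment."""
    with env_lock:
        return step()
```

This trades some parallelism for determinism: requests and import-triggering test steps can no longer interleave, which is exactly the interleaving the RuntimeWarnings point at.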