Result Execution and Presentation when Blackbox Testing - Python

I have developed a Blackbox Test Environment in Python 3.2 for testing a piece of hardware. In this environment I have a TestExecution.py module where I run my tests as follows:
while True:
    TestWithRestart("Test122")
    TestWithRestart("Test123", keys="invalid_keys.dat")
    TestWithOneComPort("Test200", keys="invalid_keys.dat")
    TestWithTwoComPorts("Test200")
    TestWithTwoComPorts("Test200", ppc_simulation_script="Test200.pcc")
    TestWithNoComPort()
    TestTime("Test500")
    Test600()
    TestWithComPortNoise("Test600")
    TestWithComPortInteruption("Test601")
Each hardware release I test is represented on my PC by its own Test Environment folder. This folder contains logs, keys and a TestExecution.py. Each Test Case has its own results folder and in this folder I have log folders for each execution of the test.
It's also possible that I need to design new tests for a new hardware release; in that case it can take numerous attempts until I get a test working properly.
At the moment I establish the Pass/Fail status of a test by manually checking my log files. The next improvement will be to automate the process of deciding whether a test passed; I will write separate classes for this, and that work will be ongoing.
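For illustration only, here is a minimal sketch of what such a checker class might look like, assuming the logs are plain text and a failure shows up as a known pattern; the class name, patterns and example path are hypothetical:
import re

class LogResultChecker:
    """Scan a test run's log file for failure patterns and report Pass/Fail."""

    # Hypothetical patterns; adjust to whatever your logs actually contain.
    FAILURE_PATTERNS = [re.compile(r"ERROR"), re.compile(r"TIMEOUT")]

    def check(self, log_path):
        with open(log_path) as log_file:
            for line in log_file:
                if any(p.search(line) for p in self.FAILURE_PATTERNS):
                    return "FAIL"
        return "PASS"

# e.g. result = LogResultChecker().check("Test122/run_001/console.log")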
I'm wondering if I can integrate my environment with Continuous Integration software, with a view to presenting both test execution and results in a nice graphical form. It would also be nice to be able to select which tests I wish to execute. What open source software would you recommend?
Thanks,
Barry

Jenkins. For example, you can dump your test results in JUnit XML format and Jenkins will automatically produce nice graphs.
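To illustrate the JUnit XML route, here is a minimal sketch using only the standard library; the suite and test names are made up, and Jenkins just needs to be pointed at the resulting file in its JUnit test result report settings:
import xml.etree.ElementTree as ET

def write_junit_report(results, path="results.xml"):
    """results: list of (test_name, passed, message) tuples."""
    failures = sum(1 for _, passed, _ in results if not passed)
    suite = ET.Element("testsuite", name="hardware_blackbox",
                       tests=str(len(results)), failures=str(failures))
    for name, passed, message in results:
        case = ET.SubElement(suite, "testcase", classname="blackbox", name=name)
        if not passed:
            ET.SubElement(case, "failure", message=message).text = message
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

# e.g. write_junit_report([("Test122", True, ""), ("Test200", False, "no response on COM1")])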
Plugins depend on your needs, of course, but here is a list of the essential plugins plus my favorites (some of them are bundled in the basic package):
Ant
A version control integration plugin (like Subversion, depends on what you are using)
Parameterized Trigger Plugin
Build Timeout Plugin
Log Parser Plugin
Regex Email Plugin
Artifact Deployer Plugin
Extended e-mail Plugin
As a Python programmer you will also benefit greatly from the Python Jenkins API Wrapper.
In general, however, be careful with plugins: sometimes they are unstable and/or don't function properly. A look at a plugin's revision history can usually tell you whether it is well maintained.
You may install Jenkins locally on your machine and play with it for a few days before deciding if it fits your needs.

Related

Package python code dependencies for remote execution on the fly

My situation is as follows. We have:
an "image" with all our dependencies in terms of software + our in-house python package
a "pod" in which such image is loaded on command (kubernetes pod)
a python project which has some uncommitted code of its own which leverages the in-house package
Also, please assume you cannot work on the machine (or cluster) directly (say, via a remote SSH interpreter). The cluster is multi-tenant and we want to optimize its use as much as possible, so no idle time between trials.
Also, for now, forget security issues: everything is locked down on our side, so there are no concerns there.
Essentially, we want to "distribute" the workload remotely, i.e. a script.py that is unfeasible to run on our local machines, without being constrained by git commits, so that we can do it "on the fly". This is necessary because all of the changes are experimental in nature (think ETL/pipeline kinds of analysis): we want to be able to experiment at scale, but without being bound to git.
I tried dill, but I could not manage to make it work (probably due to the structure of the code). Ideally, I would like to replicate the concept MLeap applied to ML pipelines for Spark, but on a much smaller scale: basically packaging, with little to no constraints.
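For reference, the dill-based route usually looks something like the sketch below, assuming the experimental code can be reduced to a callable: serialize it locally, copy the file to the pod (e.g. with kubectl cp), then load and run it there. All names and paths are purely illustrative.
import dill

def run_pipeline(input_path):
    # your experimental, uncommitted analysis code
    print("processing", input_path)

# On the local machine: serialize the callable (dill, unlike pickle,
# can serialize functions defined interactively or in __main__).
with open("payload.pkl", "wb") as fh:
    dill.dump(run_pipeline, fh)

# On the pod, after copying payload.pkl over:
with open("payload.pkl", "rb") as fh:
    job = dill.load(fh)
job("/data/input.parquet")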
What would be the preferred route for this use case?

How do you write a unit test for a method that connects to a web service?

I'm using py.test to create unit tests for my application, but I am stuck with a problem.
I create automated web software, so a lot of my methods connect to external servers. I don't want to do this within the test, instead I would rather store the HTML source and test against that.
The question is how do I do this? For example where do I store the test data? Is there anything within py.test that can aid in storing/testing offline data?
The general solution is to use mocking: replace the library that calls out to the web service with something that acts like that library but returns test versions of normal results.
Use the unittest.mock library to do the mocking; it comes with Python 3.3 and up, or is available as a backport for older Python releases.
Just add a new package to your tests package (where all your unit tests are stored) that handles the 'fixtures': the test data to be produced for certain arguments.
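A minimal sketch of how this can look with py.test and unittest.mock, assuming a hypothetical myapp.scraper module whose fetch_page() does the actual network call, and HTML fixtures stored next to the tests:
import os
from unittest import mock   # on Python < 3.3: pip install mock, then `import mock`

import myapp.scraper        # hypothetical module under test

FIXTURE_DIR = os.path.join(os.path.dirname(__file__), "fixtures")

def load_fixture(name):
    with open(os.path.join(FIXTURE_DIR, name)) as fh:
        return fh.read()

def test_parse_title_uses_offline_html():
    html = load_fixture("login_page.html")
    # Replace the network call with the stored HTML.
    with mock.patch("myapp.scraper.fetch_page", return_value=html):
        assert myapp.scraper.parse_title("http://example.com/login") == "Login"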

How do you distribute Python scripts?

I have a server which executes Python scripts from a certain directory path. Incidentally, this path is a checkout of the SVN trunk version of the scripts. However, I get the feeling that this isn't the right way to provide and update scripts on a server.
Do you suggest other approaches? (compile, copy, package, ant etc.)
In the end a web server will execute some Python script with parameters. How do I do the update process?
I also have trouble deciding how best to handle updated versions that should only apply to new projects on the server. That is, if I update the Python scripts, only newly created web jobs should pick up that change. Do I deliver to one of many directories that keep track of versions, and have the server pick the right one?
EDIT: The web server is basically an interface that runs some data analysis. That analysis is done by the actual scripts, which take some parameters and mingle data. I don't really change the web interface; I only need to update the data scripts stored on the web server. Indeed, in some more advanced version the web server should also pick the right version of my data scripts, but at the moment I have no idea which way would be easiest.
The canonical way of distributing Python code/functionality is by using a PyPI-compliant package manager.
A list of available PyPI implementations on python.org:
http://wiki.python.org/moin/PyPiImplementations
Instructions on setting up and using EggBasket:
http://chrisarndt.de/projects/eggbasket/#installation
Instructions on installing ChiShop:
http://justcramer.com/2011/04/04/setting-up-your-own-pypi-server/
Note that for this to work you need to distribute your code as "Eggs"; you can find out how to do this here: http://peak.telecommunity.com/DevCenter/setuptools
A great blog post on the usage of eggs and the different parts in packaging: http://mxm-mad-science.blogspot.com/2008/02/python-eggs-simple-introduction.html
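For completeness, the packaging side is just a small setup script; a minimal sketch with setuptools follows (the name, version, dependencies and entry point are placeholders). Running `python setup.py bdist_egg` then produces an egg you can upload to your private index.
from setuptools import setup, find_packages

setup(
    name="analysis-scripts",                 # placeholder project name
    version="1.2.0",
    packages=find_packages(),
    install_requires=["numpy"],              # whatever your scripts actually need
    entry_points={
        "console_scripts": [
            "run-analysis = analysis.main:main",   # hypothetical entry point
        ],
    },
)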

Python: tool to keep track of deployments

I'm looking for a tool to keep track of "what's running where". We have a bunch of servers, and on each of those a bunch of projects. These projects may be running on a specific version (hg tag/commit nr) and have their requirements at specific versions as well.
Fabric looks like a great start to do the actual deployments by automating the ssh part. However, once a deployment is done there is no overview of what was done.
Before reinventing the wheel I'd like to check here on SO as well (I did my best with Google but could be looking for the wrong keywords). Is there any such tool already?
(In practice I'm deploying Django projects, but I'm not sure that's relevant for the question; anything that keeps track of pip/virtualenv installs or server state in general should be fine)
many thanks,
Klaas
==========
EDIT FOR TEMP. SOLUTION
==========
For now, we've chosen to simply store this information in a simple key-value store (in our case: the filesystem) that we take great care to back up (in our case: using a DVCS). We keep track of this store with the same deployment tool that we use to do the actual deploys (in our case: Fabric).
Passwords are stored inside a TrueCrypt volume that's stored inside our key-value store.
==========
I will still gladly accept any answer when some kind of Open Source solution to this problem pops up somewhere. I might share (part of) our solution somewhere myself in the near future.
pip freeze gives you a listing of all installed packages. Bonus: if you redirect the output to a file, you can use it as part of your deployment process to install all those packages (pip can programmatically install all packages from the file).
I see you're already using virtualenv. Good. You can run pip freeze -E myvirtualenv > myproject.reqs to generate a dependency file that doubles as a status report of the Python environment.
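To turn that into a "what's running where" record, one low-tech option is to capture the freeze output per deployment from a small script run at deploy time; the directory layout and naming below are just an example, and the snapshot directory could be the backed-up filesystem store mentioned in the question's edit.
import datetime
import socket
import subprocess

def record_deployment(project, snapshot_dir="/var/deploys"):
    """Write a timestamped pip-freeze snapshot for this host and project."""
    frozen = subprocess.check_output(["pip", "freeze"]).decode("utf-8")
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    filename = "{0}-{1}-{2}.reqs".format(socket.gethostname(), project, stamp)
    path = "{0}/{1}".format(snapshot_dir, filename)   # assumes snapshot_dir already exists
    with open(path, "w") as fh:
        fh.write(frozen)
    return path

# e.g. record_deployment("myproject")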
Perhaps you want something like Opscode Chef.
In their own words:
Chef works by allowing you to write recipes that describe how you want a part of your server (such as Apache, MySQL, or Hadoop) to be configured. These recipes describe a series of resources that should be in a particular state - for example, packages that should be installed, services that should be running, or files that should be written. We then make sure that each resource is properly configured, only taking corrective action when it's necessary. The result is a safe, flexible mechanism for making sure your servers are always running exactly how you want them to be.
EDIT: Note that Chef is not a Python tool; it is a general-purpose tool, written in Ruby (it seems). But it is capable of supporting various "cookbooks", including ones for installing/maintaining Python apps.

Syntax for using mr.ripley for benchmarking

I have a Plone 3.3.5 site that I'm migrating to plone.app.blob for BLOB storage. I'm looking to measure the difference in performance and resource usage by replaying requests to the site, pre-migration and post-migration.
I found that mr.ripley comes with its own buildout, and I used that to install it. That buildout contains a section which creates a script at bin/replay, which is configured by some parameters in the buildout.cfg. The included parameters look like they should work for my instance, as I'm running on port 8080 as well.
I copied one of my (smaller) apache logs into the base directory of my mr.ripley buildout and chowned it so that my zope user can read it. Then I try to run it like this:
time bin/replay mysite.com_access.log
It seems to run (doesn't produce any errors or drop me back into the shell) however I don't see any signs that it's loading up the server. My RAM and CPU usage in top still look like the machine is idling.
Many hours later the process still does not seem to have completed. I ran it using screen, detached and returned to the session several times, but it just seems to be stuck.
Any recommendations as to what I might be missing?
I've performed before-and-after load testing to evaluate architecture changes. To do this we used JMeter. We took Apache logs that represented the typical use we were after; JMeter allows these to be replayed. In addition, it will simulate cookies/sessions and browser cache responses to make the requests even more realistic.
Then we built a buildout to deploy JMeter and its configuration to several test nodes and let it run.
I know this doesn't answer your direct question but it's an alternative approach.
