I am using Coverage.py
My use case is as follows:
My test is composed of several different processes, each inside its own container. I would like to get a single coverage report for the entire test.
The Python packages that I would like to track are usually installed in the base Python of the containers; in one container they are installed in a virtual environment instead of the base Python.
I output a different .coverage file for each process and would like to:
Combine them (they do share some libraries that I would like to get coverage on)
Report to HTML
The issue is that the paths are local to the container which captured the coverage.
I would like to run combine and report on the host.
I know that for combine I can use [paths] to alias different paths that refer to the same code (I haven't worked with it yet since I am trying to get a single process working first).
I am not sure how to get the correct paths in the report.
When I run report I get a "No source for code" error, which makes sense, but I can't find a way to run the report on the host even though all of the code exists on the host.
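For reference, a minimal sketch of what the [paths] remapping and the host-side commands might look like; the package name and paths below are made up and would need to match the real layout:

# .coveragerc on the host
[paths]
mypkg =
    /home/me/project/src/mypkg
    /usr/local/lib/python3.*/site-packages/mypkg
    /opt/venv/lib/python3.*/site-packages/mypkg

The first entry is the location on the host; the later entries are patterns for the container paths (base Python and the virtual environment) that get rewritten to it. With all of the per-container data files gathered into one directory on the host:

coverage combine coverage-data/
coverage html

combine merges the data files and applies the [paths] remapping, so html can then resolve the remapped file names against the checkout on the host.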
Related
A pretty large Python-based project I'm working on has to deal with a situation some of you might know:
you have a local checkout which your server cannot be run from (for historical reasons), you alter a couple of files, e.g. by editing or via git operations, and then you want to locally 'patch' a running server residing at a different location in the file system.
[Local Checkout, e.g. /home/me/project] = deploy => [Running Environment, e.g. /opt/project]
The 'deployment' process might have to run arbitrary build scripts, copy modified files, maybe restart a running service and so on.
Note that I'm not talking about CI or web-deployment - it's more like you change something on your source files and want to know if it runs (locally).
Currently we do this with a home-grown hierarchy of scripts and want to improve on this approach, e.g. with a make-based approach.
Personally I dislike make for Python projects for a couple of reasons, but in principle the thing I'm looking for could be done with make, i.e. it detects modifications, knows dependencies and it can do arbitrary stuff to meet the dependencies.
I'm now wondering if there isn't something like make for Python projects with the same basic features as make but with 'Python-awareness' (Python bindings, nice handling of command-line args, etc.).
Does this kind of 'deploy my site for development' process have a name I should know? I'm not asking what program I should use but how I should inform myself (examples are very welcome though).
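Purely as an illustration of the kind of tool being asked about, a make-like but Python-aware setup could be expressed as a doit dodo.py; the file names, paths and service name below are invented:

# dodo.py -- run with `doit`; a task is re-run only when its file_dep changes
def task_deploy_app():
    """Copy a modified source file into the running environment and restart the service."""
    return {
        'file_dep': ['/home/me/project/app.py'],
        'targets': ['/opt/project/app.py'],
        'actions': [
            'cp /home/me/project/app.py /opt/project/app.py',
            'systemctl restart project',  # hypothetical service name
        ],
    }

Since tasks are plain Python, arbitrary build steps can also be Python callables instead of shell strings.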
The way I am creating the Allure report is by generating the XMLs with the Allure plugin for pytest, like so:
pytest LoginTest.py --alluredir C:\Users\xxx\Desktop\Allure\xml
This will generate a handful of XML and txt files, as expected.
Next I serve Allure with these XMLs, like so:
allure serve C:\Users\xxx\Desktop\Allure\xml --port 9000
This then kicks off the Allure server and displays the test results correctly, and everything is great.
However, if I run the same test again and make it fail, for example, the server doesn't update automatically; I have to kill it and re-run the second command. Surely there is a way for it to automatically notice new XMLs and update accordingly? Or am I missing something?
I also do not understand how to make use of the other features of Allure (trends, history, etc.). I have looked at GitHub, the documentation, etc., and can't seem to find an answer to help me.
There is no runtime report feature available at the moment.
The right way to use the history features is to use one of the Allure CI plugins (Jenkins/TeamCity/Bamboo). If you need it locally, all you need to do is copy the history folder from the previous report into allure-results and then generate the report as usual.
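A rough sketch of that local workflow, assuming the results are in allure-results and the generated report goes to allure-report (adjust the directory names to your setup):

cp -r allure-report/history allure-results/history
allure generate allure-results -o allure-report --clean
allure open allure-report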
You can use the Docker container to see your reports automatically updated.
https://github.com/fescobar/allure-docker-service/tree/master/allure-docker-python-pytest-example
https://github.com/fescobar/allure-docker-service
I am trying to build python from source in a Docker container. The error I'm getting is:
"This platform's pyconfig.h needs to define PY_FORMAT_LONG_LONG" (more info below)
My question is: how does one properly set PY_FORMAT_LONG_LONG in the build configuration? Which environment variables or compiler flags need to be set? And of course, which is the right part of the fine manual to read?
These builds were working fine in a VM, and are now failing on various base images (CentOS 6/7, Ubuntu 14.04/16.10).
Full error at e.g. https://ci.sagrid.ac.za/job/python-deploy/39/ARCH=x86_64,GCC_VERSION=4.9.4,NAME=python,OS=centos6,SITE=generic,VERSION=2.7.13/console
The configuration for the build is at e.g. https://github.com/SouthAfricaDigitalScience/python-deploy/blob/master/build-2.7.13.sh
(Platform: Linux, specifically Fedora and Red Hat Enterprise Linux 6)
I have an integration test written in Python that does the following:
creates a temporary directory
tells a web service (running under apache) to run an rsync job that copies files into that directory
checks the files have been copied correctly (i.e. the configuration was correctly passed from the client through to an rsync invocation via the web service)
(tries to) delete the temporary directory
At the moment, the last step is failing because rsync is creating the files with their ownership set to that of the apache user, and so the test case doesn't have the necessary permissions to delete the files.
This Server Fault question provides a good explanation for why the cleanup step currently fails given the situation the integration test sets up.
What I currently do: I just don't delete the temporary directory in the test cleanup, so these integration tests leave dummy files around that need to be cleared out of /tmp manually.
The main solution I am currently considering is to add a setuid script specifically to handle the cleanup operation for the test suite. This should work, but I'm hoping someone else can suggest a more elegant solution. Specifically, I'd really like it if nothing in the integration test client needed to care about the uid of the apache process.
Approaches I have considered but rejected for various reasons:
Run the test case as root. This actually works, but needing to run the test suite as root is rather ugly.
Set the sticky bit on the directory created by the test suite. As near as I can tell, rsync is ignoring this because it's set to copy the flags from the remote server. However, even tweaking the settings to only copy the execute bit didn't seem to help, so I'm still not really sure why this didn't work.
Adding the test user to the apache group. As rsync is creating the files without group write permission, this didn't help.
Running up an Apache instance as the test user and testing against that. This has some advantages (in that the integration tests won't require that apache be already running), but has the downside that I won't be able to run the integration tests against an Apache instance that has been preconfigured with the production settings to make sure those are correct. So even though I'll likely add this capability to the test suite eventually, it won't be as a replacement for solving the current problem more directly.
One other thing I really don't want to do is change the settings passed to rsync just so the test suite can correctly clean up the temporary directory. This is an integration test for the service daemon, so I want to use a configuration as close to production as I can get.
Add the test user to the apache group (or httpd group, whichever has group ownership on the files).
With the assistance of the answers to that Server Fault question, I was able to figure out a solution using setfacl.
The code that creates the temporary directory for the integration test now does the following (it's part of a unittest.TestCase instance, hence the reference to addCleanup):
import os, shutil, subprocess, tempfile

local_path = tempfile.mkdtemp().decode("utf-8")  # Python 2; on Python 3, mkdtemp() already returns str
self.addCleanup(shutil.rmtree, local_path)
acl = "d:u:{0}:rwX".format(os.geteuid())  # "d:" makes this a default ACL entry for the current user
subprocess.check_call(["setfacl", "-m", acl, local_path])
The mkdtemp and addCleanup lines just create the temporary directory and ensure it gets deleted at the end of the test.
The last two lines are the new part: they set the default ACL for the directory so that the test user always has read/write access and also gets execute permission on anything with the execute bit set.
I've just started using Jenkins today, so it's entirely possible that I've missed something in the docs.
I currently have Jenkins set up to run unit tests from a local Git repo (via plugin). I have set up the environment correctly (at least, in a seemingly working condition), but have run into a small snag.
I have a single settings.py file that I have excluded from my git repo (it contains a few keys that I'm using in my app). I don't want to include that file in my git repo as I'm planning on open-sourcing the project when I'm done (anyone using the project would need their own keys). I realize that this may not be the best way of doing this, but it's what's done (and it's a small personal project), so I'm not concerned about it.
The problem is that because it's not under git management, Jenkins doesn't pick it up.
I'd like to be able to copy this single file from my source directory to the Jenkins build directory prior to running tests.
Is there a way to do this? I've tried using the Copy to Slave plugin, but it seems like any file I want would first need to be (manually) copied or created in workspace/userContent. Am I missing something?
I would suggest using an environment variable, like MYPROJECT_SETTINGS. When running the job in Jenkins you can then override the default path and point it at wherever you keep the settings file for Jenkins.
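A minimal sketch of that idea, assuming the variable is called MYPROJECT_SETTINGS and the settings file defines module-level names such as SECRET_KEY:

# config.py (sketch): load settings from the path in MYPROJECT_SETTINGS,
# falling back to a settings.py next to this file
import importlib.util
import os

_path = os.environ.get("MYPROJECT_SETTINGS",
                       os.path.join(os.path.dirname(__file__), "settings.py"))
_spec = importlib.util.spec_from_file_location("project_settings", _path)
_settings = importlib.util.module_from_spec(_spec)
_spec.loader.exec_module(_settings)

SECRET_KEY = _settings.SECRET_KEY  # re-export whatever names your app expects

In the Jenkins job you would then export MYPROJECT_SETTINGS=/path/to/jenkins/settings.py in a build step before running the tests.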
The other option, in case you don't want to copy the settings file to each build machine by hand, would be to make a settings.py with some default fake keys, which you can add to your repo, and a local settings file with the real keys, which overrides some options, e.g.:
# settings.py file
SECRET_KEY = 'fake stuff'
try:
    from settings_local import *
except ImportError:
    pass
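The uncommitted settings_local.py then holds nothing but the real values (the key name here is just the example from above):

# settings_local.py -- kept out of git (add it to .gitignore)
SECRET_KEY = 'the real key'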
I am using the Copy Data To Workspace plugin for this; the Copy to Slave plugin should also work, but I found Copy Data To Workspace easier to work with for this use case.
Why not just use "echo my-secret-keys > settings.txt" in Jenkins and adjust your script to read this file so you can add it to the report?
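A sketch of the reading side, assuming the build step wrote a single key into settings.txt in the workspace:

# read the key that the Jenkins build step echoed into settings.txt
with open("settings.txt") as f:
    SECRET_KEY = f.read().strip()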