Pass environment variable to PyTest based on Github commit message - python

In the workflow .yml we can do

jobs:
  build-conda:
    if: "! contains(toJSON(github.event.commits.*.message), '[skip ci]')"

to skip a build if the commit message contains [skip ci]. Likewise, I'd like to pass an environment variable to the Python test scripts, like so (pseudocode):
if "[my msg]" in github_commit_message:
os.environ["MY_VAR"] = "1"
else:
os.environ["MY_VAR"] = "0"
or just pass the whole github_commit_message to the env var. I'm using Github actions, but Travis is an option.

You can set a workflow environment variable based on whether the commit message contains a certain substring.
For the example below, the variable CONTAINS_PYTHON is set to 'true' if the commit message contains the string [python].
In the run step, the value is printed using python. Note that this assumes it is run on a runner that has python installed and on the PATH. This is the case for ubuntu-latest, but possibly not for self-hosted runners. Therefore, if you get a message like "python not found", make sure to also include the setup-python action.
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      CONTAINS_PYTHON: ${{ contains(toJSON(github.event.commits.*.message), '[python]') }}
    steps:
      - run: python -c 'import os; print(os.getenv("CONTAINS_PYTHON"))'
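On the pytest side, the value arrives as an ordinary environment variable (the contains() expression renders as the string 'true' or 'false'), so a test can read it with os.environ or skip on it. A minimal sketch; the module and test names are illustrative, not from the question:

# test_commit_flag.py -- skip a test unless the workflow set CONTAINS_PYTHON to 'true'
import os

import pytest

@pytest.mark.skipif(
    os.getenv("CONTAINS_PYTHON") != "true",
    reason="commit message did not contain [python]",
)
def test_python_specific_behaviour():
    # Only runs when the commit message contained [python].
    assert os.getenv("CONTAINS_PYTHON") == "true"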

Related

How to run github action tests when a python file (.py) anywhere in the project changed?

How can I have a github action that runs pytest when ANY python file (.py file) anywhere in the project changes? This project contains a mix of different languages and I only want to run pytest if a python file changed somewhere in the project (in ANY directory at any level within the project).
name: Test Python Tests
on:
  push:
    paths:
      - what to put here????
jobs:
  build-and-run:
    steps:
      - uses: actions/checkout@v1
      - name: Update Conda environment with "requirements.yml"
        uses: matthewrmshin/conda-action@v1
        with:
          args: conda env update -f ./requirements.yml
      - name: Run "pytest" with the Conda environment
        uses: matthewrmshin/conda-action@v1
        with:
          args: pytest
on:
  push:
    paths:
      - '**.py'
This should do the trick, see Filter pattern cheat sheet
Basically, what you need is the git diff information, so you can read all the changed files from there.
GitHub Actions' push event doesn't include a list of modified files. That means you always have to trigger a workflow run on push and then check for the files that changed via the normal REST API.
https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
Note: The webhook payload available to GitHub Actions does not include the added, removed, and modified attributes in the commit object. You can retrieve the full commit object using the REST API. For more information, see "Get a single commit".
You could use a JavaScript Action in combination with the OctoKit client (https://github.com/actions/toolkit). If you use the one from the toolkit, it will already be authenticated.
OctoKit can be used to make REST calls fairly easily. See the default 200 response at https://docs.github.com/en/rest/reference/repos#get-a-commit
...
"files": [
  {
    "filename": "file1.txt",
    "additions": 10,
    "deletions": 2,
    "changes": 12,
    "status": "modified",
    "raw_url": "https://github.com/octocat/Hello-World/raw/7ca483543807a51b6079e54ac4cc392bc29ae284/file1.txt",
    "blob_url": "https://github.com/octocat/Hello-World/blob/7ca483543807a51b6079e54ac4cc392bc29ae284/file1.txt",
    "patch": "@@ -29,7 +29,7 @@\n....."
  }
]
...
If the files field doesn't contain any .py file, you can cancel the workflow directly from the JS itself:
core.setFailed(error.message);
Here core is the @actions/core package from the toolkit, not the OctoKit client.
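If you would rather keep this logic in Python instead of a JavaScript Action, here is a rough sketch of the same idea using the requests library against the commit endpoint shown above (the token wiring and early-exit behaviour are assumptions, not part of the original answer):

# check_changed_files.py -- sketch: exit early if the pushed commit touched no .py file.
# GITHUB_REPOSITORY and GITHUB_SHA are standard GitHub Actions environment variables;
# GITHUB_TOKEN is assumed to be passed to the step via env from secrets.GITHUB_TOKEN.
import os
import sys

import requests

repo = os.environ["GITHUB_REPOSITORY"]   # e.g. "octocat/Hello-World"
sha = os.environ["GITHUB_SHA"]
token = os.environ["GITHUB_TOKEN"]

resp = requests.get(
    f"https://api.github.com/repos/{repo}/commits/{sha}",
    headers={"Authorization": f"token {token}"},
)
resp.raise_for_status()

changed = [f["filename"] for f in resp.json().get("files", [])]
if not any(name.endswith(".py") for name in changed):
    print("No Python files changed; skipping tests.")
    sys.exit(0)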

ansible test fails because lookup('file', '/path/to/file') returns old content, in gitlab-ci [duplicate]

I wrote some ansible tests, using the assert module. The real task changes a file; the test reads its content and checks if it contains some string.
Everything works fine on a normal VM (Ubuntu EC2 instance). However, it fails in gitlab-ci in a docker container. Please bear with me if I sound confused. I am confused.
The main task and the debugging task look like this:
- name: Disable core dumps
  become: true
  pam_limits:
    comment: " disable core dumps"
    domain: '*'
    limit_item: core
    limit_type: hard
    value: 0

- name: debug file content
  become: true
  vars:
    contents: "{{ lookup('file', '/etc/security/limits.conf') }}"
  debug:
    var: contents
Checking the debug output, I can see that my line * hard core 0 is not in the contents variable. Consequently, a check like this fails:
- name: Assert that the line "* hard core 0" is in limits.conf
  become: true
  vars:
    contents: "{{ lookup('file', '/etc/security/limits.conf') }}"
  assert:
    that:
      contents is search('[^#][*]\s+hard\s+core\s+0.*')
However, this check succeeds:
- name: Get line with core configuration in limits.conf
  shell: grep -o -E '^\*\s+hard\s+core\s+0.*$' /etc/security/limits.conf
  register: core_line
The issue really seems to be that the file lookup doesn't see the file that the others see, the others being humans logging onto the machine, or the grep command.
Again, on a VM the file content is correct and the test succeeds, as expected. However, gitlab-ci spins up some docker container (which seems to be standard) and some virtual machine (which is special to this environment - in this case you won't be able to help me and I apologize for bothering you). Somewhere along the way things get weird, and I get confused.
This is not an issue with the pam_limits module. It works just fine. The same happens when I use the lineinfile module.
The versions are ansible 2.6.4, python 2.7.6, GitLab Community Edition 10.8.7.
Lookups are performed on the Ansible control host (the host where the ansible-playbook binary is executed). In contrast, modules are executed on the targeted host (the one named in the hosts: foobar statement of a play).
If you want to use data from the remote host, you can use the fetch module, or just command: cat (or command: grep) and register the result, as in the sketch below.
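For example, a minimal sketch of the register-based check (task names and the exact pattern are illustrative):

- name: Read the core dump line on the remote host
  command: grep -E '^\*\s+hard\s+core\s+0' /etc/security/limits.conf
  register: core_line
  changed_when: false

- name: Assert that the line "* hard core 0" is present
  assert:
    that:
      - core_line.stdout is search('hard\s+core\s+0')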

Scons ignores the environment for dependency tracking when using python function builders

I have an issue in scons 2.5.1 related to passing parameters through the environment to python based builders.
When an ordinary builder is called it seems like the result is flagged as dirty if any of the source files or the environment variables passed in have changed.
When using python function builders (described here http://scons.org/doc/1.2.0/HTML/scons-user/x3524.html) it seems like scons only cares about the source files.
Here is a minimal artificial example of where it fails. It's two implementations of passing a parameter through the environment and writing it to the target file using the shell. One implementation is just a command string, the other uses python subprocess to invoke it in a python function. I use an argument to scons to select what builder to use.
#SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS={'echo': Builder(action='echo $MESSAGE > $TARGET'),
                            'echo_py': Builder(action=echo_fun),
                            })
build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
Here is the result of running the scons script with different parameters:
PS C:\work\code\sconsissue> scons -Q MSG=Hello
echo Hello > test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain
echo HelloAgain > test.file
PS C:\work\code\sconsissue> del .\test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
echo_fun(["test.file"], [])
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain -Q USE_PYTHON=True
scons: `.' is up to date.
In the case of using an ordinary builder it detects that the result is dirty when MSG changes (and clean when MSG stays the same), but in the python function version it considers the target up to date even if MSG changed.
A workaround for this would be to put my builder scripts in a separate python script and invoke that python script with the environment dependencies as command line parameters but it seems convoluted.
Is this the expected behavior or a bug?
Is there an easier workaround than the one I described above where I can keep my build functions in the SConstruct file?
This is expected behavior because there is no way for SCons to know that the function (as written) depends on MESSAGE.
However if you read the manpage
http://scons.org/doc/production/HTML/scons-man.html
You'll see this (Under "Action Objects"):
The variables may also be specified by a varlist= keyword parameter;
if both are present, they are combined. This is necessary whenever you
want a target to be rebuilt when a specific construction variable
changes. This is not often needed for a string action, as the expanded
variables will normally be part of the command line, but may be needed
if a Python function action uses the value of a construction variable
when generating the command line.
...
# Alternatively, use a keyword argument.
a = Action(build_it, varlist=['XXX'])
So if you rewrite as:
#SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS={'echo': Builder(action='echo $MESSAGE > $TARGET'),
                            'echo_py': Builder(action=Action(echo_fun, varlist=['MESSAGE'])),
                            })
build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
It should behave as you desire.

use environment variables in CircleCI

I'm trying to use CircleCI to run automated tests. I have a config.yml file that contains secrets that I don't want to upload to my repo for obvious reasons.
Thus I've created a set of environment variables in the Project Settings section:
VR_API_KEY = some_value
CLARIFAI_CLIENT_ID = some_value
CLARIFAI_CLIENT_SECRET = some_value
IMAGGA_API_KEY = some_value
IMAGGA_API_SECRET = some_value
The config.yml (with the actual values removed) looks like this:
visual-recognition:
  api-key: ${VR_API_KEY}
clarifai:
  client-id: ${CLARIFAI_CLIENT_ID}
  client-secret: ${CLARIFAI_CLIENT_SECRET}
imagga:
  api-key: ${IMAGGA_API_KEY}
  api-secret: ${IMAGGA_API_SECRET}
I have a test that basically creates the API client instances and configures everything. This test fails because it looks like CircleCI is not correctly substituting the values. Here is the output of some prints (this is just when the values are read from config.yml):
-------------------- >> begin captured stdout << ---------------------
Checking tagger queries clarifai API
${CLARIFAI_CLIENT_ID}
${CLARIFAI_CLIENT_SECRET}
COULD NOT LOAD: 'UNAUTHORIZED'
--------------------- >> end captured stdout << ----------------------
The COULD NOT LOAD: 'UNAUTHORIZED' is expected, since invalid credentials lead to an OAuth dance failure. Meaning there is no substitution, and therefore all tests will fail. What am I doing wrong here? By the way, I don't have a circle.yml file yet... do I need one?
Thanks!
EDIT: If anyone runs into the same problem, the solution was rather simple: I simply encrypted the config.yml file as depicted here
https://github.com/circleci/encrypted-files
Then in circle.yml just add an instruction to decrypt it and name the output file config.yml... and that's it!
dependencies:
  pre:
    # update locally with:
    # openssl aes-256-cbc -e -in secret-env-plain -out secret-env-cipher -k $KEY
    - openssl aes-256-cbc -d -in config-cipher -k $KEY >> config.yml
CircleCI also supports putting in environment variables (CircleCI Environment Variables). Instead of putting the value of the environment variable in the code, you go to project settings -> Environment Variables. Then just click add variable with name and value. You access the environment variable normally through the name.
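If the tests keep reading the credentials from config.yml, note that a plain YAML loader does not expand ${...} placeholders by itself; the substitution has to happen in your code. A small sketch of doing it at load time (using PyYAML is an assumption; the file name and keys are taken from the question):

# load_config.py -- sketch: expand ${ENV_VAR} placeholders when loading config.yml.
# Assumes the variables are defined in CircleCI's Project Settings, so they are
# present in os.environ during the build.
import os

import yaml

with open("config.yml") as fh:
    raw = fh.read()

config = yaml.safe_load(os.path.expandvars(raw))
print(config["clarifai"]["client-id"])  # real value instead of the literal '${CLARIFAI_CLIENT_ID}'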

How do I tell fabric which role current site has?

I have just one command in fabfile.py:
@roles('dev')
def test():
    local('...')
Now, I can use --role=dev in every command, but this is extremely stupid.
What I want is to install my project in a host once, with a certain role, then use it without repeating this parameter.
I typically include the following in my fabfile.py:
if not len(env.roles):
    env.roles = ["test"]

This says if env.roles is not defined (via the command line for instance) that it should be defined as "test" in my case. So in your case I would alter the above to substitute dev for test and thus you would have:

if not len(env.roles):
    env.roles = ["dev"]
By doing this you should find that you get the behavior you are looking for while providing you the ability to override if you so desire at any point in the future.
EDIT: I'm editing this to include a small example fabfile.py and explanation of usage.
env.roledefs = {
    'test': ['test.fabexample.com'],
    'stage': ['stage.fabexample.com'],
    'prod': ['web01.fabexample.com', 'web02.fabexample.com', 'web03.fabexample.com'],
}

# default role will be test
env.roles = ['test']

def git_pull():
    run("git pull")

def deploy():
    target = "/opt/apps/FOO"
    with cd(target):
        git_pull()
        sudo("service apache2 restart")
Now this fabfile will allow me to deploy code to any of three different environments: "test", "stage", or "prod". I select which environment I want to deploy to via the command line:
fab -R stage deploy
or,
fab --role=stage deploy
If I do not specify a role, fabric will default to 'test' due to env.roles being set. Note that fabric isn't used to do anything to the local box; instead it acts on the remote box (or boxes) as defined in env.roledefs, although with some modifications it could be made to work locally as well.
Typically the fabric command is used from a development box to perform these operations remotely on the testing, staging, or production boxes, therefore specifying the role via the command line is not "extremely stupid" but is by design in this case.
You can use env.roledefs to associate roles with groups of hosts.
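For completeness, a minimal sketch of tying a task to a role with the @roles decorator, so that --role is no longer needed on the command line (Fabric 1.x API; the host name is a placeholder):

# fabfile.py -- sketch only; dev.example.com is a placeholder host
from fabric.api import env, roles, run

env.roledefs = {
    'dev': ['dev.example.com'],
}

@roles('dev')
def test():
    # runs on every host in the 'dev' role when invoked as: fab test
    run('...')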
