I'm trying to use CircleCI to run automated tests. I have a config.yml file that contains secrets which I don't want to upload to my repo, for obvious reasons.
So I've created a set of environment variables in the Project Settings section:
VR_API_KEY = some_value
CLARIFAI_CLIENT_ID = some_value
CLARIFAI_CLIENT_SECRET = some_value
IMAGGA_API_KEY = some_value
IMAGGA_API_SECRET = some_value
In config.yml I've removed the actual values, so it looks like this:
visual-recognition:
  api-key: ${VR_API_KEY}
clarifai:
  client-id: ${CLARIFAI_CLIENT_ID}
  client-secret: ${CLARIFAI_CLIENT_SECRET}
imagga:
  api-key: ${IMAGGA_API_KEY}
  api-secret: ${IMAGGA_API_SECRET}
I have a test that basically creates the API client instances and configures everything. This test fails because it looks like CircleCI is not substituting the values. Here is the output of some prints (these are printed right after the values are read from config.yml):
-------------------- >> begin captured stdout << ---------------------
Checking tagger queries clarifai API
${CLARIFAI_CLIENT_ID}
${CLARIFAI_CLIENT_SECRET}
COULD NOT LOAD: 'UNAUTHORIZED'
--------------------- >> end captured stdout << ----------------------
The COULD NOT LOAD: 'UNAUTHORIZED' is expected, since invalid credentials make the OAuth dance fail.
So there is no substitution, and therefore all tests will fail. What am I doing wrong here? By the way, I don't have a circle.yml file yet; do I need one?
Any clues? Thanks!
EDIT: In case anyone runs into the same problem, the solution was rather simple: I encrypted the config.yml file as described here:
https://github.com/circleci/encrypted-files
Then in circle.yml I just added an instruction to decrypt it and name the output file config.yml, and that's it:
dependencies:
  pre:
    # update locally with:
    # openssl aes-256-cbc -e -in secret-env-plain -out secret-env-cipher -k $KEY
    - openssl aes-256-cbc -d -in config-cipher -k $KEY >> config.yml
CircleCI also supports environment variables directly (see CircleCI Environment Variables). Instead of putting the value of the environment variable in the code, you go to Project Settings -> Environment Variables and click Add Variable with a name and value. You then access the environment variable normally through its name.
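For example, here is a minimal sketch (variable names taken from the question, PyYAML assumed) of reading the keys straight from the environment in the tests, or of expanding the ${VAR} placeholders after loading config.yml, since YAML loaders do not expand shell-style variables on their own:
import os
import yaml  # assumes PyYAML is available

# Read the secrets straight from the environment set in Project Settings...
clarifai_id = os.environ["CLARIFAI_CLIENT_ID"]         # raises KeyError if unset
clarifai_secret = os.getenv("CLARIFAI_CLIENT_SECRET")  # returns None if unset

# ...or load config.yml and expand the ${...} placeholders explicitly.
with open("config.yml") as f:
    config = yaml.safe_load(f)
api_key = os.path.expandvars(config["visual-recognition"]["api-key"])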
Related
In the workflow .yml we can do
jobs:
  build-conda:
    if: "! contains(toJSON(github.event.commits.*.message), '[skip ci]')"
to skip a build if the commit message contains [skip ci]. Likewise, I'd like to pass an environment variable to the Python test scripts, like so (pseudocode):
if "[my msg]" in github_commit_message:
os.environ["MY_VAR"] = "1"
else:
os.environ["MY_VAR"] = "0"
or just pass the whole github_commit_message to the env var. I'm using GitHub Actions, but Travis is an option.
You can set a workflow environment variable based on whether the commit message contains a certain substring.
For the example below, the variable CONTAINS_PYTHON is set to 'true' if the commit message contains the string [python].
In the run step, the value is printed using python. Note that this assumes it is run on a runner that has python installed and on the PATH. This is the case for ubuntu-latest, but possibly not for self-hosted runners. Therefore, if you get a message like "python not found", make sure to also include the setup-python action.
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      CONTAINS_PYTHON: ${{ contains(toJSON(github.event.commits.*.message), '[python]') }}
    steps:
      - run: python -c 'import os; print(os.getenv("CONTAINS_PYTHON"))'
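Inside the Python test script the variable arrives as the string 'true' or 'false' (not a boolean), so a sketch of the check from the question (MY_VAR is the assumed name) would be:
import os

# CONTAINS_PYTHON is set by the workflow above; it is a string, not a bool.
if os.getenv("CONTAINS_PYTHON") == "true":
    os.environ["MY_VAR"] = "1"
else:
    os.environ["MY_VAR"] = "0"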
I have been tasked with making a custom Python script (since I'm bad with Bash) to run on a remote NRPE client, which recursively counts the number of files in the /tmp directory. This is my script:
#!/usr/bin/python3.5
import os
import subprocess
import sys

file_count = sum([len(files) for r, d, files in os.walk("/tmp")])  # Recursive check of /tmp

if file_count < 1000:
    x = subprocess.Popen(['echo', 'OK -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byteobj to str
    # subprocess.run('exit 0', shell=True, check=True)  # Service OK - exit 0
    sys.exit(0)
elif 1000 <= file_count < 1500:
    x = subprocess.Popen(['echo', 'WARNING -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byteobj to str
    sys.exit(1)
else:
    x = subprocess.Popen(['echo', 'CRITICAL -', str(file_count), 'files in /tmp.'], stdout=subprocess.PIPE)
    print(x.communicate()[0].decode("utf-8"))  # Converts from byteobj to str
    sys.exit(2)
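As an aside, the subprocess/echo round trip isn't strictly needed just to emit the status line; a plainer sketch of the same checks (same thresholds and exit codes, untested) would be:
#!/usr/bin/python3.5
# Sketch: print the Nagios status line directly and exit with the matching code.
import os
import sys

file_count = sum(len(files) for _, _, files in os.walk("/tmp"))  # recursive count of /tmp

if file_count < 1000:
    print("OK - {} files in /tmp.".format(file_count))
    sys.exit(0)
elif file_count < 1500:
    print("WARNING - {} files in /tmp.".format(file_count))
    sys.exit(1)
else:
    print("CRITICAL - {} files in /tmp.".format(file_count))
    sys.exit(2)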
EDIT 1: I tried hardcoding file_count to 1300 and I got WARNING - 1300 files in /tmp. It appears the issue is solely in the Nagios server's ability to read files in the client machine's /tmp.
What I have done:
I have the script in the directory with the rest of the scripts.
I have edited /usr/local/nagios/etc/nrpe.cfg on the client machine with the following line:
command[check_tmp]=/usr/local/nagios/libexec/check_tmp.py
I have edited this /usr/local/nagios/etc/servers/testserver.cfg file on the nagios server as follows:
define service {
    use                     generic-service
    host_name               wp-proxy
    service_description     Files in /tmp
    check_command           check_nrpe!check_tmp
}
The output:
The correct output is: OK - 3 files in /tmp
When I run the script on the client machine as root, I get the correct output.
When I run the script on the client machine as the nagios user, I get the correct output.
The output on Nagios Core APPEARS to be working, but it shows 0 files in /tmp when I know there are more: I made 2 files on the client machine and 1 file on the Nagios server.
The server output for reference:
https://puu.sh/BioHW/838ba84c3e.png
(Ignore the bottom server; any issue solved on wp-proxy will also be fixed on wpreess-gkanc1.)
EDIT 2: I ran the following on the nagios server:
/usr/local/nagios/libexec/check_nrpe -H 192.168.1.59 -c check_tmp_folder
I indeed got a 0 file return. I still don't know how this can be fixed, however.
Check the systemd service file; maybe this variable is set to true :)
PrivateTmp= Takes a boolean argument. If true, sets up a new file system namespace for the executed processes and mounts private /tmp and /var/tmp directories inside it that is not shared by processes outside of the namespace.
This is useful to secure access to temporary files of the process, but makes sharing between processes via /tmp or /var/tmp impossible. If this is enabled, all temporary files created by a service in these directories will be removed after the service is stopped. Defaults to false. It is possible to run two or more units within the same private /tmp and /var/tmp namespace by using the JoinsNamespaceOf= directive, see systemd.unit(5) for details.
This setting is implied if DynamicUser= is set. For this setting the same restrictions regarding mount propagation and privileges apply as for ReadOnlyPaths= and related calls, see above. Enabling this setting has the side effect of adding Requires= and After= dependencies on all mount units necessary to access /tmp and /var/tmp.
Moreover an implicitly After= ordering on systemd-tmpfiles-setup.service(8) is added. Note that the implementation of this setting might be impossible (for example if mount namespaces are not available), and the unit should be written in a way that does not solely rely on this setting for security.
SOLVED!
Solution:
Go to your systemd file for nrpe. Mine was found here:
/lib/systemd/system/nrpe.service
If not there, run:
find / -name "nrpe.service"
and ignore all system.slice results
Open the file with vi/nano
Find a line which says PrivateTmp= (usually second to last line)
If it is set to true, set it to false
Save and exit the file and run the following 2 commands:
systemctl daemon-reload
systemctl restart nrpe.service
Problem solved.
Short explanation: the main reason for this issue is that, with Debian 9.x, some services run under systemd enforce a private /tmp directory by default. So if you have any other programs that have trouble searching or indexing in /tmp, this solution can be tailored to fit.
I have an issue in SCons 2.5.1 related to passing parameters through the environment to Python-based builders.
When an ordinary builder is called, it seems like the result is flagged as dirty if any of the source files or the environment variables passed in have changed.
When using Python function builders (described here http://scons.org/doc/1.2.0/HTML/scons-user/x3524.html), it seems like SCons only cares about the source files.
Here is a minimal artificial example of where it fails. It's two implementations of passing a parameter through the environment and writing it to the target file using the shell. One implementation is just a command string; the other uses Python's subprocess to invoke the shell from a Python function. I use an argument to scons to select which builder to use.
# SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS = {'echo'   : Builder(action='echo $MESSAGE > $TARGET'),
                              'echo_py': Builder(action=echo_fun),
                              })

build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
Here is the result of running the scons script with different parameters:
PS C:\work\code\sconsissue> scons -Q MSG=Hello
echo Hello > test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain
echo HelloAgain > test.file
PS C:\work\code\sconsissue> del .\test.file
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
echo_fun(["test.file"], [])
PS C:\work\code\sconsissue> scons -Q MSG=Hello -Q USE_PYTHON=True
scons: `.' is up to date.
PS C:\work\code\sconsissue> scons -Q MSG=HelloAgain -Q USE_PYTHON=True
scons: `.' is up to date.
In the case of the ordinary builder, SCons detects that the result is dirty when MSG changes (and clean when MSG stays the same), but in the Python function version the target is considered up to date even if MSG has changed.
A workaround would be to put my builder logic in a separate Python script and invoke that script with the environment dependencies as command-line parameters, but that seems convoluted.
Is this the expected behavior or a bug?
Is there an easier workaround than the one I described above where I can keep my build functions in the SConstruct file?
This is expected behavior because there is no way for SCons to know that the function (as written) depends on MESSAGE.
However if you read the manpage
http://scons.org/doc/production/HTML/scons-man.html
You'll see this (Under "Action Objects"):
The variables may also be specified by a varlist= keyword parameter;
if both are present, they are combined. This is necessary whenever you
want a target to be rebuilt when a specific construction variable
changes. This is not often needed for a string action, as the expanded
variables will normally be part of the command line, but may be needed
if a Python function action uses the value of a construction variable
when generating the command line.
...
# Alternatively, use a keyword argument.
a = Action(build_it, varlist=['XXX'])
So if you rewrite as:
# SConstruct
import subprocess

def echo_fun(env, source, target):
    subprocess.check_call('echo %s > %s' % (env['MESSAGE'], str(target[0])), shell=True)
    return None

env = Environment(BUILDERS = {'echo'   : Builder(action='echo $MESSAGE > $TARGET'),
                              'echo_py': Builder(action=Action(echo_fun, varlist=['MESSAGE'])),
                              })

build_fn = env.echo_py if ARGUMENTS.get('USE_PYTHON', False) else env.echo
build_fn(['test.file'], [], MESSAGE=ARGUMENTS.get('MSG', 'None'))
It should behave as you desire.
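The same idea also works without registering a named Builder; here is a minimal sketch (untested, reusing echo_fun from above) that attaches the varlist to an Action passed to env.Command:
# Sketch: MESSAGE is supplied as a construction-variable override on the call,
# and varlist tells SCons the action depends on it.
env.Command('test.file', [],
            Action(echo_fun, varlist=['MESSAGE']),
            MESSAGE=ARGUMENTS.get('MSG', 'None'))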
I have:
a library that does [Stuff]
a swagger API definition, which is roughly #1 with minor differences to map cleanly to a REST service
a Flask app generated from #2 using Swagger-Codegen, which results in Python controller functions roughly one-to-one with #1.
My intent is that the Flask app (all generated code) should only handle mapping the actual REST API and parameter parsing to match the API spec coded in swagger. After any parameter parsing (again, generated code) it should call directly over to my (non-generated) backend.
My question is: how best to hook these up WITHOUT hand-editing the generated Python/Flask code? (Feedback on my design, or details of a formal design pattern that accomplishes this, would be great too; I'm new to this space.)
Fresh from the generator, I end up with python functions like:
def create_task(myTaskDefinition):
    """
    comment as specified in swagger.json
    :param myTaskDefinition: json blah blah blah
    :type myTaskDefinition: dict | bytes
    :rtype: ApiResponse
    """
    if connexion.request.is_json:
        myTaskDefinition = MyTaskTypeFromSwagger.from_dict(connexion.request.get_json())
    return 'do some magic!'  # swagger codegen inserts this string :)
On the backend I have my actual logic:
def create_task_backend(myTaskDefinition):
    # hand-coded, checked into git: do all the things
    return APIResponse(...)
What is the right way to get create_task() to call create_task_backend()?
Of course, if I make breaking changes to my swagger spec I will have to hand-update the non-generated code regardless; however, there are many reasons I may want to re-generate my API (say, adding to or refining the MyTaskTypeFromSwagger class, or not checking the generated code into git at all), and if I have to hand-edit the generated API code, then all those edits are blown away with each re-generation.
Of course I could script this with a simple grammar in e.g. pyparsing, but while this is my first time hitting this issue, it seems likely it's been widely solved already!
The following approach worked for me:
I created three directories:
src - for my code,
src-gen - for the swagger-generated code,
codegen - in which I have put a script that generates the server, along with a few tricks.
I copied all the templates (available in the swagger build) to codegen/templates and edited controller.mustache to refer to src/server_impl, so it can use my own code. The edit uses the template language, so it is generic. Still, it is not perfect (I would change a few naming conventions), but it does the job. So, first add this to controller.mustache:
from {{packageName}}.server_impl.controllers_impl import {{classname}}_impl
then, instead of return 'do some magic!', add the following:
return {{classname}}_impl.{{operationId}}({{#allParams}}{{paramName}}{{^required}}=None{{/required}}{{#hasMore}}, {{/hasMore}}{{/allParams}})
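With those two template edits, a regenerated controller ends up delegating to the hand-written implementation. Roughly (a sketch; packageName, classname and operationId rendered here with the assumed values swagger_server, TaskController and create_task):
# Sketch of the rendered controller after regeneration (names are assumed).
from swagger_server.server_impl.controllers_impl import TaskController_impl

def create_task(myTaskDefinition):
    return TaskController_impl.create_task(myTaskDefinition)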
Script:
The src directory has a server_impl directory.
The script creates a symbolic link so that server_impl can be imported as a Python module:
cd ../src-gen/swagger_server/
ln -s ../../src/server_impl/
cd ../../codegen
java -jar swagger-codegen-cli.jar generate \
-i /path_to_your_swagger definition.yaml \
-l python-flask \
-o ../src-gen \
-t ./templates
cd ../src-gen/
python3 -m swagger_server
I was tempted to use swagger-codegen before and ran into the same conundrum. Everything is fine until you update the spec. Although you can use custom templates, this just seemed like a lot of overhead and maintenance, when all I want is a design first API.
I ended up using connexion instead, which uses the swagger specification to automatically handle routing, marshaling, validation, etc. Connexion is built on Flask, so you don't need to worry about switching frameworks or anything; you just get the benefit of portions of your application being handled automatically from swagger, instead of having to maintain auto-generated code.
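A minimal sketch of the connexion wiring (file and module names are assumed): the operationId in the spec points at your own module, so there is no generated controller layer to maintain:
# app.py - sketch; assumes swagger.yaml sits next to this file and each
# operation in it sets e.g. operationId: api.handlers.create_task
import connexion

app = connexion.App(__name__, specification_dir='.')
app.add_api('swagger.yaml')

if __name__ == '__main__':
    app.run(port=8080)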
For now I am working around this by doing the build in these steps:
run the codegen
sed-script the generated code to fix trivial stuff like namespaces
hand-edit the files so that, instead of returning 'do some magic' (that's the string all the generated controller endpoints return), they simply call the corresponding function in my 'backend'
use git format-patch to make a patch of the preceding changes, so that when I re-generate the code the build can automatically apply the changes.
Thus, I can add new endpoints and only have to hand-code the calls to my backend roughly once. Instead of using patch files, I could do this directly by writing a pyparsing grammar for the generated code and using the parsed generated code to create the calls to my backend ... but that would take longer, so I did this all as a quick hack.
This is far from optimal; I'm not going to mark this as accepted, as I'm hoping someone will offer a real solution.
The workflow I came to.
The idea is to generate the code, then extract the swagger_server package into the project directory. But separately, keep the controllers you are coding in a separate directory or (as I do) in the project root, and merge them with the generated ones after each generation using git merge-file. Then you need to inject your fresh controller code into swagger_server/controllers, i.e. before starting the server.
project
+-- swagger_server
|   +-- controllers
|       +-- controller.py <- this is generated
+-- controller.py <- this is where you type your code
+-- controller.py.common <- common ancestor, see below
+-- server.py <- your server code, if any
So the workflow is the following:
Generate the code and copy swagger_server to your project directory, completely overwriting the existing one
Backup controller.py and controller.py.common from the project root
git merge-file controller.py controller.py.common swagger_server/controllers/controller.py
Make swagger_server/controllers/controller.py the new common ancestor: copy it to controller.py.common, overwriting the existing one
Feel free to automate all of this with a shell script, e.g.:
#!/bin/bash
# Generate the Swagger server and client stubs based on the specification, then merge them into the project.
# Use carefully! Commit always before using this script!
# The following structure is assumed:
# .
# +-- my_client
# | +-- swagger_client
# +-- my_server
# | +-- swagger_server
# +-- merge.sh <- this script
read -p "Have you commited the project??? " -n 1 -r
if [[ ! $REPLY =~ ^[Yy]$ ]]; then echo 'Commit first!'; exit 1; fi
rm -rf swagger-python-client
rm -rf swagger-python-server
java -jar swagger-codegen-cli.jar generate -i swagger.yaml -l python -o swagger-python-client
java -jar swagger-codegen-cli.jar generate -i swagger.yaml -l python-flask -o swagger-python-server
# Client - it's easy, just replace swagger_client package
rm -rf my_client/swagger_client
cp -rf swagger-python-client/swagger_client/ my_client
# Server - replace swagger_server package and merge with controllers
rm -rf my_server/.backup
mkdir -p my_server/.backup
cp -rf my_server/swagger_server my_server/.backup
rm -rf my_server/swagger_server
cp -rf swagger-python-server/swagger_server my_server
cd my_server/swagger_server/controllers/
files=$( ls * )
cd ../../..
for f in $files; do
# skip __init__.py
if [ -z "$flag" ]; then flag=1; continue; fi
echo "======== $f"
# initialization
cp -n my_server/swagger_server/controllers/$f my_server/$f.common
cp -n my_server/swagger_server/controllers/$f my_server/$f
# real merge
cp -f my_server/$f my_server/.backup/
cp -f my_server/$f.common my_server/.backup/
git merge-file my_server/$f my_server/$f.common my_server/swagger_server/controllers/$f
cp -f my_server/swagger_server/controllers/$f my_server/$f.common
done
rm -rf swagger-python-client
rm -rf swagger-python-server
Use connexion as @MrName suggested.
I first started using this together with codegen.
openapi-generator generate -i ../myapi.yaml -g python-flask -o .
This generates a directory with the openapi server.
|-- openapi_server
|   |-- controllers
|   |   |-- mytag_controller.py
|   |-- openapi
|   |   |-- my-api.yaml
If you add tags to your paths in the API spec, then a separate <tagname>_controller.py is created for each tag. For each operationId a function is generated.
However, once this is set up, connexion can handle updates to the API spec.
If I add a new path to openapi/my-api.yaml with an operationId of new_func, then I can add new_func() to the existing controller. I don't lose the existing server logic (but I would still back it up beforehand, just in case). I haven't tried radical changes to existing paths yet.
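A minimal sketch of such a hand-added handler (names assumed; the real signature depends on the parameters declared for the new path):
# Added by hand to the existing mytag_controller.py; connexion routes the
# new operationId "new_func" here automatically.
def new_func(body=None):
    return {"status": "ok"}, 200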
I'm managing two server environments that are configured differently. I access the two environments by specifying different SSH configurations on the command line because I need to specify a different User, ProxyCommand, and a list of other options for SSH.
e.g.
ssh oldserver.example.org -F config_legacy
ssh newserver.example.org -F config
To configure and maintain state on my servers, I've been using Ansible (version 1.9.0.1), which reads an SSH configuration file that is specified by a line in its ansible.cfg:
...
ssh_args = -F some_configuration_file
...
The ansible.cfg can be loaded from a number of places:
def load_config_file():
    ''' Load Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''

    p = configparser.ConfigParser()

    path0 = os.getenv("ANSIBLE_CONFIG", None)
    if path0 is not None:
        path0 = os.path.expanduser(path0)
    path1 = os.getcwd() + "/ansible.cfg"
    path2 = os.path.expanduser("~/.ansible.cfg")
    path3 = "/etc/ansible/ansible.cfg"

    for path in [path0, path1, path2, path3]:
        if path is not None and os.path.exists(path):
            try:
                p.read(path)
            except configparser.Error as e:
                print("Error reading config file: \n{0}".format(e))
                sys.exit(1)
            return p
    return None
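For illustration, because ANSIBLE_CONFIG is checked before the cwd ansible.cfg, pointing it at a per-environment file selects that file without touching ./ansible.cfg; a sketch (hypothetical paths):
# Sketch: select a per-environment ansible.cfg via ANSIBLE_CONFIG before
# invoking ansible/ansible-playbook (the path is hypothetical).
import os
import subprocess

os.environ["ANSIBLE_CONFIG"] = "/path/to/ansible_legacy.cfg"
subprocess.check_call(["ansible-playbook", "playbooks/why_i_am_doing_this.yml"])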
I could use this behavior to set an environment variable before each command to load an entirely different ansible.cfg, but that seems messy, as I only need to fiddle with the ssh_args. Unfortunately, Ansible doesn't expose a command-line switch to specify an SSH config.
I'd like to not maintain any modifications to Ansible, and I'd like to not wrap all calls to the ansible or ansible-playbook commands. To preserve the behavior of Ansible's commands, I believe my options are:
a) have the target of ssh_args = -F <<config_file>> be a script that's opened
b) have the target of p.read(path) be a script that gets expanded to generate a valid ansible.cfg
c) just maintain different ansible.cfg files and take advantage of the fact that Ansible picks this file in the order of environmental variable, cwd.
Option C is the only way I can see of accomplishing this. You could have your default/most-used ansible.cfg be the one that is read from the cwd, then optionally set/unset an environment variable that points to the version specifying the ssh_args = -F config_legacy line that you need (ANSIBLE_SSH_ARGS).
The reason for needing an ansible.cfg instead of just passing an env var with SSH options is that Ansible does not honor the User setting in an SSH configuration file; it has already decided who it wants to run as by the time a command kicks off.
Dynamic inventory files (ec2.py) are incredibly poor places to hack in a change, for maintenance reasons, which is why you typically see --user=REMOTE_USER flags; coupled with setting an ANSIBLE_SSH_ARGS="-F some_ssh_config" environment variable, this makes for ugly commands to give to a casual user of an Ansible repo.
e.g.
ANSIBLE_SSH_ARGS="-F other_ssh_config" ansible-playbook playbooks/why_i_am_doing_this.yml -u ubuntu
v.
ansible-playbook playbooks/why_i_am_doing_this.yml -F other_ansible.cfg
Option A doesn't work because the file is opened all at once for loading into Python, per the p.read() above. Not that it matters, because if files could arbitrarily decide to open themselves as scripts, we'd be living in a very scary world.
This is how the ansible.cfg loading looks from a system perspective:
$ sudo dtruss -a ansible ......
74947/0x11eadf: 312284 3 2 stat64("/Users/tfisher/code/ansible/ansible.cfg\0", 0x7FFF55D936C0, 0x7FD70207EA00) = 0 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
74947/0x11eadf: 312316 3 1 read_nocancel(0x5, "# ansible.cfg \n#\n# Config-file load order is:\n# envvar ANSIBLE_CONFIG\n# `pwd`/ansible.cfg\n# ~/.ansible.cfg\n# /etc/ansible/ansible.cfg\n\n# Some unmodified settings are left as comments to encourage research/suggest modific", 0x1000) = 3477 0
74947/0x11eadf: 312308 19 17 open_nocancel("/Users/tfisher/code/ansible/ansible.cfg\0", 0x0, 0x1B6) = 5 0
Option B doesn't work for the same reasons A doesn't: even if you create a mock Python file object with proper read/readline/readlines signatures, the file is still being opened for reading only, not execution.
And if this is the correct repo for OpenSSH, the config file is specified like so:
#define _PATH_HOST_CONFIG_FILE SSHDIR "/ssh_config"
processed like so:
/* Read systemwide configuration file after user config. */
(void)read_config_file(_PATH_HOST_CONFIG_FILE, pw,
host, host_arg, &options,
post_canon ? SSHCONF_POSTCANON : 0);
and read here with an fopen, which leaves no room for "file as a script" shenanigans.
Another option is to set the environment variable ANSIBLE_SSH_ARGS to the arguments you want Ansible to pass to the ssh command.