Is it possible to have a context manager that just keeps the state of the previous run execution? In code:
EDIT: This is not a working solution, just what I expected to work:
with sudo('. myapp'):  # this runs a few things and sets many env variables
    run('echo $ENV1')  # $ENV1 isn't set because the sudo command ran independently
I am trying to run several commands but want to keep state between each command. Is that possible?
I tried using the prefix context manager, but it doesn't work with the shell_env context manager. When running this code:
with shell_env(ENV1="TEST"):
    with prefix(". myapp"):
        run("echo $ENV2")
I expected my env variable to be set first, then my application to run (which should have set ENV2), but the prefix runs before the shell_env?
I don't really understand the question asked here. Could you give a little more detail on what you are trying to accomplish? However, I tried the same thing you did (with sudo('. myapp')), which threw an AttributeError __exit__ exception.
Finally, I've tried using prefix to source the bash file and executing a sudo command within this context, which works just fine.
@fab.task
def trythis():
    with fab.prefix('. testenv'):
        fab.sudo('echo $ENV1')
When executing the task I get the following output.
[host] Executing task 'trythis'
[host] sudo: echo $ENV1
[host] out: sudo password:
[host] out: testing
[host] out:
Done.
Disconnecting from host... done.
with shell_env(ENV1="TEST"):
    with prefix(". myapp"):
        run("echo $ENV2")
I expected my env variable to be set first, then my application to run (which should have set ENV2), but the prefix runs before the shell_env?
Given Fabric's documentation, the code you've written will generate:
export ENV1="TEST" && . myapp && echo $ENV2
Given that myapp creates ENV2, your code should work the way you want it to; however, not all shells interpret the dot operator the same way, so using source is always a better idea.
with shell_env(ENV1="TEST"):
    with prefix("source myapp"):
        run("echo $ENV2")
You may want to consider a bug in myapp though, and/or double-check that the path and working directory are correctly set.
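If the working directory could be the issue, a rough sketch that pins it down explicitly with cd() might look like this (assuming Fabric 1.x; /path/to/app is a placeholder):

from fabric.api import cd, prefix, run, shell_env

with cd("/path/to/app"):  # placeholder working directory
    with shell_env(ENV1="TEST"):
        with prefix("source myapp"):
            run("echo $ENV2")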
Related
I have a Dockerfile where a few commands need to be executed in a row, not in parallel or asynchronously, so cmd1 finishes, cmd2 starts, and so on.
Dockerfile's RUN is perfect for that. However, one of those RUN commands uses environment variables, meaning I'm calling os.getenv at some point. Sadly, it seems like when passing environment variables, be it through the CLI itself or with the help of a .env file, only CMD works instead of RUN. But CMD is launched concurrently, so the container executes this command but goes right on to the next one, which I definitely don't want.
In conclusion, is there even a way to pass environment variables to RUN commands in a Dockerfile?
To help understand a bit better, here's an excerpt from my Dockerfile:
FROM python:3.8
# Install python dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# Create working directory
RUN mkdir -p /usr/src/my_directory
WORKDIR /usr/src/my_directory
# Copy contents
COPY . /usr/src/my_directory
# RUN calling a script that calls os.getenv at some point (THIS IS THE PROBLEM)
RUN ["python3", "some_script.py"]
# RUN some other commands (this needs to run AFTER the command above finishes)
# If I replace the RUN above with CMD, this gets called right after
RUN ["python3", "some_other_script.py", "--param", "1", "--param2", "config.yaml"]
Excerpt from some_script.py:
import os

if __name__ == "__main__":
    abc = os.getenv("my_env_var")  # this is where I get a ReferenceError if I use RUN
    do_some_other_stuff(abc)
The .env file I'm using with the dockerfile (or docker-compose):
my_env_var=some_url_i_need_for_stuff
Do not use the exec form of a RUN instruction if you want variable substitution, or else use it to execute a shell explicitly. From the documentation:
Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
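The same distinction can be seen with Python's subprocess module, if that makes it easier to picture (a small illustrative sketch, not Docker-specific): a list of arguments runs the program directly with no shell, while a string with shell=True is handed to a shell that performs the expansion.

import subprocess

subprocess.run(["echo", "$HOME"])         # exec-style: prints the literal text $HOME
subprocess.run("echo $HOME", shell=True)  # shell-style: the shell expands $HOME first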
This is how I solved my problem:
write a bash script that executes all relevant commands in the order I want
use ENTRYPOINT instead of CMD or RUN
the bash script will already have the ENV vars, but you can double-check with positional arguments passed to that bash script (see the sketch below)
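If you would rather keep the sequencing in Python than in bash, a minimal sketch of the same idea, using the script names from the question and a hypothetical run_all.py as the ENTRYPOINT:

# run_all.py (hypothetical): run the steps one after another, stopping on the first failure
import subprocess
import sys

STEPS = [
    ["python3", "some_script.py"],
    ["python3", "some_other_script.py", "--param", "1", "--param2", "config.yaml"],
]

for step in STEPS:
    code = subprocess.run(step).returncode
    if code != 0:
        sys.exit(code)

Because an ENTRYPOINT runs at container start rather than at build time, the variables from the .env file (or docker run -e) are already in the environment when some_script.py calls os.getenv.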
I am trying to create a Docker image/container that will run on Windows 10/Linux and test a REST API. Is it possible to embed the function (from my .bashrc file) inside the Dockerfile? The function pytest calls pylint before running the .py file. If the rating is not 10/10, then it prompts the user to fix the code and exits. This works fine on Linux.
Basically, here is the pseudo-code for the Dockerfile from which I am attempting to build an image.
------------------------------------------
From: Ubuntu x.xx
install python
Install pytest
install pylint
copy test_file to the respective folder
Execute pytest test_file_name.py
if the rating is not 10/10:
prompt the user to resolve the rating issue and exit
------------here is the partial code snippet from the func------------------------
function pytest () {
    argument1="$1"
    # Extract the path and file name for pylint when the method name is passed
    pathfilename=`echo ${argument1} | sed 's/::.*//'`
    clear && printf '\e[3J'
    output=$(docker exec -t orch-$USER pylint -r n ${pathfilename})
    if (echo "${output}" | grep 'warning.*error' &>/dev/null ||
        echo "${output}" | egrep 'warning|convention' &>/dev/null)
    then
        echo "${output}" | sed 's/\(warning\)/\o033[33m\1\o033[39m/;s/\(errors\|error\)/\o033[31m\1\o033[39m/'
        YEL='\033[0;1;33m'
        NC='\033[0m'
        echo -e "\n ${YEL}Fix module as per pylint/PEP8 messages to achieve 10/10 rating before pushing to github\n${NC}"
    fi
}
Another option I can think of:
Step 1] Build the image (using the Dockerfile) with all the required software
Step 2] In a .py file, add the call to execute pytest with the logic from the function.
Your thoughts?
You can turn that function into a standalone shell script. (Pretty much by just removing the function wrapper, and taking out the docker exec part of the tool invocation.) Once you've done that, you can COPY the shell script into your image, and once you've done that, you can RUN it.
...
COPY pylint-enforcer.sh .
RUN chmod +x ./pylint-enforcer.sh \
&& ./pylint-enforcer.sh
...
It looks like pylint will produce a non-zero exit code if it emits any messages. For the purposes of a Dockerfile, it may be enough to just RUN pylint -r n .; if it prints anything, it returns a non-zero exit code, which docker build will interpret as "failure" and not proceed.
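If you do end up wrapping the check in a Python entry point instead (the asker's second option), a minimal sketch, assuming pylint is installed in the image and test_file_name.py stands in for the real file:

import subprocess
import sys

result = subprocess.run(["pylint", "-r", "n", "test_file_name.py"],
                        capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Fix the module per the pylint/PEP8 messages to reach a 10/10 rating")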
You might consider whether you'll ever want the ability to build and push an image of code that isn't absolutely perfect (during a production-down event, perhaps), and whether you want to require root-level permissions to run simple code-validity tools (if you can run docker at all, you can edit arbitrary files on the host as root). I'd suggest running these tools from a non-Docker virtual environment during your CI process, and neither placing them in your Dockerfile nor depending on docker exec to run them.
[root@hostname ~]# python script.py                  # allow this
[user@hostname ~]$ sudo python script.py             # deny this
[user@hostname ~]$ sudo -E python script.py          # deny this
[user@hostname ~]$ sudo PATH=$PATH python script.py  # deny this
[user@hostname ~]$ python script.py                  # kindly refuse this
I'm trying to achieve the behavior above. Read further if you care why, or if the example isn't sufficient. Sorry for the sharp tongue, but most of my Stack Exchange questions get hostile questions back instead of answers.
This question arises from requiring an admin to run my script, but the nature of the script requires root's environment variables (and not sudo's).
I've given this some thorough research... below is from this answer
import os

if os.geteuid() == 0:
    pass  # sufficient to determine if running with elevated privileges
But then I started needing to access PATH inside of my script. I noticed that
sudo -E env | grep PATH; env | grep PATH
prints different PATH values. I found it was because of the security policy on PATH. I also found the workaround to PATH is sudo PATH=$PATH ...
However, it's not the only policy-protected environment variable, and at that point, why push this enumeration of environment variables onto the script user? It seems that requiring root explicitly is the best approach, and the script should otherwise just warn the admin to use root explicitly.
Is there such a way to distinguish between root and sudo with Python?
Despite the reasons discussed for not pursuing this solution, I actually did find it, for others wondering if it's possible.
[user@hostname ~]$ sudo python
>>> import os
>>> os.environ["SUDO_UID"] # UID of user running sudo
'uid'
And when logged in as root...
[root@hostname ~]# python
>>> import os
>>> try:
...     uid = os.environ["SUDO_UID"]
...     raise AssertionError("Ran with sudo")
... except KeyError:
...     pass  # SUDO_UID, SUDO_USER, etc. are not set without sudo
I also found a way to access root's PATH just running with sudo.
path = os.popen("su - -c env | grep ^PATH= | cut -d'=' -f2-").read().strip()
I think I like this solution better than relying on how my script is run.
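Putting the two pieces together, a rough sketch of the whole check (the helper name and messages are made up for illustration):

import os
import sys

def invoked_via_sudo():
    # sudo exports SUDO_UID/SUDO_USER; a real root login does not
    return "SUDO_UID" in os.environ

if os.geteuid() != 0:
    sys.exit("This script needs root privileges")
if invoked_via_sudo():
    sys.exit("Please run this from a root login, not through sudo")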
You're going to get "hostile questions" because the premise of your issue doesn't make much sense. In general, if a command can be run as the root user via sudo, then it should not matter whether it was run via sudo (or runas, etc.) or by some other mechanism that sets the UID to root, such as an interactive login as the root user. You should not make running a program conditional on an interactive login as the root user account rather than on a setuid program like sudo (or on your own program being setuid root).
A cheap and dirty solution is to ensure the interactive root login sets a unique env var that is unlikely to be set when your program is run via sudo. That is, however, obviously easy to spoof so if you're doing this for security then that approach is not acceptable.
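A sketch of that cheap-and-dirty check, assuming a made-up marker variable I_AM_REAL_ROOT exported from root's login profile:

import os
import sys

# Hypothetical: root's ~/.bash_profile exports I_AM_REAL_ROOT=1.
# Trivial to spoof, so this is not a security boundary.
if os.geteuid() == 0 and os.environ.get("I_AM_REAL_ROOT") != "1":
    sys.exit("Run this from an interactive root login, not via sudo")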
Use the subprocess module to run commands and check the output:
import sys
from subprocess import check_output

uid = check_output(['bash', '-c', 'echo $UID']).decode().strip()
if uid != '0':
    sys.exit()  # or return
or
user = check_output(['whoami']).decode().strip()
if user != 'root':
    sys.exit()  # or return
It appears that aside from checking $PATH, root and sudo are indistinguishable.
I have created an executable .sh script which contains code to run a Django management command.
cron.sh
#!/bin/sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I can confirm this script and the manage.py command are working by executing the script directly in a terminal:
$ /path/to/cron.sh
When I run the same via crontab it's not working as expected.
What am I doing wrong? I can confirm there is nothing wrong with crontab; it executes the cron.sh file, but /path/to/env/bin/python manage.py some_command is not working as expected.
The cron log is also showing:
CRON[14768]: (root) CMD /path/to/cron.sh > /dev/null 2>&1
I am using bitnami django ami (ubuntu 14.04.5 LTS)
Update
After removing the redirection to /dev/null, I am getting this error now:
"Cannot locate wrapped file"
It seems that it is a PATH problem. I do not know if Django uses specific paths that must be set, but AFAIK the crontab PATH is really limited for security reasons. Just to check whether that is the problem, you could run the following in a shell terminal:
echo $PATH
You will get a complete PATH for instance:
/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
In your crontab, put it above your code:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
Tell me if this works. If it does, try to trim the provided PATH, or even better, provide absolute paths in your code.
I have to say that I don't know if you can perform a cd in the cron like this. I always used absolute paths or cd /some/dir && /path/to/script args.
P.S.: I cannot make comments yet; for this reason, I put this in an answer.
The problem is that you're not using the script that Bitnami uses to load all the environment variables (/opt/bitnami/scripts/setenv.sh).
I would try using this script:
#!/bin/sh
. /opt/bitnami/scripts/setenv.sh
. /path/to/env/activate
cd /path/to/project
/path/to/env/bin/python manage.py some_command
I'm trying to prefill env.password using --initial-password-prompt, but remote is throwing back some strangeness. Let's say that I'm trying to cat a root-owned file as testuser, with 600 permissions on the file. I'm calling sudo('cat /home/testuser/test.txt'), and getting this back:
[testuser@testserver] sudo: cat /home/testuser/test.txt
[testuser@testserver] out: cat: /home/testuser/test.txt: Permission denied
[testuser@testserver] out:
Fatal error: sudo() received nonzero return code 1 while executing!
Requested: cat /home/testuser/test.txt
Executed: sudo -S -p 'sudo password:' -u "testuser" /bin/bash -l -c "cat /home/testuser/test.txt"
Is that piping the prompt right back into the input? I tried using sudo() with pty=False to see if it was an issue with the pseudoterminal, but to no avail.
Here's the weird part: calling run('sudo cat /home/testuser/test.txt') and invoking fab without --initial-password-prompt passes back a password prompt from remote, and on entering the password, everything works fine.
Naturally, running ssh -t testuser@testserver 'sudo cat /home/user/test.txt' prompts for a password and returns the contents of the file correctly. Do I have an issue with my server's shell config, or is the issue with how I'm using sudo()?
Down the line, I'm likely to set up a deploy user with no-password sudo and restricted commands. That'll probably moot the issue, but I'd like to figure this one out if possible. I'm running an Ubuntu 14.10 VPS, in case that's relevant.
Oh, my mistake. I had foolishly set env.sudo_user to my deploy user testuser, thinking that it was specifying the invoking user on remote. In fact, it was specifying the target user, and I was attempting to sudo into myself. Whoops.
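For anyone who hits the same thing, a minimal sketch of the corrected setup (assuming Fabric 1.x; the host and file path come from the question, the task name is made up):

from fabric.api import env, sudo

env.hosts = ["testserver"]
env.user = "testuser"  # the user Fabric logs in as and who invokes sudo
# env.sudo_user is left unset so sudo targets root (the default)
# instead of sudo-ing back into testuser.

def read_protected_file():
    sudo("cat /home/testuser/test.txt")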