Fabric - Running sed on a local file - python

I am trying to run sed on a file locally, here is my code:
from fabric.api import *
from fabric.contrib.files import sed
import os.path

def init(project, repository=None):
    repository = project if not repository else repository
    folder = os.path.join(os.path.dirname(__file__), repository)
    local('cp -R bin/* %s' % folder)
    with lcd(folder):
        sed('wsgi.py', '{PROJECTNAME}', project)
Fabric then prompts me to specify a host. Is there any way I can run sed locally like this? I also tried:
local("sed -i \'s/{PROJECTNAME}/%s/\' wsgi.py" % project)
But I am getting the following error:
sed: -i may not be used with stdin

I don't know the sed contrib API, but fabric's docs say about the local function:
local is simply a convenience wrapper around the use of the builtin Python subprocess module with shell=True activated. If you need to do anything special, consider using the subprocess module directly.
So I suggest you just call subprocess.call() with shell=False; that should probably fix the error with sed -i.
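For example, a minimal sketch of that suggestion (assuming GNU sed, and with replace_project_name as a hypothetical stand-in for the relevant part of init()):

import subprocess

def replace_project_name(folder, project):
    # An argument list with shell=False (the default): no shell parses the
    # command, so sed receives the expression as a single argument.
    subprocess.call(
        ['sed', '-i', 's/{PROJECTNAME}/%s/' % project, 'wsgi.py'],
        cwd=folder)  # cwd stands in for the lcd(folder) context manager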

I managed to get this working by using the following:
local("sed -i \'\' -e\'s/{PROJECTNAME}/%s/\' wsgi.py" % project)
I am not sure why this works with the extra '' or what the implications are exactly, but it seems to work.

Presumably, the reason that this fixes your issue is that you're using Mac OS X (or another BSD).
The BSD version of sed requires the -i argument to have a value. That value should be the file extension that sed will use to create a backup, in case an error occurs during sed processing and the file needs to be rolled back to its original contents. The value can also be an empty string (''), indicating to sed that no backup file should be created.
GNU's version of sed is smarter, and knows that, if no value is passed, then no backup should be created. It does not need the empty string.
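If portability matters, one sketch (assuming the GNU/BSD split is the only difference in play; sed_inplace_cmd is an illustrative helper, not part of Fabric) is to branch on the platform:

import platform

def sed_inplace_cmd(expr, filename):
    # BSD sed (macOS) requires an explicit, possibly empty, backup suffix
    # after -i; GNU sed treats the suffix as optional.
    if platform.system() == 'Darwin':
        return "sed -i '' -e '%s' %s" % (expr, filename)
    return "sed -i -e '%s' %s" % (expr, filename)

The resulting string can then be passed to local() as in the answer above.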

Related

Save an output file on disk using shell commands in a django app

I am creating a script to run shell commands for simulation purposes using a web app. I want to run a shell command in a Django app and then save the output to a file.
The problem I am facing is that when running the shell command, the output tries to get saved relative to the URL that is invoked (for example: localhost:8000/projects), which is understandable.
I want to save the output to, for example:
/home/myoutput/output.txt rather than /projects or /tasks
I have to run a whole script and save its output to the txt file later, but that is easy once this is done.
I have already tried the os.chdir() function to change directory to /desiredpath.
from subprocess import run

# the function invoked from views.py
def invoke_mpiexec():
    run('echo "this is a test file" > fahadTest.txt')
FileNotFoundError at /projects
Exception Type: FileNotFoundError
First I want to say that directly calling external programs from a web request in Django is a bit of an anti-pattern. The preferred approach is to use a work queue like Celery or rq, but that comes with a bit of added complexity.
That being said, you can solve your problem with the argument shell=True:
from subprocess import run

# the function invoked from views.py
def invoke_mpiexec():
    run('echo "this is a test file" > fahadTest.txt', shell=True)
Here is the documentation:
If shell is True, the specified command will be executed through the
shell. This can be useful if you are using Python primarily for the
enhanced control flow it offers over most system shells and still want
convenient access to other shell features such as shell pipes,
filename wildcards, environment variable expansion, and expansion of ~
to a user’s home directory. However, note that Python itself offers
implementations of many shell-like features (in particular, glob,
fnmatch, os.walk(), os.path.expandvars(), os.path.expanduser(), and
shutil).
Note: Using shell=True can lead to security issues:
If the shell is invoked explicitly, via shell=True, it is the
application’s responsibility to ensure that all whitespace and
metacharacters are quoted appropriately to avoid shell injection
vulnerabilities.
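For instance, if any part of the command comes from user input, quoting it first is one way to stay safe (a sketch using shlex.quote; the filename parameter is hypothetical):

import shlex
from subprocess import run

def write_test_file(filename):
    # shlex.quote() escapes whitespace and shell metacharacters, so a
    # hostile filename cannot inject extra shell commands.
    run('echo "this is a test file" > %s' % shlex.quote(filename), shell=True)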
You should use subprocess.call with the stdout argument:
import subprocess

def invoke_mpiexec():
    f = open("fahadTest.txt", "w")
    subprocess.call(['echo', 'this is a test file'], stdout=f)
or use the write function:
def invoke_mpiexec():
    f = open('fahadTest.txt', 'w')
    f.write("Now the file has more content!")
    f.close()
So I figured it out.
Below is the fix:
run('mkdir -p $HOME/phdata/test/ && echo "this is a test file" > $HOME/phdata/test/fahadTest.txt', shell=True)
mkdir -p creates the directory if it doesn't exist
$HOME is used to go to the home directory, and from there you can navigate to folders
the shell=True argument is required to run it as a shell command
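For comparison, a pure-Python sketch of the same fix (same paths as above; os.makedirs with exist_ok=True plays the role of mkdir -p):

import os

def invoke_mpiexec():
    # Equivalent of: mkdir -p $HOME/phdata/test && echo ... > fahadTest.txt
    target_dir = os.path.join(os.path.expanduser("~"), "phdata", "test")
    os.makedirs(target_dir, exist_ok=True)
    with open(os.path.join(target_dir, "fahadTest.txt"), "w") as f:
        f.write("this is a test file\n")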
You can also create an ssh connection and run the commands/scripts on the remote server. For this, my approach would be to create a script on the remote server, call it through my app, and provide arguments to it. Another workaround, which is not as good but works, is to create a script on the server using the line above and then call it.

Execute bash commands that are within a list from python

I've got this list:
commands = ['cd var','cd www','cd html','sudo rm -r folder']
I'm trying to execute the elements inside one by one, as a bash script, with no success. Do I need a for loop here?
How can I achieve that? Thanks, all!
import os

for command in commands:
    os.system(command)
is one way you could do it ... although just cd'ing into a bunch of directories isn't going to have much impact.
NOTE: this will run each command in its own subshell ... so they will not remember their state (i.e. any directory changes or environment variables).
If you need to run them all in one subshell, then you need to chain them together with "&&":
os.system(" && ".join(commands)) # would run all of the commands in a single subshell
As noted in the comments, it is generally preferred to use the subprocess module with check_call or one of the other variants. However, in this specific instance I personally think it's six of one, half a dozen of the other, and os.system was less typing (and it's going to exist whether you are using python3.7 or python2.5). In general, though, use subprocess; exactly which call to use probably depends on the version of python you are using. There is a great description in the post linked in the comments by @triplee of why you should use subprocess instead.
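A sketch of that subprocess variant, using the same commands list (shell=True is needed so the shell can interpret the && chaining):

import subprocess

commands = ['cd var', 'cd www', 'cd html', 'sudo rm -r folder']

# Run everything in one shell so the cd state carries over;
# check_call raises CalledProcessError on a non-zero exit status.
subprocess.check_call(" && ".join(commands), shell=True)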
Really, you should reformat your commands to simply:
commands = ["sudo rm -rf var/www/html/folder"]
Note that you will probably need to add your python file to your sudoers file.
Also, I'm not sure exactly what you are trying to accomplish here, but I suspect this might not be the ideal way to go about it (although it should work).
This is just a suggestion, but if you just want to change directories and delete folders, you could use os.chdir() and shutil.rmtree():
from os import chdir
from os import getcwd
from shutil import rmtree

directories = ['var','www','html','folder']

print(getcwd())
# current working directory: $PWD

for directory in directories[:-1]:
    chdir(directory)

print(getcwd())
# current working directory: $PWD/var/www/html

rmtree(directories[-1])
Which will cd three directories deep into html, and delete folder. The current working directory changes when you call chdir(), as seen when you call os.getcwd().
declare -a command=("cd var" "cd www" "cd html" "sudo rm -r folder")

## now loop through the above array
for i in "${command[@]}"
do
    echo "$i"
    # or do whatever with individual element of the array
done

# You can access them using echo "${command[0]}", "${command[1]}" also

Error in check_call() subprocess, executing 'mv' unix command: "Syntax error: '(' unexpected"

I'm making a python script for Travis CI.
.travis.yml
...
script:
- support/travis-build.py
...
The python file travis-build.py is something like this:
#!/usr/bin/env python
from subprocess import check_call
...
check_call(r"mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder", shell=True)
...
When the Travis build reaches that line, I get an error:
/bin/sh: 1: Syntax error: "(" unexpected
I have tried a lot of different ways to write it, but I get the same result. Any ideas?
Thanks in advance!
Edit
My current directory layout:
- my_project/final_folder/
- cmake-3.0.2-Darwin64-universal/
- fileA
- fileB
- fileC
With this command I'm trying to move all the current files fileA, fileB and fileC, excluding the my_project and cmake-3.0.2-Darwin64-universal folders, into ./my_project/final_folder. If I execute this command in a Linux shell it does what I want, but not through check_call().
Note: I can't move the files one by one, because there are many others
I don't know which shell Travis is using by default because I don't specify it; I only know that if I write the command in my .travis.yml:
.travis.yml
...
script:
# Here is the previous Travis code
- mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder
...
It works. But if I use the script, it fails.
I found this command from the following issue:
How to use 'mv' command to move files except those in a specific directory?
You're using the bash feature extglob to exclude the files you don't want to move. You'll need to enable it in order for the two entries you're specifying to be excluded.
The python subprocess module explicitly uses /bin/sh when you use shell=True, which doesn't enable the use of bash features like this by default (it's a compliance thing to make it more like original sh).
If you want to get bash to interpret the command; you have to pass it to bash explicitly, for example using:
subprocess.check_call(["bash", "-O", "extglob", "-c", "mv !(my_project|cmake-3.0.2-Darwin64-universal) ./my_project/final_folder"])
I would not choose to do the job in this manner, though.
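For instance, a pure-Python sketch that avoids the shell entirely (directory names taken from the question):

import os
import shutil

# Move everything in the current directory into my_project/final_folder,
# except the two entries that must stay put.
exclude = {'my_project', 'cmake-3.0.2-Darwin64-universal'}
for entry in os.listdir('.'):
    if entry not in exclude:
        shutil.move(entry, os.path.join('my_project', 'final_folder', entry))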
Let me try again: in which shell do you expect your syntax !(...) to work? Is it bash? Is it ksh? I have never used it, and a quick search for a corresponding bash feature led nowhere. I suspect your syntax is just wrong, which is what the error message is telling you. In that case, your problem is entirely independent of python and the subprocess module.
If a special shell you have on your system supports this syntax, you need to make sure that Python is using the same shell when invoking your command. It tells you which shell it has been using: /bin/sh. This is usually just a link to the real shell executable. Does it point to the same shell you have tested your command in?
Edit: the SO solution you referenced contains the solution in the comments:
Tip: Note however that using this pattern relies on extglob. You can
enable it using shopt -s extglob (If you want extended globs to be
turned on by default you can add shopt -s extglob to .bashrc)
Just to demonstrate that different shells might deal with your syntax in different ways, first using bash:
$ !(uname)
-bash: !: event not found
And then, using /bin/dash:
$ !(uname)
Linux
Unless you pass shell=True, the argument to a subprocess method must be a list of command line arguments. Use e.g. shlex.split() to split the string into correct command line arguments:
import shlex, subprocess
subprocess.check_call(shlex.split("mv !(...)"))
EDIT:
So, the goal is to move files/directories, with the exception of some file(s)/directory(ies). By playing around with bash, I could get it to work like this:
mv `ls | grep -v -e '\(exclusion1\|exclusion2\)'` my_project
So in your situation that would be:
mv `ls | grep -v -e '\(my_project\|cmake-3.0.2-Darwin64-universal\)'` my_project
This could go into the subprocess.check_call(..., shell=True) and it should do what you expect it to do.
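Put together, a sketch (plain /bin/sh understands the backquote substitution, so shell=True suffices here):

import subprocess

# The backquotes run the ls | grep pipeline first; mv then receives every
# entry except the two excluded ones.
subprocess.check_call(
    "mv `ls | grep -v -e '\\(my_project\\|cmake-3.0.2-Darwin64-universal\\)'`"
    " my_project",
    shell=True)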

lsof command doesn't look in specified directory when I specify a user

So I was writing a python script and my goal was to use lsof to list all open files under a specific directory (my home folder) for the local user and only output the 'uniq' entries.
My script looked like this:
import os, sys, getpass
user = getpass.getuser()
cmd = "lsof -u " + user + " +d ~ | sort | uniq"
os.system(cmd)
This kind of does what I want: it runs lsof for the current local user, but it fails to look specifically in the home directory that I specified. Instead it runs lsof on the root directory and lists entries for the entire file system for that user. However, when I run the same command without -u user, it looks specifically in the home directory. I've been looking into why this is, and yes, I have tried using +d /home/ and +d ~/home/ instead of just +d ~, with no success, so I am kind of stumped. Any advice would be great :)
From the man page:
Normally list options that are specifically stated are ORed - i.e., specifying the -i option without an address and the -ufoo option produces a listing of all network files OR files belonging to processes owned by user ''foo''.
There are some exceptions, but none of them apply in your case.
So, -u me +d ~ means "all files that are opened by me OR are in my home directory".
So how do you do what you want?
The -a option may be used to AND the selections. For example, specifying -a, -U, and -ufoo produces a listing of only UNIX socket files that belong to processes owned by user ''foo''.
Throw a -a in there:
cmd = "lsof -a -u " + user + " +d ~ | sort | uniq"
By the way, in general, you really don't want to use os.system in Python, which is why the documentation specifically says:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
And really, why use sort and uniq instead of sorting in Python? Or, alternatively, if all you want to do is get this shell pipeline run, and not process it in any way in Python, why use Python instead of bash in the first place?
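For illustration, a sketch combining both suggestions (subprocess instead of os.system, with the sort/uniq done in Python):

import getpass
import os
import subprocess

user = getpass.getuser()
# -a ANDs the -u (user) and +d (directory) selections together; the ~ must
# be expanded by Python because no shell is involved here.
out = subprocess.check_output(
    ["lsof", "-a", "-u", user, "+d", os.path.expanduser("~")])
# sorted(set(...)) replaces the `sort | uniq` pipeline.
for line in sorted(set(out.decode().splitlines())):
    print(line)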
lsof joins options together using OR by default, try adding the -a flag to AND them together.

How to use export with Python on Linux

I need to make an export like this in Python :
# export MY_DATA="my_export"
I've tried to do:
# -*- python-mode -*-
# -*- coding: utf-8 -*-
import os
os.system('export MY_DATA="my_export"')
But when I list the exports, "MY_DATA" does not appear:
# export
How can I do an export with Python without saving "my_export" into a file?
export is a command that you give directly to the shell (e.g. bash), to tell it to add or modify one of its environment variables. You can't change your shell's environment from a child process (such as Python); it's just not possible.
Here's what's happening when you try os.system('export MY_DATA="my_export"')...
/bin/bash process, command `python yourscript.py` forks python subprocess
  |_ /usr/bin/python process, command `os.system()` forks /bin/sh subprocess
       |_ /bin/sh process, command `export ...` changes its local environment
When the bottom-most /bin/sh subprocess finishes running your export ... command, then it's discarded, along with the environment that you have just changed.
You actually want to do
import os
os.environ["MY_DATA"] = "my_export"
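Child processes started after that point inherit the change, e.g. (a small sketch):

import os

os.environ["MY_DATA"] = "my_export"
# Any child process started from here on inherits the updated environment:
os.system('echo "MY_DATA is: $MY_DATA"')  # prints: MY_DATA is: my_export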
Another way to do this, if you're in a hurry and don't mind the hacky aftertaste, is to have the python script print out the commands that set the environment, and then execute that output in your bash environment. Not ideal, but it can get the job done in a pinch. It's not very portable across shells, so YMMV.
$(python -c 'print "export MY_DATA=my_export"')
(you can also enclose the statement in backticks in some shells ``)
Not that simple:
python -c "import os; os.putenv('MY_DATA','1233')"
$ echo $MY_DATA # <- empty
But:
python -c "import os; os.putenv('MY_DATA','123'); os.system('bash')"
$ echo $MY_DATA #<- 123
I have an excellent answer.
#! /bin/bash
output=$(git diff origin/master..origin/develop | \
python -c '
# DO YOUR HACKING
variable1_to_be_exported="Yo Yo"
variable2_to_be_exported="Honey Singh"
# ... and so on
magic=""
magic += "export onShell_var1=\"" + str(variable1_to_be_exported) + "\"\n"
magic += "export onShell_var2=\"" + str(variable2_to_be_exported) + "\""
print(magic)
'
)
eval "$output"
echo "$onShell_var1"  # Output will be Yo Yo
echo "$onShell_var2"  # Output will be Honey Singh
Mr. Alex Tingle is correct about all those process and sub-process details.
It can be achieved as I've shown above.
The key concept is:
Whatever is printed from python gets stored in the catching variable in bash [output]
We can execute any command given as a string using eval
So, prepare python's printed output as meaningful bash commands
Use eval to execute it in bash
And you can see your results
NOTE
Always execute the eval using double quotes, or else bash will mess up your \n's and the output will be strange
PS: I don't like bash, but you have to use it
I've had to do something similar on a CI system recently. My options were to do it entirely in bash (yikes) or use a language like python which would have made programming the logic much simpler.
My workaround was to do the programming in python and write the results to a file.
Then use bash to export the results.
For example:
# do calculations in python
with open("./my_export", "w") as f:
    f.write(your_results)

# then in bash
export MY_DATA="$(cat ./my_export)"
rm ./my_export  # if no longer needed
You could try os.environ["MY_DATA"] instead.
Kind of a hack because it's not really python doing anything special here, but if you run the export command in the same sub-shell, you will probably get the result you want.
import os
cmd = "export MY_DATA='1234'; echo $MY_DATA" # or whatever command
os.system(cmd)
In the hope of providing clarity over common confusion...
I have written many python <--> bash <--> elfbin toolchains, and the proper way to look at it is this:
Each process (originator) has a state of the environment inherited from whatever invoked it. Any change remains local to that process. Transferring an environment state is a function by itself and runs in two directions, each with its own caveats. The most common thing is to modify the environment before running a sub-process. To go down to the metal, look at the exec() call in C. There is a variant that takes a pointer to environment data. This is the only actually supported transfer of environment in typical OSes.
Shell scripts will create a state to pass when running children when you do an export. Otherwise they just use what they got in the first place.
In all other cases it will be some generic mechanism used to pass a set of data, allowing the calling process itself to update its environment based on the child process's output.
Ex:
ENVUPDATE=$(CMD_THAT_OUTPUTS_KEYVAL_LISTS)
echo "$ENVUPDATE" > "$TMPFILE"
source "$TMPFILE"
The same can of course be done using json, xml or other things as long as you have the tools to interpret and apply.
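For instance, a sketch of the json variant (the child command and its --dump-env-json flag are hypothetical):

import json
import os
import subprocess

# A child prints its key/value pairs as a single JSON object (hypothetical
# command); the parent interprets and applies them to its own environment.
out = subprocess.check_output(["some-tool", "--dump-env-json"])
os.environ.update(json.loads(out.decode()))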
The need for this may be (50% chance) a sign that the basic primitives are being misconstrued and that you need a better config or parameter interchange in your solution.
Oh, and in python I would do something like this (needs improvement depending on your situation):

import os
import re

RE_KV = re.compile(r'([a-z][\w]*)\s*=\s*(.*)')

OUTPUT = RunSomething(...)  # assuming output like 'k1=v1 k2=v2'
for kv in OUTPUT.split(' '):
    try:
        k, v = RE_KV.match(kv).groups()
        os.environ[k] = str(v)
    except AttributeError:
        # the not-a-property case...
        pass
One line solution:
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
echo $python_include_path  # prints /home/<usr>/anaconda3/include/python3.6m in my case
Breakdown:
Python call
python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'
It's launching a python script that:
1. imports sysconfig
2. gets the python include path corresponding to this python binary (use "which python" to see which one is being used)
3. prints the line "python_include_path={0}", with {0} being the path from step 2
Eval call
eval `python -c 'import sysconfig;print("python_include_path={0}".format(sysconfig.get_path("include")))'`
It's executing, in the current bash instance, the output from the python script. In my case, it's executing:
python_include_path=/home/<usr>/anaconda3/include/python3.6m
In other words, it's setting the environment variable "python_include_path" with that path for this shell instance.
Inspired by:
http://blog.tintoy.io/2017/06/exporting-environment-variables-from-python-to-bash/
import os
import shlex
from subprocess import Popen, PIPE

os.environ.update(key=value)  # placeholders: set whatever variables you need

# Pass the updated environment explicitly to the child process.
res = Popen(shlex.split("cmd xxx -xxx"), stdin=PIPE, stdout=PIPE, stderr=PIPE,
            env=os.environ).communicate('y\ny\ny\n'.encode('utf8'))
stdout = res[0]
stderr = res[1]
os.system('/home/user1/exportPath.ksh')
exportPath.ksh:
export MY_DATA="my_export"
