I'm trying to write some Python functions that connect to a Linux terminal and do things (in this case, create a file). The code I have works partially. The only thing that doesn't work is doing something after the file contents have been entered: for instance, you create the file and then want to navigate somewhere else (cd /tmp). Instead of executing the next command, the terminal just appends it to the file that was created.
def create_file(self, name, contents, location):
    try:
        log.info("Creating a file...")
        self.device.execute("mkdir -p {}".format(location))
        self.cd_path(location)
        self.device.sendline("cat > {}".format(name))
        self.device.sendline("{}".format(contents))
        self.device.sendline("EOF")  # send the CTRL+D command to save and exit; I tried "^D" here as well
    except:
        log.info("Failed to create the file!")
The contents of the file are:
cat test.txt
#!/bin/bash
echo "Fail Method Requested"
exit 1
EOF
ls -d /tmp/asdasd
The order of commands executed is:
execute.create_file(test.txt, the_message, the_location)
execute.check_path("/tmp/adsasd") #this function just checks with ls -d if the directory exists.
I have tried with sendline the following combinations:
^D, EOF, <<EOF
I don't really understand how to make this work. I just want to create a file with a specific message. (When researching how to do this with vi I ran into the same problem, except there the command I needed was the one for ESC.)
If anyone could help with some input that would be great!!
Edit: As Rob mentioned below, sending the character "\x04" actually works. For anyone else having this issue, you can also consult this chart for other combinations if needed:
http://donsnotes.com/tech/charsets/ascii.html
You probably need to send the EOF character, which is typically CONTROL-D, not the three characters E, O, and F.
self.device.sendline("\x04")
http://wiki.bash-hackers.org/syntax/redirection#here_documents
Here docs allow you to use any file input termination string you like to represent end of file (such as the literal EOF you're attempting to use now). Quoting that string tells the shell not to interpret expansions inside the heredoc content, ensuring that said content is treated as literal.
Using pipes.quote() here ensures that filenames with literal quotes, $s, spaces, or other surprising characters won't break your script. (Of course, you'll need to import pipes; on Python 3, by contrast, this has moved to shlex.quote()).
self.device.sendline("cat > {} <<'EOF'".format(pipes.quote(name)))
Then you can write the EOF as is, having told bash to interpret it as the end of file input.
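Putting it together, a minimal sketch of the question's create_file using the quoted heredoc (it assumes the same self.device, cd_path, and log objects from the question, which aren't shown here):
import pipes  # on Python 3, use shlex.quote() instead
def create_file(self, name, contents, location):
    try:
        log.info("Creating a file...")
        self.device.execute("mkdir -p {}".format(location))
        self.cd_path(location)
        # quote the terminator so the shell treats the heredoc body literally
        self.device.sendline("cat > {} <<'EOF'".format(pipes.quote(name)))
        self.device.sendline(contents)
        # a plain "EOF" line now really terminates the input
        self.device.sendline("EOF")
    except Exception:
        log.info("Failed to create the file!")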
Related
This is my use case: I have a python script that returns the name of a remote server and a file path from a config file. I need a way to use those two or more parameterized variables and their values as input to my shell script, which will then sync files by connecting to the remote server. Any tips appreciated.
See if this is what you were looking for. I have a python script that prints out 3 lines, each a different value. For you, a[0] would be the server, a[1] the first file, and a[2] the second. The number of lines is arbitrary and this would work for any number, one per line, and allow spaces in the file names.
The "<(" executes what is inside and creates something like a pipe, which the readarray command reads (it takes standard input, namely "<")
> readarray a < <(python -c 'print "myserver:8080"; print "file1 which may have spaces"; print "another file"')
> echo ${a[0]}
myserver:8080
> echo ${a[1]}
file1 which may have spaces
> echo ${a[2]}
another file
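If it helps, the python -c one-liner above could be replaced with a small script of your own along these lines (emit_sync_args.py and the hard-coded values are purely illustrative; in practice you would fill server and files from your config file):
# emit_sync_args.py -- hypothetical example; load these values from your config
server = "myserver:8080"
files = ["file1 which may have spaces", "another file"]
print(server)        # line 1 -> a[0] in the shell
for path in files:   # following lines -> a[1], a[2], ...
    print(path)
You would then use readarray a < <(python emit_sync_args.py) exactly as above.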
My python file has these 2 variables:
week_date = "01/03/16-01/09/16"
cust_id = "12345"
How can I read these into a shell script that takes in these 2 variables?
My current shell script requires manual editing of "dt" and "id". I want to read the Python variables into the shell script so I can just edit my Python parameter file and not so many files.
shell file:
#!/bin/sh
dt="01/03/16-01/09/16"
cust_id="12345"
In a new Python file I could just import the parameter Python file.
Consider something akin to the following:
#!/bin/bash
# ^^^^ NOT /bin/sh, which doesn't have process substitution available.
python_script='
import sys
d = {} # create a context for variables
exec(open(sys.argv[1], "r").read()) in d # execute the Python code in that context
for k in sys.argv[2:]:
print "%s\0" % str(d[k]).split("\0")[0] # ...and extract your strings NUL-delimited
'
read_python_vars() {
  local python_file=$1; shift
  local varname
  for varname; do
    IFS= read -r -d '' "${varname#*:}"
  done < <(python -c "$python_script" "$python_file" "${@%%:*}")
}
You might then use this as:
read_python_vars config.py week_date:dt cust_id:id
echo "Customer id is $id; date range is $dt"
...or, if you didn't want to rename the variables as they were read, simply:
read_python_vars config.py week_date cust_id
echo "Customer id is $cust_id; date range is $week_date"
Advantages:
Unlike a naive regex-based solution (which would have trouble with some of the details of Python parsing -- try teaching sed to handle both raw and regular strings, and both single and triple quotes without making it into a hairball!) or a similar approach that used newline-delimited output from the Python subprocess, this will correctly handle any object for which str() gives a representation with no NUL characters that your shell script can use.
Running content through the Python interpreter also means you can determine values programmatically -- for instance, you could have some Python code that asks your version control system for the last-change-date of relevant content.
Think about scenarios such as this one:
start_date = '01/03/16'
end_date = '01/09/16'
week_date = '%s-%s' % (start_date, end_date)
...using a Python interpreter to parse Python means you aren't restricting how people can update/modify your Python config file in the future.
Now, let's talk caveats:
If your Python code has side effects, those side effects will obviously take effect (just as they would if you chose to import the file as a module in Python). Don't use this to extract configuration from a file whose contents you don't trust.
Python strings are Pascal-style: They can contain literal NULs. Strings in shell languages are C-style: They're terminated by the first NUL character. Thus, some variables can exist in Python that cannot be represented in shell without nonliteral escaping. To prevent an object whose str() representation contains NULs from spilling forward into other assignments, this code terminates strings at their first NUL.
Now, let's talk about implementation details.
${@%%:*} is an expansion of $@ which trims all content after and including the first : in each argument, thus passing only the Python variable names to the interpreter. Similarly, ${varname#*:} is an expansion which trims everything up to and including the first : from each argument, leaving only the destination shell variable name to be passed to read. See the bash-hackers page on parameter expansion.
Using <(python ...) is process substitution syntax: The <(...) expression evaluates to a filename which, when read, will provide output of that command. Using < <(...) redirects output from that file, and thus that command (the first < is a redirection, whereas the second is part of the <( token that starts a process substitution). Using this form to get output into a while read loop avoids the bug mentioned in BashFAQ #24 ("I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?").
The IFS= read -r -d '' construct has a series of components, each of which makes the behavior of read more true to the original content:
Clearing IFS for the duration of the command prevents whitespace from being trimmed from the end of the variable's content.
Using -r prevents literal backslashes from being consumed by read itself rather than represented in the output.
Using -d '' sets the first character of the empty string '' to be the record delimiter. Since C strings are NUL-terminated and the shell uses C strings, that character is a NUL. This ensures that variables' content can contain any non-NUL value, including literal newlines.
See BashFAQ #001 ("How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?") for more on the process of reading record-oriented data from a string in bash.
Other answers give a way to do exactly what you ask for, but I think the idea is a bit crazy. There's a simpler way to satisfy both scripts - move those variables into a config file. You can even preserve the simple assignment format.
Create the config itself: (ini-style)
dt="01/03/16-01/09/16"
cust_id="12345"
In python:
config_vars = {}
with open('the/file/path', 'r') as f:
    for line in f:
        if '=' in line:
            k, v = line.split('=', 1)
            config_vars[k.strip()] = v.strip().strip('"')  # drop the newline and surrounding quotes
week_date = config_vars['dt']
cust_id = config_vars['cust_id']
In bash:
source "the/file/path"
And you don't need to do crazy source parsing anymore. Alternatively you can just use JSON for the config file and then use the json module in Python and jq in the shell for parsing.
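If you go the JSON route, the Python side might look something like the sketch below (the file name and the {"dt": ..., "cust_id": ...} layout are just assumptions for illustration):
import json
with open('the/file/path.json') as f:   # e.g. {"dt": "01/03/16-01/09/16", "cust_id": "12345"}
    config = json.load(f)
week_date = config['dt']
cust_id = config['cust_id']
On the shell side, something like jq -r '.dt' the/file/path.json would then print the same value.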
I would do something like this. You may want to modify it a little to include/exclude quotes, as I haven't really tested it for your scenario:
#!/bin/sh
exec <"$python_filename"
while read line
do
    match=`echo $line|grep "week_date ="`
    if [ $? -eq 0 ]; then
        dt=`echo $line|cut -d '"' -f 2`
    fi
    match=`echo $line|grep "cust_id ="`
    if [ $? -eq 0 ]; then
        cust_id=`echo $line|cut -d '"' -f 2`
    fi
done
(On OS X 10.10.1) I am trying to use a paired-end merger (Casper) within a Python script. I'm using os.system (I don't want to use the subprocess or pexpect modules). Here is the line in my script that doesn't work:
os.system("casper %s %s -o %s"%(filein[0],filein[1],fileout))
#filein[0]: input file 1
#filein[1]: input file 2
#fileout: output prefix (default==casper)
Once my script is launched, only the first two string parameters of this command are interpreted, not the third one, so the output file gets the default prefix name. Since my function iterates through a lot of fastq files, they all get merged into a single "casper.fastq" file.
I tried messing with the part of the command that doesn't work (right after -o), putting in a meaningless string, and it still executes with no error and the default output. Here is the "messed up" line:
os.system("casper %s %s -ldkfnlqdskgfno %s"%(filein[0],filein[1],fileout))
Could anybody help in understanding what the heck is going on?
Print the command before executing it to check whether it is wrapped correctly (for example, file names may need to be quoted).
Execute the command you assume is being generated directly in a shell to see whether it is misinterpreted.
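A minimal sketch of the first suggestion (the filein/fileout values below are placeholders; pipes.quote guards against spaces or other shell metacharacters in the file names):
import os
import pipes  # on Python 3, use shlex.quote() instead
filein = ["sample_R1.fastq", "sample_R2.fastq"]  # placeholder input files
fileout = "sample_merged"                        # placeholder output prefix
cmd = "casper %s %s -o %s" % (pipes.quote(filein[0]),
                              pipes.quote(filein[1]),
                              pipes.quote(fileout))
print(cmd)      # inspect the exact command line before running it
os.system(cmd)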
This might be a simple question, but I am new to bash scripting and have spent quite a bit of time on this with no luck; I hope I can get an answer here.
I am trying to write a bash script that reads individual lines from a text file and passes them along as arguments to a python script. I have a list of files (which I have saved into a single text file, all on individual lines) that I need to use as arguments in my python script, and I would like to use a bash script to send them all through. Of course I could take the tedious route and copy/paste the rest of the python command onto individual lines of the script, but I would think there is a way to do this with the "read line" command. I have tried all sorts of combinations of commands, but here is the most recent one I have:
#!/bin/bash
# Command Output Test
cat infile.txt << EOF
while read line
do
VALUE = $line
python fits_edit_head.py $line $line NEW_PARA 5
echo VALUE+"huh"
done
EOF
When I do this, all I get returned is the individual lines from the input file. I have the extra VALUE there to see if it will print that, but it does not. Clearly there is something simple about the "read line" command that I do not understand but after messing with it for quite a long time, I do not know what it is. I admit I am still a rookie to this bash scripting game, and not a very good one at that. Any help would certainly be appreciated.
You probably meant:
while read line; do
VALUE=$line ## No spaces allowed
python fits_edit_head.py "$line" "$line" NEW_PARA 5 ## Quote properly to isolate arguments well
echo "$VALUE+huh" ## You don't expand without $
done < infile.txt
Python may also read stdin, in which case it could accidentally consume the input coming from infile.txt, so you can use another file descriptor instead:
while read -u 4 line; do
...
done 4< infile.txt
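To make the hazard concrete, imagine a purely hypothetical fits_edit_head.py that also touches stdin; inside the plain while read loop it would swallow the rest of infile.txt on the first iteration, so only one line would be processed:
# hypothetical fits_edit_head.py that reads stdin in addition to its arguments
import sys
args = sys.argv[1:]
leftover = sys.stdin.read()  # consumes everything the while-read loop has not read yet
print("arguments: %r" % (args,))
print("swallowed %d bytes from stdin" % len(leftover))
With the read -u 4 form, infile.txt is attached to file descriptor 4 rather than stdin, so a script like this can no longer steal the loop's input.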
Better yet if you're using Bash 4.0, it's safer and cleaner to use readarray:
readarray -t lines < infile.txt
for line in "${lines[#]}; do
...
done
I have added this line to my .vimrc:
map <F4> :w !python<cr>
When I open gvim to edit an unnamed Python file, there are only two lines in it:
x=input("which one do you like ? ")
print(x)
I press F4 and get "EOF when reading a line". How do I solve this?
When you add map <F4> :w<cr>:!python %<cr> or imap <F4> <Esc>:w <cr>:!python %<cr>, it can only run a named file; if the file has no name, the mapping will not work. How can I make the unnamed file run?
@benjifisher's answer is correct: the input() function is the problem.
The :w !python pipes the program to python through stdin (basically the same as
echo 'input("x")' | python
which also fails if run in the shell). input() then tries to read from stdin, but it can't: python has already read the program from that same stdin, the pipe is at its end, and it will never contain new data. So input() just reads EOF.
To see that python is reading from stdin, look at :h :w_c, which shows that the buffer is passed as stdin to the cmd, which in this case is python.
:w_c  :write_c
:[range]w[rite] [++opt] !{cmd}
        Execute {cmd} with [range] lines as standard input
        (note the space in front of the '!'). {cmd} is
        executed like with ":!{cmd}", any '!' is replaced with
        the previous command :!.
If the buffer had contained something that wasn't reading from stdin your mapping would have worked.
For example if the unnamed buffer contains
print(42)
running the command :w !python in vim prints 42.
So the problem isn't that the mapping fails. The problem is that your program doesn't know how to get input. The solution is to either use a named file or not write interactive programs in vim; use the python interpreter for that.
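If you do want the unnamed buffer to survive being piped through :w !python, one option (a sketch, assuming Python 3, where input() returns a string) is to guard the interactive call:
# defensive variant of the two-line buffer: fall back to a default
# when stdin has already been consumed, as happens with :w !python
try:
    x = input("which one do you like ? ")
except EOFError:
    x = "no interactive input available"
print(x)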
Since you have :w in your mapping, I am assuming you either want to run the script directly from insert mode or you want to save it anyway before running it.
Multiple commands in vim require a separator such as <bar> (i.e. |), or a <CR> (carriage return) between and after commands.
You can put both of the mappings below in your .vimrc and they should meet your requirement on hitting F4, whether you are in normal mode or insert mode.
If you are in normal mode, you are fine with map:
map <F4> :w<cr>:!python %<cr>
While for insert mode, you would need imap and an Esc to get out of insert mode:
imap <F4> <Esc>:w <cr>:!python %<cr>
I think the problem is the input() line. Python is looking for input and not finding any. All it finds (wherever it looks) is an EOF.
One way to do this without a temp file would be to do something like this (with a small helper function):
function! GetContentsForPython()
    let contents = join(getline(1,'$'), "\n")
    let res = ''
    for l in split(contents, '\n')
        if len(l)
            let res = res . l . ';'
        endif
    endfor
    let res = '"' . escape(res, '"') . '"'
    return res
endfunction
noremap <f4> :!python -c <c-r>=GetContentsForPython()<cr><cr>
This gets the contents of the current buffer and replaces the newlines with semicolons, so you can execute it with
python -c "print 'hello'"
There may be better ways of accomplishing this, but this seems to work for me.