I am trying to write a bash script to use in Python code.
Multi-line bash command (this works perfectly when run directly from the terminal):
mydatefile="/var/tmp/date"
while IFS= read line
do
echo $line
sh /opt/setup/Script/EPS.sh $(echo $line) | grep "WORD" | awk -F ',' '{print $6}'
sleep 1
done <"$mydatefile"
My single-line conversion:
mydatefile="/var/tmp/date;" while IFS= read line do echo $line; sh /opt/setup/Script/EPS.sh $(echo $line) | grep "WORD" | awk -F ',' '{print $6}'; sleep 1; done <"$mydatefile";
ERROR
-bash: syntax error near unexpected token `done'
Missing a ; (fatal syntax error):
while IFS= read line; do echo ...
# ^
# here
More in depth:
Combine grep + awk into a single awk command:
mydatefile="/var/tmp/date"
while IFS= read line; do
    echo "$line"
    sh /opt/setup/Script/EPS.sh "$line" |
        awk -F ',' '/WORD/{print $6}'
    sleep 1
done < "$mydatefile"
Use more quotes!
Learn how to quote properly in shell; it's very important:
"Double quote" every literal that contains spaces/metacharacters and every expansion: "$var", "$(command "$var")", "${array[@]}", "a & b". Use 'single quotes' for code or literal $'s: 'Costs $5 US', ssh host 'echo "$HOSTNAME"'. See:
http://mywiki.wooledge.org/Quotes
http://mywiki.wooledge.org/Arguments
http://wiki.bash-hackers.org/syntax/words
Finally:
mydatefile="/var/tmp/date"; while IFS= read line; do echo "$line"; sh /opt/setup/Script/EPS.sh "$line" | awk -F ',' '/WORD/{print $6}'; sleep 1; done < "$mydatefile"
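If you end up assembling such a command string from Python, the standard-library shlex.quote applies exactly this style of quoting for you. A small sketch (the EPS.sh path is the one from the question, the sample value is made up):

```python
import shlex

# shlex.quote wraps a value in single quotes whenever it contains
# whitespace or shell metacharacters, and leaves safe values untouched:
line = "some value; with $metacharacters"
cmd = "sh /opt/setup/Script/EPS.sh {}".format(shlex.quote(line))
print(cmd)  # sh /opt/setup/Script/EPS.sh 'some value; with $metacharacters'
```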
One way to do this conversion is to paste the script onto the command line and then retrieve it from the shell history, though this may depend on which command-line editor you have. Note that you do need a semicolon before do, but NOT after it. You are punished for too many semicolons as well as too few.
Another way is to fold the script onto one line yourself, line by line, testing as you go.
The binary-chop approach is to fold the first half, test, then undo or continue.
Once you have it down to one line that works, you can paste it into Python.
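That last step can be sketched like this (a self-contained toy: a temporary file stands in for /var/tmp/date, and a bare echo loop stands in for the EPS.sh pipeline):

```python
import os
import subprocess
import tempfile

# Stand-in for /var/tmp/date so the sketch runs anywhere.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("2016-01-01\n2016-01-02\n")
    datefile = f.name

# The folded one-liner is handed to the shell in one piece; with shell=True
# the while loop, the pipes and the redirection all work as they do on the
# command line.
one_liner = 'while IFS= read line; do echo "$line"; done < "{}"'.format(datefile)
result = subprocess.run(one_liner, shell=True, capture_output=True, text=True)
print(result.stdout, end="")  # 2016-01-01 and 2016-01-02, one per line
os.unlink(datefile)
```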
Related
I need to use the echo and awk commands in a Python script. Can you help me?
I have a bash script; here is an example:
while read LINE
do
    BOM1=`echo "$LINE" | awk -F $'\t' '{print $1}'`
    BOM2=`echo "$LINE" | awk -F $'\t' '{print $2}'`
done < file.txt
I am trying to rewrite the same thing as a Python script:
import subprocess
with open(PT_tmp_bom_list, "r+") as Tmp_list_file:
    for line in Tmp_list_file:
        cmd = "echo {} | awk -F '\t' '{print $1}'".format(line)
        subprocess.call(cmd, shell=True)
I have several questions:
line is a string, but I cannot output it. I tried:
cmd="echo {} ".format(line)
and it says: The system cannot find the file specified. That means I can't get the line to awk.
The line should look like:
<deliverydir>/bom/bom_list.txt**TAB**<bom_list_dir>/bom_list.txt**TAB**Internal User
The second question: if I get a line from echo, how should I use the awk command on that line?
You definitely do not need external programs for this; Python easily subsumes the functionality of Awk and then some.
with open(PT_tmp_bom_list, "r+") as Tmp_list_file:
    for line in Tmp_list_file:
        bom1, bom2, _ = line.rstrip('\n').split('\t')
Take out the , _ if the lines have exactly two fields.
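For instance, with the sample line from the question (the placeholder names are the question's own):

```python
# The sample line, with real tab characters between the three fields:
line = "<deliverydir>/bom/bom_list.txt\t<bom_list_dir>/bom_list.txt\tInternal User\n"
bom1, bom2, user = line.rstrip('\n').split('\t')
print(bom1)  # <deliverydir>/bom/bom_list.txt
print(user)  # Internal User
```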
I've got a problem with executing a shell command in Python. Here is the part of my code that causes the error:
p = subprocess.Popen(["cat input.txt |apertium -d. kaz-morph|\
sed -e 's/\$\W*\^/$\n^/g'| cut -f2 -d'/'|cut -f1 -d '<'|\
awk '{print tolower($0)}'|sort -u>output.txt"], shell=True, stdout=f1)
Still getting the error: unterminated 's' command.
Hope you will help me because I couldn't solve it for 10 days :(
p.s. sorry for my english
'\n' must be written '\\n'; otherwise Python interprets it as a literal line break, which leaves sed with the unterminated expression "cat input.txt |apertium -d. kaz-morph|sed -e 's/\$\W*\^/$".
Alternatively, mark the string as raw: r"cat input.txt |apertium ...".
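A quick check that the two fixes are equivalent, using a fragment of the sed expression from the question:

```python
# Doubling the backslashes and using a raw string yield the same characters:
escaped = "s/\\$\\W*\\^/$\\n^/g"
raw = r"s/\$\W*\^/$\n^/g"
print(escaped == raw)  # True

# In a plain string, \n is a single real line break, which is what cuts the
# sed program short and triggers "unterminated `s' command":
print(len("\n"), len(r"\n"))  # 1 2
```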
I am currently working with pxssh for Python and it's working like a charm; however, it seems that it doesn't process piped (|) commands and instead interprets them as separate ones.
At the moment I have the following:
s = pxssh.pxssh()
s.login(sshHost, sshUsername, sshPassword)
s.sendline("cat somefile | grep something | cut -d \' -f 4")
It works properly with any commands that are not piped, however I need to send a piped one at the moment.
Is there a way around this with pxssh, or can you please suggest a way to implement another solution for this command?
It's not clear to me why pxssh would behave as you describe. Are you sure the issue is not that your \' is interpreted by Python, whereas you want it to be interpreted by the remote shell? That would be better spelled like so:
s.sendline("cat somefile | grep something | cut -d \\' -f 4")
You certainly do have alternatives. One would be to use a single command instead of a pipeline, such as:
s.sendline("sed -n '/something/ { s/\\([^,]*,\\)\\{3\\}\\([^,]*\\),.*/\\2/; p }' somefile")
As a special case of that, you could launch a subshell in which to run your pipeline:
s.sendline('''bash -c "cat somefile | grep something | cut -d \\' -f 4"''')
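You can check outside of pxssh what each Python literal actually puts on the wire:

```python
# In a double-quoted Python string, \' collapses to a bare quote, so the
# remote shell would see an unterminated ' after cut -d:
sent_wrong = "cut -d \' -f 4"
print(sent_wrong)  # cut -d ' -f 4

# With \\' Python emits backslash + quote, which the remote shell reads as
# one escaped, literal quote character:
sent_right = "cut -d \\' -f 4"
print(sent_right)  # cut -d \' -f 4
```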
I have a bash script that extracts bugs from a CSV file and imports them into Bugzilla using PyBugz.
The following sequences are used:
description=$(echo "$line" |cut -f5 -d ';')
bugz -d 3 -b http://bugzilla/bugzilla/xmlrpc.cgi -u "$user" -p "$pass" post --product "$prod" --component "$compo" --title "$title" --description "$description" --op-sys "$ops" --platform "$platf" --priority "$prio" --severity "$sever" --alias "$alias" --assigned-to "$assign" --cc "$ccl" --version "$ver" --url "$url" --append-command "$appen" --default-confirm "y"
but a description line containing "blablabla \n blablabla", including the newline, is being recognized as
"Description : blablabla n blablabla"
If I export a bug and dump the output into a text file, pybugz writes 0x0a as the newline character. How can I make pybugz recognize my \n sequence as 0x0a?
If description contains the characters \n, and you want to convert that into an actual newline, then you'll have to do some work:
bugz ... --description "$(echo -e "$description")" ...
That will expand other escape sequences as well; see https://www.gnu.org/software/bash/manual/bashref.html#index-echo
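If the conversion ends up on the Python side of the pipeline instead, the minimal equivalent for just this case is a plain replace (unlike echo -e, it expands nothing but \n):

```python
description = r"blablabla \n blablabla"      # backslash + n: two characters
expanded = description.replace("\\n", "\n")  # turn the sequence into 0x0a
print(expanded.count("\n"))  # 1
```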
I got it.
The data was captured like this:
while read line ; do
    description=$(echo "$line" | cut -f5 -d ';')
done < csvfile
However, read (without -r) had already turned the \n sequence into a plain n,
so whatever I did after that was bound to fail.
I did it in a rather inelegant way now, but it works like a charm:
lines=$(cat csvexport | wc -l)
for (( lineno=1 ; lineno<=lines ; lineno++ )); do
    description=$(cat csvexport | sed -n "${lineno}p" | cut -f5 -d ';')
done
and everything is fine ;-)
Thanks anyway for the help.
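For what it's worth, reading the CSV from Python sidesteps the backslash-eating entirely, since nothing interprets the line on the way in (field layout as in the script above; the sample values are made up):

```python
# Python's file iteration leaves backslashes alone, unlike bash's `read`
# without -r; splitting on ';' then extracts field 5 just like cut -f5 -d ';'
line = r"field1;field2;field3;field4;blablabla \n blablabla"
description = line.split(';')[4]
print(description)  # blablabla \n blablabla  (backslash preserved)
```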
I have 50000 files and each one has 10000 lines. Each line is in the form:
value_1 (TAB) value_2 (TAB) ... value_n
I wanted to remove specific values from every line in every file (I used cut to remove values 14-17) and write the results to a new file.
To do that for one file, I wrote this code:
file=nameOfFile
newfile=$file".new"
i=0
while read line
do
    let i=i+1
    echo line: $i
    a=$i"p"
    lineFirstPart=$(sed -n -e $a $file | cut -f 1-13)
    #echo lineFirstPart: $lineFirstPart
    lineSecondPart=$(sed -n -e $a $file | cut -f 18-)
    #echo lineSecondPart: $lineSecondPart
    newline=$lineFirstPart$lineSecondPart
    echo $newline >> $newfile
done < $file
This takes ~45 seconds for one file, which means all of them will take about 45 × 50000 s = 625 h ≈ 26 days!
Well, I need something faster, e.g. a solution that reads the whole file once and applies both cut ranges in a single pass, or something like that, I guess.
Solutions in Python are also accepted and appreciated, but bash scripting is preferable!
The entire while loop can be replaced with one line:
cut -f1-13,18- "$file" > "$newfile"
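Should you ever want the Python route anyway, the same field-dropping is one split/join per line (a sketch; field numbers are 1-based to match cut):

```python
def drop_fields(line, keep_to=13, resume_at=18):
    """Mimic `cut -f1-13,18-` on a tab-separated line (drops fields 14-17)."""
    fields = line.rstrip('\n').split('\t')
    # cut counts fields from 1, Python indexes from 0:
    return '\t'.join(fields[:keep_to] + fields[resume_at - 1:])

sample = '\t'.join('value_{}'.format(i) for i in range(1, 21))
print(drop_fields(sample))  # value_1..value_13 followed by value_18..value_20
```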