I am running a sed command inside Python using os.system. Below is the code:
os.system("sed -i /solid/s/Visualization Toolkit generated SLA File/chestwall/g mesh1.stl")
The name to be replaced has spaces in it. Also, in the filename at the end, i.e. mesh1.stl, the 1 needs to be a variable. How do I do that?
Firstly, for this code I am getting the error:
sed: -e expression #1, char 22: unterminated s command
I tried putting a / at the end.
Second, I need mesh1 to be a variable set on a previous line. Say mesh1 is a, and a changes every time. How do I write that?
Make sure that the sed expression is enclosed in either double or single quotes, and then use "+" to concatenate the strings before passing them to os.system:
import os
var=1
os.system("sed -i 's/solid/s/Visualization Toolkit generated SLA File/chestwall/g' mesh" + var + ".stl")
The function os.system() is now considered to be superseded by
subprocess.call().
Would you please try the following:
import subprocess
a = 'mesh1'
cmd = ['sed', '-i', '/solid/s/Visualization Toolkit generated SLA File/chestwall/g', '{0}.stl'.format(a)]
subprocess.call(cmd)
By passing the command as a list rather than a single string, you can separate the arguments explicitly and avoid shell quoting issues altogether.
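If the mesh number comes from a loop, as the question suggests, the same list form extends naturally; a sketch with an assumed, purely illustrative range:
import subprocess

# apply the same replacement to mesh1.stl, mesh2.stl, mesh3.stl (the range is illustrative)
for i in range(1, 4):
    cmd = ['sed', '-i',
           '/solid/s/Visualization Toolkit generated SLA File/chestwall/g',
           'mesh{0}.stl'.format(i)]
    subprocess.call(cmd)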
My goal is to execute the following bash command in Python and store its output:
echo 'sudo ./run_script.sh -dates \\{\\'2017-11-16\\',\\'2017-11-29\\'\\}'|sed 's;\\\\;\\;'
When I run this command in bash, the output is: sudo ./run_script.sh -dates \{\'2019-10-05\',\'2019-10-04\'\}
My initial idea was to replace the double backslash with a single backslash in Python. As ridiculous as it seems, I couldn't do it in Python (only with print() is the output what I would like, but I can't store the output of print(), and str() doesn't convert \\ to \). So I decided to do it in bash.
import subprocess
t= 'some \\ here'
cmd = "echo \'"+ t+"\'|sed 's;\\\\;\\;'"
ps = subprocess.run(cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
ps.stdout
Out[6]: b"sed: -e expression #1, char 7: unterminated `s' command\n"
Running Python 3.6.8 on Ubuntu 18
Try using subprocess.check_output instead. You're also missing an extra backslash for every backslash in your command.
import subprocess
command = "echo 'some \\\\here'|sed 's;\\\\\\\\;\\\\;'"
output = subprocess.check_output(command, shell=True).decode()
print(output) # prints your expected result "some \here"
After re-reading your question, I think I understand what you actually wanted:
a = r'some \here'
print(a) #some \here
Again, raw string literals...
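To make the print()/repr() point concrete, a small sketch: the string already contains a single backslash, and only its repr shows two.
s = 'some \\ here'   # escaped backslash in an ordinary literal
t = r'some \ here'   # raw string literal
print(s)             # some \ here
print(t)             # some \ here
print(s == t)        # True: both hold exactly one backslash
print(repr(s))       # 'some \\ here' -- the repr doubles it, the stored value does not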
I am trying to incorporate this sed command to remove the last comma in a JSON file.
sed -i -e '1h;1!H;$!d;${s/.*//;x};s/\(.*\),/\1 /' file.json
When I run this on the command line, it works fine. When I try to run it as a subprocess like this, it doesn't work:
Popen("sed -e '1h;1!H;$!d;${s/.*//;x};s/\(.*\),/\1 /' file.json",shell=True).wait()
What am I doing wrong?
It doesn't work because when you write \1 in a normal Python string, Python interprets it as the character \x01, so your regular expression doesn't work / is illegal.
That is already better:
check_call(["sed","-i","-e",r"1h;1!H;$!d;${s/.*//;x};s/\(.*\),/\1 /","file.json"])
because splitting the command into a real list and passing your regex as a raw string has a much better chance of working. And check_call is what you need to simply run a process when you don't care about capturing its output.
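A quick way to see the \1 problem described above: in an ordinary string literal, \1 is the octal escape for the byte 0x01, while a raw string keeps the backslash.
print(len("\1"), repr("\1"))    # 1 '\x01'  -- sed never sees a backslash here
print(len(r"\1"), repr(r"\1"))  # 2 '\\1'   -- the backslash survives in the raw string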
But I would go one step further: since Python is good at processing files, and your problem is rather simple, I would do it in pure Python, fully portable, with no need for sed:
# read the file
with open("file.json") as f:
    contents = f.read().rstrip().rstrip(",")  # strip the last newline/spaces, then the last comma
# write back the file
with open("file.json", "w") as f:
    f.write(contents)
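If you want to be sure the rewrite actually produced valid JSON, an optional sanity check (assuming the file is supposed to parse as JSON once the trailing comma is gone):
import json

# raises a ValueError (JSONDecodeError) if the file is still invalid
with open("file.json") as f:
    json.load(f)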
In general, you might try the following solutions:
Pass the raw string, as was mentioned
Escape the '\' character.
This code also does what you need:
Popen("sed -e '1h;1!H;$!d;${s/.*//;x};s/\(.*\),/\\1 /' file.json", shell=True).wait()
or
try:
    check_call(["sed", "-i", "-e", "1h;1!H;$!d;${s/.*//;x};s/\(.*\),/\\1 /", "file.json"])
except:
    pass  # or handle the error
I am trying to execute the following command in python using plumbum:
sort -u -f -t$'\t' -k1,1 file1 > file2
However, I am having issues passing the -t$'\t' argument. Here is my code:
from plumbum.cmd import sort
separator = r"-t$'\t'"
print separator
cmd = (sort["-u", "-f", separator, "-k1,1", "file1"]) > "file2"
print cmd
print cmd()
I can see the problems right away in the output of print separator and print cmd:
-t$'\t'
/usr/bin/sort -u -f "-t\$'\\t'" -k1,1 file1 > file2
The argument is wrapped in double quotes.
An extra \ before $ and \t is inserted.
How should I pass this argument to plumbum?
You may have stumbled into limitations of the command-line escaping.
I could make it work using the subprocess module, passing a real tab character literally:
import subprocess
# pass a real tab character and let Python handle the redirection to file2
with open("file2", "w") as out:
    p = subprocess.Popen(["sort", "-u", "-f", "-t\t", "-k1,1", "file1"], stdout=out)
    p.wait()
Also, a short full-Python solution that does what you want:
with open("file1") as fr, open("file2", "w") as fw:
    fw.writelines(sorted(set(fr), key=lambda x: x.split("\t")[0]))
The full-Python solution doesn't behave exactly like sort when it comes to uniqueness: if two lines have the same first field but different second fields, sort -u keeps only one of them, whereas the set keeps both.
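If you need behaviour closer to sort -u -f -k1,1 (one line per first field, compared case-insensitively), here is a sketch that keeps the first occurrence per key; it is an approximation, not byte-for-byte sort semantics:
seen = {}
with open("file1") as fr:
    for line in fr:
        key = line.split("\t")[0].lower()   # -f: fold case when comparing keys
        seen.setdefault(key, line)          # -u: keep only one line per key
with open("file2", "w") as fw:
    fw.writelines(sorted(seen.values(), key=lambda x: x.split("\t")[0].lower()))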
EDIT: unchecked on my side, but you just confirmed that simply tweaking your plumbum code with:
separator = "-t\t"
works as well. Still, of the three options I'd recommend the full-Python solution, since it doesn't involve an external process and is therefore more pythonic and portable.
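For completeness, the question's own plumbum pipeline with only the separator changed:
from plumbum.cmd import sort

separator = "-t\t"   # a literal tab instead of the shell-only $'\t' form
cmd = (sort["-u", "-f", separator, "-k1,1", "file1"]) > "file2"
cmd()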
I need to write a Python script in which I call a few awk commands.
#!/usr/bin/python
import os, sys
input_dir = '/home/abc/data'
os.chdir(input_dir)
#wd=os.getcwd()
#print wd
os.system ("tail -n+2 ./*/*.tsv|cat|awk 'BEGIN{FS="\t"};{split($10,arr,"-")}{print arr[1]}'|sort|uniq -c")
It gives an error in line 8: SyntaxError: unexpected character after line continuation character
Is there a way I can get the awk command to work within the Python script?
Thanks
You have both types of quotes in that string, so use triple quotes around the whole thing:
>>> x = '''tail -n+2 ./*/*.tsv|cat|awk 'BEGIN{FS="\t"};{split($10,arr,"-")}{print arr[1]}'|sort|uniq -c'''
>>> x
'tail -n+2 ./*/*.tsv|cat|awk \'BEGIN{FS="\t"};{split($10,arr,"-")}{print arr[1]}\'|sort|uniq -c'
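The same triple-quoted string can then be handed to os.system unchanged; a minimal sketch:
import os

# both the awk single quotes and the inner double quotes survive inside the triple quotes;
# note that Python turns \t into a real tab here, which awk accepts as FS just the same
cmd = '''tail -n+2 ./*/*.tsv|cat|awk 'BEGIN{FS="\t"};{split($10,arr,"-")}{print arr[1]}'|sort|uniq -c'''
os.system(cmd)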
You should use subprocess instead of os.system:
import subprocess
COMMAND = "tail -n+2 ./*/*.tsv|cat|awk 'BEGIN{FS=\"\t\"};{split($10,arr,\"-\")}{print arr[1]}'|sort|uniq -c"
subprocess.call(COMMAND, shell=True)
As TehTris has pointed out, the arrangement of quotes in the question breaks the command string into multiple strings. Pre-formatting the command and escaping the double-quotes fixes this.
I don't know whether this is a problem with Python or with the shell (zsh on Linux). I have an argument like "#xyz" that starts with a "#":
python the_script.py first_argument #second_argument third_arg
I tried to escape the # with \ or \\, or to wrap it in "", but the program doesn't start. If I drop the # from #second_argument, everything is OK.
Perhaps the "#" is a glob character in zsh, expanding to all symbolic links in the current directory. Try escaping it with "##"?
Try running the argument list with echo, i.e.:
echo the_script.py first_argument #second_argument third_arg
That way, you can figure out if it was expanded or passed as-is to the script.
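You can also print the argument list inside the_script.py itself to see exactly what arrives; a hypothetical two-line check:
import sys

print(sys.argv[1:])   # shows whether '#second_argument' reached the script intact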