I have a slight problem. I am using a piece of software that provides a command taking two inputs: maf2hal inputfile outputfile.
I need to call this command from a Python script. The script asks the user for the path of the input file and the path of the output file and stores them in two variables. The problem is that when I call maf2hal with the two variable names as the arguments, the error I get is "cannot locate file".
Is there a way around this? Here's my code:
folderfound = "n" # looping condition
while (folderfound == "n"):
path = raw_input("Enter path of file to convert (with the extension) > ")
if not os.path.exists(path):
print "\tERROR! file not found. Maybe file doesn't exist or no extension was provided. Try again!\n"
else:
print "\tFile found\n"
folderfound = "y"
folderfound = "y" # looping condition
while (folderfound == "y"):
outName = raw_input("Enter path of output file to be created > ")
if os.path.exists(outName):
print "\tERROR! File already exists \n\tEither delete the existing file or enter a new file name\n\n"
else:
print "Creating output file....\n"
outputName = outName + ".maf"
print "Done\n"
folderfound = "n"
hal_input = outputName #inputfilename, 1st argument
hal_output = outName + ".hal" #outputfilename, 2nd argument
call("maf2hal hal_input hal_output", shell=True)
This is wrong:
call("maf2hal hal_input hal_output", shell=True)
It should be:
call(["maf2hal", hal_input, hal_output])
Otherwise you're giving "hal_input" as the actual file name, rather than using the variable.
You should not use shell=True unless absolutely necessary, and in this case it is not only unnecessary, it is pointlessly inefficient. Just call the executable directly, as above.
For bonus points, use check_call() instead of call(): the former checks the return code and raises an exception if the program fails, whereas call() does not, so errors may go unnoticed.
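A minimal sketch of that fix, assuming hal_input and hal_output already hold the two paths from the question:

from subprocess import check_call

# Each list item is passed to maf2hal as a separate argument; no shell involved.
# check_call() raises CalledProcessError if maf2hal exits with a non-zero status.
check_call(["maf2hal", hal_input, hal_output])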
There are a few problems. Your first reported error was that the call to the shell can't find the maf2hal program - that sounds like a path issue. You need to verify that the command is in the path of the shell that is being created.
Second, your call is passing the literal words "hal_input" and "hal_output". You need to build the command string first so that it contains the values of those variables:
cmd = "maf2hal {0} {1}".format(hal_input, hal_output)
call(cmd, shell=True)
Your code is literally trying to pass a file called hal_input, not the contents of your variable with the same name. It looks like you're using the subprocess module to execute, so you can change it to call(["maf2hal", hal_input, hal_output]) to use the contents of the variables (drop shell=True here; it isn't needed when the arguments are given as a list).
At the end of your code:
call("maf2hal hal_input hal_output", shell=True)
you are literally passing the words hal_input and hal_output in that string, not the paths stored in those variables. You need to build the command string from your variables first, either by concatenating them or by using .join(), e.g.:
call("maf2hal " + hal_input + " " + hal_output, shell=True)
or
call(" ".join(["maf2hal", hal_input, hal_output]), shell=True)
I am reading in a variable from an external JSON .env file (below):
{
    "response-dashboard-repo": "/Users/derekm/BGGoPlan Home/99.0 Repo/Response/response-dashboard",
    "responsemobile-repo": "/Users/derekm/BGGoPlan Home/99.0 Repo/BGGoPlan-Lite/NON-SSO-Response/bg3-lite-2020"
}
The problem is that both values contain spaces (e.g. 99.0 Repo).
I want to use this value in a subprocess command, but Python keeps breaking it up based on spaces.
import json
import subprocess

# load environment data
with open(envPath, 'r', encoding='utf-8') as envFile:
    envData = json.load(envFile)

repo_location = envData['response-dashboard-repo']

# Note: repo_location is read in from the env file above
subprocess.run(repo_location + '/scripts/buildDashboards.pl', shell=True)
But when I run my script, Python keeps saying:
/bin/sh: /Users/derekm/BGGoPlan: No such file or directory
Can someone help please?
Pass it as a list with one item. And you don't need a shell with this command.
subprocess.run([repo_location + '/scripts/buildDashboards.pl'])
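A slightly fuller sketch, reusing repo_location from the question; the check=True flag is my addition so a failing script raises an error instead of passing silently:

import subprocess

# The single-item list passes the whole path as one argument, so the spaces
# in repo_location are preserved; no shell and no quoting needed.
subprocess.run([repo_location + '/scripts/buildDashboards.pl'], check=True)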
I figured it out:
# repo_location is read in from the env file; escape the spaces so the shell does not split the path
repo_location = repo_location.replace(" ", "\\ ")
subprocess.run(repo_location + '/scripts/buildDashboards.pl', shell=True)
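If you do need shell=True, a more robust alternative to hand-escaping the spaces is shlex.quote(), which quotes the whole path for the shell; this is a general suggestion, not something from the answers above:

import shlex
import subprocess

# shlex.quote() wraps the path in quotes that /bin/sh accepts as a single word.
cmd = shlex.quote(repo_location + '/scripts/buildDashboards.pl')
subprocess.run(cmd, shell=True)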
I'm still a Python noob, but I figured that instead of checking a checksum manually I would make a quick program so it would take less time whenever I had to do it (also as practice). So I wrote this (excuse the extra useless lines and bad naming in my code; I was trying to pinpoint what I was doing wrong):
import subprocess

FileLocation = input("Enter File Location: ")
Garbage1 = str(input("Enter First Checksum: "))
Garbage2 = str(subprocess.call(['sha256sum', FileLocation]))
Garbage3 = Garbage2.split(None, 1)

if Garbage1 == Garbage3[0]:
    print("all good")
else:
    print("Still Not Working!!!")
When I run this code, the second checksum keeps the file path attached at the end, because that is how the Linux command prints it. I tried getting rid of it in various ways with .split(), but when I ran the code it was still there. I also tried adding the file path to the end of the first checksum as a test, but that wouldn't add it either.
I do know for a fact that the checksums match.
Any idea what's wrong? Any help would be appreciated.
From the docs, subprocess.call does: "Run command with arguments. Wait for command to complete or timeout, then return the returncode attribute." You can verify this in the Python shell by entering help(subprocess.call), or by searching for the subprocess module at https://docs.python.org.
Your code converts the integer return code to a string, not the checksum. Other functions in subprocess capture the process's stdout, which is where sha256sum sends its checksum. The captured stdout is a bytes object that needs to be decoded to a string.
import subprocess

FileLocation = input("Enter File Location: ")
Garbage1 = str(input("Enter First Checksum: "))
Garbage2 = subprocess.check_output(['sha256sum', FileLocation]).decode()
Garbage3 = Garbage2.split(None, 1)

if Garbage1 == Garbage3[0]:
    print("all good")
else:
    print("Still Not Working!!!")
I am trying to split a file into a number of parts via a Python script.
Here is my snippet:
import subprocess

def bashCommandFunc(commandToRun):
    process = subprocess.Popen(commandToRun.split(), stdout=subprocess.PIPE)
    output = process.communicate()
    return output

filepath = "/Users/user/Desktop/TempDel/part-00000"
numParts = "5"
splitCommand = "split -l$((`wc -l < " + filepath + "`/" + numParts + ")) " + filepath
splitCommand evaluates to:
'split -l$((`wc -l < /Users/user/Desktop/TempDel/part-00000`/5)) /Users/user/Desktop/TempDel/part-00000'
If I run this command on a terminal, it splits the file as it's supposed to, but it fails for the above defined subprocess function.
I have tested the function for other generic commands and it works fine.
I believe the backtick character (`) might be the issue.
What is the workaround to get this command to work?
Are there better ways to split a file into "n" parts from Python?
Thanks
You'll have to let Python run this line via a full shell, rather than trying to run it as a command. You can do that by adding the shell=True option and not splitting your command. But you really shouldn't do that if any part of the command may be influenced by users (huge security risk).
You could do this in a safer way by first calling wc, getting the result, and then calling split, or even implement the whole thing in pure Python instead of calling out to other commands.
What happens now is that you're calling split with -l$((`wc as its first parameter, -l as its second, and so on.
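A sketch of the safer two-step approach, reusing filepath and numParts from the question; the rounding-up of the line count is my own detail:

import subprocess

# Count the lines first, without a shell.
total_lines = int(subprocess.check_output(['wc', '-l', filepath]).split()[0])
lines_per_part = (total_lines + int(numParts) - 1) // int(numParts)  # round up

# Then call split directly with the computed line count; no backticks needed.
subprocess.check_call(['split', '-l', str(lines_per_part), filepath])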
I'm trying to make a Python script that searches for words in files.
If I pass txt it will only look in files with the .txt extension, but I want to pass * as an argument to search in every file.
if sys.argv[4] == "*":
doesn't work, and if I try
print sys.argv[4]
it prints the name of the script
find.py
but not in the same way as
print sys.argv[0]
which returns
./find.py
So, has someone already had this problem and, of course, solved it?
Your shell attaches meaning to * as well. You need to escape it when calling your script to prevent the shell from expanding it:
python find.py \*
sys.argv[0] is the exact name used to run the script. That can be a relative path (./find.py, ../bin/find.py) or an absolute path, depending on how it was invoked. Use os.path.abspath() to normalize it.
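A sketch of how the script might use the argument once it arrives unexpanded; the glob usage here is my illustration, not part of the original answer:

import glob
import sys

pattern = sys.argv[4]            # e.g. "*" or "txt", unexpanded thanks to the escaping above
if pattern == "*":
    files = glob.glob("*")       # every file in the current directory
else:
    files = glob.glob("*." + pattern)
print files                      # Python 2 print statement, matching the question's code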
I have a .jar archive that loads a file and then does some things with it and writes it to the disk again.
If I call this .jar directly from the command prompt, everything works. But when I try to do it from within Python, I get the following error:
Input file ("C:\xxx.txt") was not found or was not readable.
This is my Python code:
import sys, os, subprocess

if os.path.isdir(sys.argv[1]):
    for file in os.listdir("."):
        print(" ".join(['java', '-jar', sys.argv[2], 'd', "\"" + os.path.abspath(file) + "\"", "\"" + os.path.join(os.path.join(os.path.abspath(os.path.dirname(file)), "output"), file) + "\""]))
        subprocess.call(['java', '-jar', sys.argv[2], 'd', "\"" + os.path.abspath(file) + "\"", "\"" + os.path.join(os.path.join(os.path.abspath(os.path.dirname(file)), "output"), file) + "\""])
When I copy the printed statement into the command line, the .jar executes perfectly; everything works. I tried running cmd as an admin, but that didn't help.
The problem is the extra quotes you're adding. When you pass subprocess a list of args, it already quotes them appropriately; if you quote them yourself, it'll end up quoting your quotes, so instead of passing an argument that, when unquoted, means C:\xxx.txt, you'll be passing an argument that, when unquoted, means "C:\xxx.txt", which is not a valid pathname.
The rule of thumb for Windows* is: If you know exactly what each argument should be, pass them as a list, and don't try to quote them yourself; if you know exactly what the final command-line string should be, pass it as a string, and don't try to break it into a list of separate arguments yourself.
* Note that this is only for Windows. On POSIX, unless you're using shell=True, you should basically never use a string.
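A minimal sketch of the fix for the loop in the question, with the manual quotes removed (same variables and path construction as the original):

import os
import subprocess
import sys

if os.path.isdir(sys.argv[1]):
    for file in os.listdir("."):
        out_path = os.path.join(os.path.abspath(os.path.dirname(file)), "output", file)
        # Each list item reaches java as exactly one argument, even if the
        # path contains spaces, so no extra quoting is needed.
        subprocess.call(['java', '-jar', sys.argv[2], 'd', os.path.abspath(file), out_path])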