Python command line argument option -d

I am trying to figure out the Python command-line option -d. The documentation says: "-d  Turn on parser debugging output."
But when I test it, it is treated as nothing but an argument:
main.py
import sys


def main(argv):
    print(argv)


if __name__ == "__main__":
    main(sys.argv[1:])
Execute in cmd:
$ python main.py -d /path/to/file
O/P:
['-d', 'C:/Program Files/Git/path/to/file']
Here -d is printed as an argument. Can anybody tell me the purpose of the -d option and how to use it?

-d is a Python interpreter option, not one of your script's options. So the proper invocation is
$ python -d main.py /path/to/file
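Note that -d only enables interpreter-internal parser debugging, and depending on how your Python was compiled it may produce no visible output at all. If what you actually want is a -d flag for your own script (an assumption on my part), argparse is the usual way to define one; a minimal sketch:
import argparse

parser = argparse.ArgumentParser()
# Hypothetical script-level flag; this -d belongs to the script, not to the interpreter.
parser.add_argument("-d", "--debug", action="store_true", help="enable debug output")
parser.add_argument("path", help="path to a file")
args = parser.parse_args()
print(args.debug, args.path)
Invoked as python main.py -d /path/to/file, argparse consumes the -d itself instead of leaving it in the positional arguments.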

Related

Mounting a directory within a docker build

I'm wondering how to use a simple script with a docker container.
The script is:
example python script
# Example python script
import argparse
import pathlib


def run(
    *,
    input: pathlib.Path | str,
    output: pathlib.Path | str,
) -> None:
    pathlib.Path(output).write_text(pathlib.Path(input).read_text().upper())


def main() -> int:
    desc = "example script"
    parser = argparse.ArgumentParser(
        description=desc,
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument(
        "-i",
        "--input",
        help="input file",
        required=True,
    )
    parser.add_argument(
        "-o",
        "--output",
        help="output file",
    )
    parser.add_argument(
        "-x",
        "--overwrite",
        help="Whether to overwrite previously created file.",
        action="store_true",
    )
    args = parser.parse_args()
    if not pathlib.Path(args.input).exists():
        raise FileNotFoundError(f"input file {args.input} not found")
    if not args.output:
        raise argparse.ArgumentError(None, "output not given")
    if pathlib.Path(args.output).exists() and not args.overwrite:
        raise FileExistsError(f"{args.output} already exists.")
    run(input=args.input, output=args.output)


if __name__ == "__main__":
    raise SystemExit(main())
The script works fine on my system (without docker).
example docker file
The Dockerfile is:
FROM python:3.10.6-bullseye
COPY . .
ENTRYPOINT ["python", "example.py"]
This works (ish) after the following:
# build
docker build -t demo .
# run
docker run demo --help
Which outputs:
usage: example.py [-h] -i INPUT [-o OUTPUT] [-x]
example.
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input file
-o OUTPUT, --output OUTPUT
output file
-x, --overwrite Whether to overwrite previously created file.
But I'm not sure how to use it with the -i and -o arguments.
what I'd like to do
I'd like to be able to do the following:
echo "text" > input.txt
# Create output from input
docker run demo -i input.txt -o output.txt
# Create output from input and say it's ok to overwrite
docker run demo -i input.txt -o output.txt -x
And after this there should be an output.txt file containing TEXT.
Error
I've tried to do this with the above command, and it doesn't work.
Eg:
echo "this" > input.txt
docker run demo -i input.txt -o output.txt -x
After this there is no output.txt file containing THIS.
Attempted solution (--mount within the shell command)
Using the following seems to work, but it feels like a lot to put in a shell command:
docker run \
--mount type=bind,source="$(pwd)",target=/check \
--workdir=/check demo:latest \
-i input.txt -o output.txt -x
Is there a way to do the --mount within the dockerfile itself?
I am doing a similar thing: running a compiler inside a Docker container.
The image gets rebuilt whenever there is a new version of the compiler or of the underlying image.
The container runs whenever I want to compile something. Here too I have to mount source and target directories, but my docker command looks smaller than yours:
docker run --rm -v /sourcecode:/project:ro -v /compiled:/output:rw -v cache:/cache:rw compilerimagename
All the rest is defined within the image.
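Applied to the demo image from the question, the same pattern could look like the sketch below; /data is an arbitrary container path I chose, and the file arguments are given as paths under the mount so no --workdir change is needed:
docker run --rm -v "$(pwd)":/data demo -i /data/input.txt -o /data/output.txt -x
As for putting the --mount into the Dockerfile itself: a bind mount is a run-time property of the container, so it has to be given to docker run; the Dockerfile can only fix things such as the working directory or a VOLUME declaration.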

Launch a script from Python

I have the code that you can see below. I'm trying to go to the directory first and then, with the pipe "|", make a backup of the MongoDB database. The problem is that when I launch the script, the console returns:
mongodump is not an internal or external command.
On the other hand, if I launch the same line
cd C:\\...\\MongoDB\\Server\\3.6\\bin | mongodump -h ip -d database name -o C:\\Users\\...\\Desktop\\BackUpMongo
in my system cmd it works without problems. Any idea?
import sys
import os

if __name__ == '__main__':
    try:
        os.system('cd C:\\...\\MongoDB\\Server\\3.6\\bin | mongodump -h ip -d database name -o C:\\Users\\...\\Desktop\\BackUpMongo')
        print("Copia de seguridad finalizada")
    except:
        print("Error during data base backup")
    sys.exit(0)
Use os.chdir() instead of os.system('cd ...'): the directory change made by cd does not carry over to the mongodump part of the command, so the shell started from Python never finds mongodump.
import os
os.chdir('C:\\...\\MongoDB\\Server\\3.6\\bin')
os.system('mongodump -h ip -d database name -o C:\\Users\\...\\Desktop\\BackUpMongo')
print("Copia de seguridad finalizada")

Multiple bash command in Nomad

I have an application that runs multiple Python scripts in order. I can run them in docker-compose as follows:
command: >
bash -c "python -m module_a &&
python -m module_b &&
python -m module_c"
Now I'm scheduling the job in Nomad, and I added the command below to the configuration for the Docker driver:
command = "/bin/bash"
args = ["-c", "python -m module_a", "&&","
"python -m module_b", "&&",
"python -m module_c"]
But Nomad seems to escape the &&, just runs the first module, and issues exit code 0. Is there any way to run a multiline command similar to docker-compose?
The following is guaranteed to work with the exec driver:
command = "/bin/bash"
args = [
  "-c",                                                  ## next argument is a shell script
  "for module; do python -m \"$module\" || exit; done",  ## this is that script.
  "_",                                                   ## passed as $0 to the script
  "module_a", "module_b", "module_c"                     ## passed as $1, $2, and $3
]
Note that only a single argument is passed as a script -- the one immediately following -c. Subsequent arguments are arguments to that script, not additional scripts or script fragments.
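For reference, outside Nomad those args correspond to the plain shell invocation below, which shows how bash -c assigns the remaining arguments to $0, $1, and so on:
bash -c 'for module; do python -m "$module" || exit; done' _ module_a module_b module_c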
Even simpler, you could run:
command = "/bin/bash"
args = ["-c", "python -m module_a && python -m module_b && python -m module_c" ]

Running security import command from Python has different behaviour than command line

I am trying to import a pkcs#12 certificate into OS X Keychain using the following command:
security import filename -k ~/Library/Keychains/login.keychain -P password -f pkcs12
In python I use subprocess like this:
if os.path.isfile(_file) and platform.system() == 'Darwin':
    keychain = os.path.expanduser('~/Library/Keychains/login.keychain')
    command_line = 'security import {} -k {} -P {} -f pkcs12'.format(_file, keychain, password)
    logger.info('Importing {} into OS X KeyChain.'.format(_file))
    return subprocess.call(shlex.split(command_line))
However I get this error message:
security: SecKeychainItemImport: One or more parameters passed to a function were not valid.
I even tried using shell=True, but then I got the security usage text back, as if I had passed some wrong argument.
Usage: security [-h] [-i] [-l] [-p prompt] [-q] [-v] [command] [opt ...]
...
...
However, when running it from the command line, the command works as expected:
security import <filename> -k <home>/Library/Keychains/login.keychain -P DTWLDHPYNBWBJB3 -f pkcs12
1 identity imported.
1 certificate imported.
Any idea? Is there a restriction when running security from a non-interactive console?
Is there any Python library to achieve the same?
Regards
This was actually due to another problem.
I was using a tmpfile which was not being flushed or closed.
While the script was running, the function could not find any content in that file.
Once the script ended, the file (which had delete=False) was flushed, which is why the same command worked fine from the command line.
The solution was to set bufsize=0 :(
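To illustrate the buffering pitfall (a reconstruction, since the original code is not shown, and the variable names are placeholders): data written to a NamedTemporaryFile sits in Python's buffer until it is flushed, so an external command that opens the file by name can see it as empty. Flushing before calling security avoids that:
import subprocess
import tempfile

# delete=False keeps the file on disk after it is closed, as in the question.
with tempfile.NamedTemporaryFile(suffix='.p12', delete=False) as tmp:
    tmp.write(p12_bytes)  # p12_bytes: the certificate data (placeholder)
    tmp.flush()           # without this, 'security import' may read an empty file
    subprocess.call(['security', 'import', tmp.name,
                     '-k', keychain, '-P', password, '-f', 'pkcs12'])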

Passing arguments to python interpreter from bash script

Sorry this is a very newbie question, but I just can't seem to get it to work.
In my bash script, I have
python=/path/to/python
script=$1
exec $python $script "$@"
How would I pass an argument, say -O, to the Python interpreter? I have tried:
exec $python -O $script "$@"
and I have also tried changing the python variable to "/path/to/python -O", as well as passing -O to the script, but every time I do any of these three, I get import errors for modules that import fine when I remove the -O.
So my question is: how do I tell the Python interpreter to run with the -O argument from a bash script?
Thanks.
You should shift your positional parameters to the left by 1 so that the script name, which is the first argument, is excluded from the arguments passed on to the Python script.
#!/bin/sh
python=/path/to/python
script=$1; shift
exec "$python" -O "$script" "$#"
Then run the script as bash script.sh your_python_script arg1 arg2 ... or sh script.sh your_python_script arg1 arg2 ....
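For example, with hypothetical names, sh script.sh app.py input.csv ends up running /path/to/python -O app.py input.csv; without the shift, the expansion would be /path/to/python -O app.py app.py input.csv, with the script name duplicated into its own argument list.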
