how to write argparse for jupyter - python

How can I write this argparse code in jupyter notebook?
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to input image")
ap.add_argument("-y", "--yolo", required=True,
                help="base path to YOLO directory")
ap.add_argument("-c", "--confidence", type=float, default=0.5,
                help="minimum probability to filter weak detections")
ap.add_argument("-t", "--threshold", type=float, default=0.3,
                help="threshold when applying non-maxima suppression")
args = vars(ap.parse_args())

To execute your Jupyter notebook from the command line and pass arguments to it, you can use a tool like papermill.
The GitHub repository below has detailed documentation on how it can be used:
https://github.com/nteract/papermill
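A minimal sketch of what that can look like (the notebook name, parameter names, and paths below are hypothetical): instead of argparse options, papermill reads values from a notebook cell tagged "parameters" and lets you override them from the command line.

# cell tagged "parameters" in yolo_detect.ipynb (hypothetical notebook name)
image = "images/example.jpg"
yolo = "yolo-coco"
confidence = 0.5
threshold = 0.3

Then the notebook can be executed from the shell with new values:

papermill yolo_detect.ipynb output.ipynb -p image images/dog.jpg -p yolo yolo-coco -p confidence 0.6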

Related

Arg parser error: args = vars(ap.parse_args()) exception occurred : python

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to input image")
ap.add_argument("-d", "--dataset", required=True,
                help="path to dataset")
ap.add_argument("-m", "--model", required=True,
                help="path to Caffe pre-trained model")
ap.add_argument("-l", "--labels", required=True,
                help="path to ImageNet labels (i.e., syn-sets)")
args = vars(ap.parse_args())
and I'm getting the output as
usage: train.py [-h] -i IMAGE -d DATASET -m MODEL -l LABELS
ipykernel_launcher.py: error: the following arguments are required: -i/--image, -p/--prototxt, -m/--model, -l/--labels.
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
You get this error because you are running your script without passing the required arguments you defined.
You will not get the error if you run the script like this:
python script_name.py -i image_path -d data_path -m model_path -l label_path
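If the code has to stay inside a Jupyter/ipykernel session (which is why the usage line shows ipykernel_launcher.py), a common alternative, not part of the answer above, is to hand parse_args an explicit list of arguments; the paths below are placeholders:

args = vars(ap.parse_args([
    "-i", "path/to/image.jpg",        # placeholder paths
    "-d", "path/to/dataset",
    "-m", "path/to/model.caffemodel",
    "-l", "path/to/labels.txt",
]))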

While Executing code on google colab error regarding the argparse.ArgumentParser()

I am not able to solve this error. Is it that we cannot use argparse on Google Colab, or is there some alternative to it?
The code is:
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--data", required=True, help="CSV file with quotes to run the model")
    parser.add_argument("-m", "--model", required=True, help="Model file to load")
    parser.add_argument("-b", "--bars", type=int, default=50, help="Count of bars to feed into the model")
    parser.add_argument("-n", "--name", required=True, help="Name to use in output images")
    parser.add_argument("--commission", type=float, default=0.1, help="Commission size in percent, default=0.1")
    parser.add_argument("--conv", default=False, action="store_true", help="Use convolution model instead of FF")
    args = parser.parse_args()
    prices = data.load_relative(args.data)
    env = environ.StocksEnv({"TEST": prices}, bars_count=args.bars, reset_on_close=False, commission=args.commission,
                            state_1d=args.conv, random_ofs_on_reset=False, reward_on_close=False, volumes=False)
The error is:
usage: ipykernel_launcher.py [-h] -d DATA -m MODEL [-b BARS] -n NAME
[--commission COMMISSION] [--conv]
ipykernel_launcher.py: error: the following arguments are required: -d/--data, -m/--model, -n/--name
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2
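No answer is quoted for this question, but two standard workarounds exist (suggestions, not from the original thread; the file name and paths are hypothetical). Either save the code as a .py file and run it from a Colab cell with a shell command, so argparse receives real command-line arguments:

!python run_model.py -d quotes.csv -m model.dat -n test-run

or pass the argument list directly in the notebook:

args = parser.parse_args(["-d", "quotes.csv", "-m", "model.dat", "-n", "test-run"])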

Image manipulation dlib argparse.ArgumentParser() problem?

We are working on a project with my friend, but we still haven't gotten past a big problem. We tried many things but could not fix it. The problem is related to argparse.ArgumentParser().
The error:
usage: detect_drowsiness.py [-h] -p SHAPE_PREDICTOR [-a ALARM] [-w WEBCAM]
detect_drowsiness.py: error: the following arguments are required: -p/--shape-predictor
The code:
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to facial landmark predictor")
ap.add_argument("-a", "--alarm", type=str, default="",
                help="path alarm .WAV file")
ap.add_argument("-w", "--webcam", type=int, default=0,
                help="index of webcam on system")
args = vars(ap.parse_args())
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
Files in the directory: shape_predictor_68_face_landmarks.dat and detect_drowsiness.py (the script name)
Why does this problem exist?
If you notice,
ap.add_argument("-p", "--shape-predictor", required=True,
                help="path to facial landmark predictor")
the -p/--shape-predictor argument is required, so you should pass it when you run the Python file:
python detect_drowsiness.py -p shape_predictor_68_face_landmarks.dat
or
python detect_drowsiness.py --shape-predictor shape_predictor_68_face_landmarks.dat
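A side note that is not part of the original answer: argparse converts --shape-predictor into the dest shape_predictor (hyphens become underscores), which is why the code above can look the value up with an underscore:

# the key uses an underscore even though the option uses a hyphen
predictor = dlib.shape_predictor(args["shape_predictor"])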

How to check cascade and shape predictor is in file same directory?

I'm using a Raspberry Pi 3 in an OpenCV Python environment but getting this error. Is there an app to open .xml and .dat files on the Raspberry Pi 3?
: error: the following arguments are required: -c/--cascade, -p/--shape-predictor
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--cascade", required=True, help = "path to where the face cascade resides")
ap.add_argument("-p", "--shape-predictor", required=True, help="path to facial landmark predictor")
ap.add_argument("-a", "--alarm", type=int, default=0, help="boolean used to indicate if TrafficHat should be used")
args = vars(ap.parse_args())
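No answer is quoted here. To check that the cascade and shape-predictor files actually sit next to the script, a small sketch like the following could be used (a suggestion, not from the original post; the cascade file name is hypothetical):

import os

# directory containing the running script
script_dir = os.path.dirname(os.path.abspath(__file__))

for name in ("haarcascade_frontalface_default.xml",      # hypothetical cascade name
             "shape_predictor_68_face_landmarks.dat"):
    path = os.path.join(script_dir, name)
    print(name, "found" if os.path.isfile(path) else "MISSING", "at", path)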

View the complete args string while using pdb?

I need to view the complete string in the argparse result args.networkModel.
Original code is from https://github.com/cmusatyalab/openface/blob/master/demos/classifier.py
I only have access to the pdb in the terminal.
When I try print(args.networkModel) I get
/home/aanilil/ml/openface/demos/../models/openargs.networkModelface/nn4.small2.v1.t7
Is there a way to print the complete string?
I have also tried pprint(args.networkModel)
Where I get the output
*** TypeError: 'module' object is not callable
The original parser is constructed like so:
parser = argparse.ArgumentParser()
parser.add_argument(
    '--dlibFacePredictor',
    type=str,
    help="Path to dlib's face predictor.",
    default=os.path.join(
        dlibModelDir,
        "shape_predictor_68_face_landmarks.dat"))
parser.add_argument(
    '--networkModel',
    type=str,
    help="Path to Torch network model.",
    default=os.path.join(
        openfaceModelDir,
        'nn4.small2.v1.t7'))
parser.add_argument('--imgDim', type=int,
                    help="Default image dimension.", default=96)
parser.add_argument('--cuda', action='store_true')
parser.add_argument('--verbose', action='store_true')
subparsers = parser.add_subparsers(dest='mode', help="Mode")
trainParser = subparsers.add_parser('train',
                                    help="Train a new classifier.")
trainParser.add_argument('--ldaDim', type=int, default=-1)
trainParser.add_argument(
    '--classifier',
    type=str,
    choices=[
        'LinearSvm',
        'GridSearchSvm',
        'GMM',
        'RadialSvm',
        'DecisionTree',
        'GaussianNB',
        'DBN'],
    help='The type of classifier to use.',
    default='LinearSvm')
trainParser.add_argument(
    'workDir',
    type=str,
    help="The input work directory containing 'reps.csv' and 'labels.csv'. Obtained from aligning a directory with 'align-dlib' and getting the representations with 'batch-represent'.")
inferParser = subparsers.add_parser(
    'infer', help='Predict who an image contains from a trained classifier.')
inferParser.add_argument(
    'classifierModel',
    type=str,
    help='The Python pickle representing the classifier. This is NOT the Torch network model, which can be set with --networkModel.')
inferParser.add_argument('imgs', type=str, nargs='+',
                         help="Input image.")
inferParser.add_argument('--multi', help="Infer multiple faces in image",
                         action="store_true")
args = parser.parse_args()
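No answer is quoted here, but two points follow from standard pdb and pprint behaviour (not taken from the original thread): the TypeError appears because pprint names the module, not the function, and pdb already has commands that print the full value:

(Pdb) p args.networkModel                  # prints repr() of the whole string
(Pdb) pp args.networkModel                 # pdb's built-in pretty-print command
(Pdb) print(repr(args.networkModel))
(Pdb) import pprint; pprint.pprint(args.networkModel)   # call the function inside the module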
