Send a --port argument to a Python script

I have this code:
#!flask/bin/python
import subprocess
import types
from flask import Flask, jsonify, make_response, abort, request, url_for

app = Flask(__name__)

@app.route('/v1/kill/process/<processName>')
def killProcess(processName):
    pid = subprocess.Popen(["pidof", processName], stdout=subprocess.PIPE)
    pid = pid.stdout.read()
    if len(pid) != 0:
        for p in pid.decode("utf-8").split(" "):
            p = int(p)
            subprocess.call(["kill", "-9", str(p)])
        return make_response(jsonify({'error': False}), 200)
    else:
        return make_response(jsonify({'error': "Service Not Found"}), 400)

if __name__ == '__main__':
    app.run(debug=True, port=8888)
I want to run this code like this:
python fileName.py --port 8888
How can I send the --port argument to the script so that the port is not hard-coded?

There are many problems in your script, but I'm just going to answer your question.
The argparse module makes it easy to write user-friendly command-line interfaces. The program defines what arguments it requires, and argparse will figure out how to parse those out of sys.argv. The argparse module also automatically generates help and usage messages and issues errors when users give the program invalid arguments.
Python argparse
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Description of your program')
    parser.add_argument('-p', '--port', help='Port number', required=True)
    args = vars(parser.parse_args())
    app.run(debug=True, port=args['port'])
Note that args['port'] will be a string.
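If you'd rather receive the port already converted to an integer, argparse's type=int does the conversion during parsing; a minimal sketch:

```python
import argparse

# Minimal sketch: parse --port as an integer with a default value.
parser = argparse.ArgumentParser(description='Run the Flask app')
parser.add_argument('-p', '--port', type=int, default=8888,
                    help='port number to listen on')
args = parser.parse_args(['--port', '9999'])  # use parser.parse_args() for the real sys.argv
print(args.port)  # 9999, as an int
```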

When Python runs, the command-line arguments are passed in as the list sys.argv.
You can handle them in one of two ways: use the built-in argparse module, or access sys.argv directly.
Using argparse:
import argparse

parser = argparse.ArgumentParser(description='Run flask program')
parser.add_argument(
    '--port',
    action='store',
    dest='port',
    type=int,
    default=8888,
    metavar='PORTNUM',
    help='set port number')
args = parser.parse_args()
print(args.port)
Using sys.argv directly:
import sys

port = 8888  # default value
reversed_args = list(reversed(sys.argv[1:]))
while reversed_args:
    if reversed_args[-1] == '--port':
        reversed_args.pop()
        if not reversed_args:
            sys.exit(127)  # --port given without a value
        port = int(reversed_args.pop())
    else:
        sys.exit(127)  # unrecognized argument
print(port)

Related

How do I make my virtual CAN slave/client respond to messages sent by the user?

My group and I are new to the CAN protocol in general, so if we've asked a dumb question I apologise in advance.
We have tried to make a connection between the CAN master and slave code; the code below is the client we have written. The problem is that we do get a connection, but it somehow resets right after. What are we doing wrong?
This is the client we run now:
#!/usr/bin/env python
from argparse import ArgumentParser, ArgumentDefaultsHelpFormatter
import canopen
import time
import logging
import os
import signal

logging.basicConfig(level=logging.DEBUG)

# Use terminal to add arguments and parse them
def get_options():
    parser = ArgumentParser(description="Canopen instrument simulator",
                            formatter_class=ArgumentDefaultsHelpFormatter)
    parser.add_argument("-e", "--edsfile", required=True,
                        help="The EDS file for the instrument")
    parser.add_argument("-i", "--interface", required=True,
                        help="The name of the canbus interface")
    parser.add_argument("-n", "--nodeid", type=int, required=True,
                        help="The canopen node ID")
    return parser.parse_args()

def initialize_CAN(args):
    config = vars(args)
    print(config)
    network = canopen.Network()
    network.connect(bustype='socketcan', channel=args.interface)
    can_slave = network.create_node(args.nodeid, args.edsfile)
    print(f"Canopen Slave with id {can_slave.id} has started and is connected to {args.interface}")
    # Finished with initialization so set to pre-operational state
    can_slave.nmt.state = 'PRE-OPERATIONAL'

# --------------------------------------------------------
def main():
    args = get_options()
    # Opens a new terminal that runs candump
    os.system(f"gnome-terminal -e 'bash -c \"candump -tz {args.interface}\"'")  # -l logging sets -s 2 by default, -s silentmode 0 (OFF)
    logging.basicConfig(level=logging.INFO)
    initialize_CAN(args)
    print("-----Terminate by pressing ctrl+c-----")
    try:
        while True:
            time.sleep(50.0 / 1000.0)  # Idles the program
    except KeyboardInterrupt:
        os.kill(os.getppid(), signal.SIGHUP)  # close application

if __name__ == '__main__':
    main()

How to call a Python script that parses arguments from a Jupyter notebook

I have a python script that I defined:
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--task', type=str)
    parser.add_argument('--scale', type=int)
    args = parser.parse_args()
    ...  # Do things with my arguments

if __name__ == '__main__':
    main()
And I call this script on command line doing:
python myscript.py --task mytask --scale 1
I would like to call this script from a Jupyter notebook. Is there a way to do this, parsing the arguments, without modifying my script at all? I.e., doing something that looks like this:
import myscript
myscript.main(--task=mytask, scale=1)
P.S.: I tried using a magic line such as %run (which could probably work in my case as well), but I had trouble collecting the return values of my script.
You can pass args to parser.parse_args:
# myscript.py
import argparse

def main(args=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--task', type=str)
    parser.add_argument('--scale', type=int)
    args = parser.parse_args(args=args)
    print("task is: ", args.task)
    print("scale is: ", args.scale)

if __name__ == "__main__":
    main()
Output from cli:
python3 myscript.py --task aaa --scale 10
# task is: aaa
# scale is: 10
Output from Python:
import myscript
myscript.main("--task aaa --scale 10".split())
# task is: aaa
# scale is: 10
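One hedge on the .split() call: it breaks if any argument value contains spaces. shlex.split handles shell-style quoting; a minimal sketch with the same options as myscript.py above:

```python
import argparse
import shlex

# Same options as the myscript.py example
parser = argparse.ArgumentParser()
parser.add_argument('--task', type=str)
parser.add_argument('--scale', type=int)

# shlex.split respects quotes, so values containing spaces survive intact
args = parser.parse_args(shlex.split('--task "my task" --scale 10'))
print(args.task)   # my task
print(args.scale)  # 10
```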

Python - how can I run a separate module (not function) as a separate process?

tl;dr: How can I programmatically execute a Python module (not function) as a separate process from a different Python module?
On my development laptop, I have a 'server' module containing a bottle server. In this module, the name==main clause starts the bottle server.
@bt_app.post("/")
def server_post():
    << Generate response to 'http://server.com/' >>

if __name__ == '__main__':
    serve(bt_app, host='localhost', port=8080)
I also have a 'test_server' module containing pytests. In this module, the name==main clause runs pytest and displays the results.
def test_something():
    _rtn = some_server_function()
    assert _rtn == desired

if __name__ == '__main__':
    _rtn = pytest.main([__file__])
    print("Pytest returned: ", _rtn)
Currently, I manually run the server module (starting the web server on localhost), then I manually start the pytest module, which issues HTTP requests to the running server module and checks the responses.
Sometimes I forget to start the server module. No big deal, but annoying. So I'd like to know whether I can programmatically start the server module as a separate process from the pytest module (just as I do manually now) so I don't forget to start it.
Thanks
Here is my test case directory tree:
test
├── server.py
└── test_server.py
server.py starts a web server with Flask.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!'

if __name__ == '__main__':
    app.run()
test_server.py makes a request to test it.
import sys
import requests
import subprocess
import time

p = None  # server process

def start_server():
    global p
    sys.path.append('/tmp/test')
    # here you may want to check whether the server
    # is already started, and then skip this function
    kwargs = {}  # here you can pass other args as needed
    p = subprocess.Popen(['python', 'server.py'], **kwargs)

def test_function():
    response = requests.get('http://localhost:5000/')
    print('This is response body: ', response.text)

if __name__ == '__main__':
    start_server()
    time.sleep(3)  # wait for the server to start
    test_function()
    p.kill()
Then you can run python test_server.py to start the server and run the test cases.
PS: subprocess.run() needs Python 3.5+; the code above uses Popen, which also works on older versions.
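A small hedge on the Popen call above: spawning the bare name 'python' depends on PATH and may pick a different interpreter than the one running the tests. sys.executable avoids that; a minimal sketch:

```python
import subprocess
import sys

# sys.executable is the interpreter running this script, so the child
# uses the same Python regardless of what 'python' resolves to on PATH.
p = subprocess.Popen([sys.executable, '-c', 'print("child ok")'],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()
print(out.decode().strip())  # child ok
```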
import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting", name)
    time.sleep(2)
    logging.info("Thread %s: finishing", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")
    threads = list()
    for index in range(3):
        logging.info("Main    : create and start thread %d.", index)
        x = threading.Thread(target=thread_function, args=(index,))
        threads.append(x)
        x.start()
    for index, thread in enumerate(threads):
        logging.info("Main    : before joining thread %d.", index)
        thread.join()
        logging.info("Main    : thread %d done", index)
With threading you can run multiple threads at once (note that these are threads within a single process, not separate processes)!
Wim basically answered this question. I looked into the subprocess module. While reading up on it, I stumbled on the os.system function.
In short, subprocess is a highly flexible and capable module for running programs; os.system, on the other hand, is much simpler, with far fewer options.
Just running a python module is simple, so I settled on os.system.
import os

server_path = "python ../src/server.py"  # note: python -m expects a module name, not a file path
os.system(server_path)
Wim, thanks for the pointer. Had it been a full-fledged answer I would have upvoted it. Redo it as a full-fledged answer and I'll do so.
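One caveat worth noting: os.system blocks until the command finishes, so launching a long-running server this way would hang the caller. subprocess.Popen returns immediately; a minimal sketch:

```python
import subprocess
import sys

# Popen returns right away; the child keeps running in the background.
p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(0.2)'])
print("parent keeps running while the child sleeps")
p.wait()  # later: wait for the child, or p.kill() when done
```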
Async to the rescue.
import gevent
from gevent import monkey, spawn, sleep
monkey.patch_all()
from gevent.pywsgi import WSGIServer

@bt_app.post("/")
def server_post():
    << Generate response to 'http://server.com/' >>

def test_something():
    _rtn = some_server_function()
    assert _rtn == desired
    print("Pytest returned: ", _rtn)
    sleep(0)

if __name__ == '__main__':
    spawn(test_something)  # runs async
    server = WSGIServer(("0.0.0.0", 8080), bt_app)
    server.serve_forever()

Redirect output from one Python script to another with argparse

This is probably a silly question, but I've been struggling with this for a while now and I could not make it work. Basically, what I am trying to do is use one script's output, together with its command-line arguments, as input for another script that also takes arguments. This is my approach so far; I feel I am missing something, though.
Let's suppose the following script1.py
#!/usr/bin/env python3
import argparse
import sys

def create_arg_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("-n", "--number", type=int, help="numbers range")
    return parser.parse_args()

def main_a():
    nrange = parsed_args.number
    l = [i for i in range(nrange)]
    return l

if __name__ == "__main__":
    parsed_args = create_arg_parser()
    print(main_a())
and script2.py
#!/usr/bin/env python3
import os
import subprocess
import argparse

def create_arg_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("-o", "--outdir", type=str, help="outdir path")
    return parser.parse_args()

def main_b():
    # the -n argument here should be inherited from script1.py and not manually set to 5
    process = subprocess.Popen(["python", "script1.py", "-n", "5"],
                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = process.communicate()  # let it run the list from script1.py
    with open(os.path.join(parsed_args.outdir, "list.txt"), "w") as f:
        f.write(output[0].decode('utf-8'))

if __name__ == "__main__":
    parsed_args = create_arg_parser()
    main_b()
This actually works (kind of), apart from the fact that the list output from script1.py is written alongside the -n argument. What I am trying to do here is use the list created by script1.py as input in script2.py, while passing the command-line arguments only once. So, for instance, use or inherit the arguments given to script1.py in script2.py. I know this could all be done in one script, but this is just an example; in the real problem I am trying to write a *gml graph.
Any idea what I am missing here? Or is there any workaround or a simpler alternative to my approach?
While you can inherit arguments from another parser, for your use case it would be simpler to just define the argument explicitly, since you have to pass the resulting value to the other script.
#!/usr/bin/env python3
import os
import subprocess
import argparse

def parse_arguments():
    parser = argparse.ArgumentParser()
    parser.add_argument("-n", "--number", type=int, help="numbers range")
    parser.add_argument("-o", "--outdir", type=str, help="outdir path")
    return parser.parse_args()

def main_b(args):
    # the -n value is forwarded to script1.py instead of being hard-coded;
    # subprocess arguments must be strings, hence str(args.number)
    process = subprocess.Popen(["python", "script1.py", "-n", str(args.number)],
                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = process.communicate()  # let it run the list from script1.py
    with open(os.path.join(args.outdir, "list.txt"), "w") as f:
        f.write(output[0].decode('utf-8'))

if __name__ == "__main__":
    parsed_args = parse_arguments()
    main_b(parsed_args)
The alternative is that your first script has to expose its argument parser as a module-level attribute, and your second script would have to import the first. (That would, however, bypass the need to use subprocess to run the first script.)
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument("-n", "--number", type=int, help="numbers range")

def main_a(n):
    l = [i for i in range(n)]
    return l

if __name__ == "__main__":
    nrange = parser.parse_args().number
    print(main_a(nrange))
and
import os
import argparse
import script1

def parse_arguments():
    # conflict_handler='resolve' avoids a clash with the -h/--help option
    # that script1.parser already defines
    parser = argparse.ArgumentParser(parents=[script1.parser],
                                     conflict_handler='resolve')
    parser.add_argument("-o", "--outdir", type=str, help="outdir path")
    return parser.parse_args()

def main_b(parsed_args):
    x = script1.main_a(parsed_args.number)
    with open(os.path.join(parsed_args.outdir, "list.txt"), "w") as f:
        f.write(str(x))

if __name__ == "__main__":
    parsed_args = parse_arguments()
    main_b(parsed_args)

python - adding an argument to an execution script

Consider the following code in my bin (filename: emp_dsb):
import sys
from employee_detail_collector.EmpCollector import main

if __name__ == '__main__':
    sys.exit(main())
On the command line I execute "emp_dsb", so the code above runs the main function from "employee_detail_collector.EmpCollector".
Code in employee_detail_collector.EmpCollector's main():
def main():
    try:
        path = const.CONFIG_FILE
        empdsb = EmpDashboard(path)
    except SONKPIExceptions as e:
        logger.error(e.message)
    except Exception as e:
        logger.error(e)
Now I need to add an argument to emp_dsb: for example, "emp_dsb create_emp" should invoke a new set of functionality for creating an employee, which also needs to be added to the same main().
Please have a look and let me know your ideas; if anything is unclear, tell me and I will try to make it clearer.
The standard way to use command-line arguments is to do this:
import sys

if __name__ == '__main__':
    print(sys.argv)
Read up on the documentation of sys.argv.
Then there are fancier ways, like the built-in argparse and the third-party docopt or click.
I would personally use the argparse module.
Here is a dead simple code sample:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("echo")
args = parser.parse_args()
print(args.echo)
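For the "emp_dsb create_emp" style of invocation in the question, argparse subparsers are the usual fit. A minimal sketch: the create_emp subcommand name comes from the question, while the handler wiring and return values are illustrative assumptions:

```python
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(prog='emp_dsb')
    sub = parser.add_subparsers(dest='command')
    sub.add_parser('create_emp', help='create an employee')  # subcommand from the question

    args = parser.parse_args(argv)
    if args.command == 'create_emp':
        return 'creating employee'  # stand-in for the real creation logic
    return 'dashboard'              # default behaviour when no subcommand is given

print(main(['create_emp']))  # creating employee
print(main([]))              # dashboard
```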
