Research is at the bottom, read before -1'ing... Thanks.
I have to write a Python script that runs SQL queries. I made a main class and called it SQLQuery; each SQLQuery instance represents a query. The script must be structured like this:
class SQLQuery(object):
    def __init__(self, string_myQuery)...

instance1 = SQLQuery(SQLQuery1)...
instance2 = SQLQuery(SQLQuery2)...
As a user requirement, the instances must be in the same file as the class (so I can't just make each instance a main and execute each one as a separate file), and each instance must be executed with a Linux console command. I can execute the entire script with a simple python SQLQuery.py, but I need to execute each instance separately. The queries will be executed every day, automatically, so I don't need a terminal UI tree. It should be executed with a command similar to this:
python SQLQuery.py -inst1
will execute instance1.
python SQLQuery.py -inst2
will execute instance2.
I have researched how to execute Python scripts with Linux commands, and most of the articles are about calling Linux commands from within a Python script. However, I found this article in the Python documentation. It suggests adding -m, so:
python SQLQuery.py -m inst1
This would let me set my main with a console command, but it doesn't work, since the instances aren't modules. And because the instances must be in the same file as the class, I can't just import them as modules when I execute SQLQuery.py with a console command.
Ignoring all the irrelevancies, it sounds like your problem is that you have a bunch of global objects named instance1, instance2, instance3, etc., and you want to call some method on one of them based on a command-line parameter whose value will be similar to, but not identical to, the instance names.
That's probably not a good idea… but it's not that hard:
if __name__ == '__main__':
    import sys
    inst = sys.argv[1]        # will be '-inst1', '-inst13', etc.
    inst_number = inst[5:]    # so '1', '13', etc.
    inst_name = 'instance' + inst_number
    instance = globals()[inst_name]
    instance.execute()
A much better way to do the same thing is to put the instance globals into a list or dict that you can index.
For example, let's say that instead of instance1, instance2, etc., you've got an instances dict, with instances['1'], instances['2'], etc. Now instead of this:
inst_name = 'instance' + inst_number
instance = globals()[inst_name]
instance.execute()
… you just do this:
instances[inst_number].execute()
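For illustration, a minimal sketch of that layout, with made-up queries and a print call standing in for the real execution:

import sys

class SQLQuery(object):
    def __init__(self, string_myQuery):
        self.query = string_myQuery
    def execute(self):
        print("running:", self.query)   # stand-in for the real database call

instances = {
    '1': SQLQuery("SELECT 1"),          # made-up queries; use your real ones
    '2': SQLQuery("SELECT 2"),
}

if __name__ == '__main__':
    inst_number = sys.argv[1][5:]       # strip the '-inst' prefix, as above
    instances[inst_number].execute()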
Also, instead of coming up with a command-line parameter that has extra stuff in it that you have to parse and throw away, and has no more meaning for a human reader than for your code, why not just take a number?
python myscript.py 12
Or, alternatively, use argparse to create an argument that can be used in all of the obvious ways:
python myscript.py --instance=12
python myscript.py --instance 12
python myscript.py -i12
python myscript.py -i 12
Either way, your code gets the string '12', which it can then use to look up the instance, as above.
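For illustration, a minimal argparse version of that, reusing the instances dict from the sketch above:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--instance', required=True,
                    help="number of the instance to execute, e.g. 12")
args = parser.parse_args()

instances[args.instance].execute()   # args.instance is the string '12'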
You have the wrong syntax for the -m option. Suppose you have the following file named foo.py:
import sys
print('First arg is:', sys.argv[1])
Then you would call it like this:
$ python -m foo bar
First arg is: bar
Note that the ".py" extension is omitted. You can then use the command line argument to decide which object to use or use the argparse or optparse module to handle the argument.
I'm working on cloning a Virtual Machine (VM) in a vCenter environment using this code. It takes command-line arguments for the name of the VM, the template, the datastore, etc. (e.g. $ clone_vm.py -s <host_name> -p <password> -nossl ....)
I have another Python file where I've been able to list the datastore volumes in descending order of free storage. I have stored the datastore with the maximum available storage in a variable ds_max. (Let's call this file ds_info.py.)
I would like to use the ds_max variable from ds_info.py as the value of the datastore command-line argument for clone_vm.py.
I tried importing the os module in ds_info.py and running os.system("python clone_vm.py ....arguments...") but it did not take the ds_max variable as an argument.
I'm new to coding and am not confident about changing clone_vm.py to take in the datastore with maximum free storage.
Thank you for taking the time to read through this.
I suspect there is something wrong in your os.system call, but you don't provide it, so I can't check.
Generally it is a good idea to use the current paradigm, and the received wisdom (TM) is that we use subprocess. See the docs, but the basic pattern is:
from subprocess import run
cmd = ["mycmd", "--arg1", "--arg2", "val_for_arg2"]
run(cmd)
Since this is just a list, you can easily drop arguments into it:
var = "hello"
cmd = ["echo", var]
run(cmd)
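In the question's terms, that might look roughly like this (the --datastore flag and the other argument values are assumptions; substitute whatever clone_vm.py actually accepts):

from subprocess import run

ds_max = "datastore1"   # in ds_info.py this would come from your free-space calculation
cmd = ["python", "clone_vm.py", "-s", "host_name", "-p", "password", "-nossl",
       "--datastore", ds_max]
run(cmd)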
However, if your other command is in fact a Python script, it is more normal to refactor that script so that the main functionality is wrapped in a function, called main by convention:
# script 2
...
def main(arg1, arg2, arg3):
    do_the_work()

if __name__ == "__main__":
    args = get_sys_args()  # dummy fn
    main(*args)
Then you can simply import script2 from script1 and run the code directly:
# script 1
from script2 import main

args = get_args()  # dummy fn
main(*args)
This is 'better' as it doesn't involve spawning a whole new python process just to run python code, and it generally results in neater code. But nothing stops you calling a python script the same way you'd call anything else.
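In the question's terms, the refactor would look roughly like this; the argument names and the bodies are placeholders, since the real clone_vm.py isn't shown:

# clone_vm.py (sketch)
def main(host, password, datastore):
    ...  # the existing cloning logic, using datastore directly

if __name__ == "__main__":
    ...  # parse sys.argv as before, then call main(host, password, datastore)

# ds_info.py (sketch)
from clone_vm import main

ds_max = ...  # the datastore with the most free space, as already computed
main("host_name", "password", ds_max)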
Let's say I have a large Python library of functions and I want these functions (or some large number of them) to be available as commands in Bash.
First, disregarding Bash command options and arguments, how could I get a function of a Python file containing a number of functions to run using a single word Bash command? I do not want to have the functions available via commands of a command 'suite'. So, let's say I have a function called zappo in this Python file (say, called library1.py). I would want to call this function using a single-word Bash command like zappo, not something like library1 zappo.
Second, how could options and arguments be handled? I was thinking that a nice way could be to capture all of the options and arguments of the Bash command and then use them within the Python functions using docopt parsing at the function level.
Yes, but the answer might not be as simple as you hope. No matter what you do, you're going to have to create something in your bash shell for each function you want to run. However, you could have a Python script generate aliases stored in a file which gets sourced.
Here's the basic idea:
#!/usr/bin/python
import sys
import __main__  # <-- This allows us to call methods in __main__
import inspect   # <-- This allows us to look at methods in __main__

########### Function/Class.Method Section ##############
# Update this with functions you want in your shell    #
#########################################################

def takesargs():
    # Just an example that reads args
    print(str(sys.argv))
    return

def noargs():
    # and an example that doesn't
    print("doesn't take args")
    return

########################################################

# Make sure there's at least 1 arg (since arg 0 will always be this file)
if len(sys.argv) > 1:
    # This fetches the function info we need to call it
    func = getattr(__main__, str(sys.argv[1]), None)
    if callable(func):
        # Actually call the function with the name we received
        func()
    else:
        print("No such function")
else:
    # If no args were passed, just output a list of aliases for this script
    # that can be appended to .bashrc or similar.
    funcs = inspect.getmembers(__main__, predicate=inspect.isfunction)
    for func in funcs:
        print("alias {0}='./suite.py {0}'".format(func[0]))
Obviously, if you're using methods in a class instead of functions in __main__, change the references from __main__ to your class, and change the predicate in the inspect call to inspect.ismethod; a sketch of that variant follows the sample output below. Also, you would probably want to use absolute paths for the aliases, etc.
Sample output:
~ ./suite.py
alias noargs='./suite.py noargs'
alias takesargs='./suite.py takesargs'
~ ./suite.py > ~/pyliases
~ echo ". ~/pyliases" >> ~/.bashrc
~ . ~/.bashrc
~ noargs
doesn't take args
~ takesargs blah
['./suite.py', 'takesargs', 'blah']
If you use the method I've suggested above, you can actually have your .bashrc run ~/suite.py > ~/pyliases before it sources the aliases from the file. Then your environment gets updated every time you log in/start a new terminal session. Just edit your python function file, and then . ~/.bashrc and the functions will be available.
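For the class-based variant mentioned above, one way to read that suggestion is to instantiate the class and look methods up on the instance, so that inspect.ismethod matches the bound methods; the class and method names here are only illustrative:

#!/usr/bin/python
import sys
import inspect

class Suite:
    def takesargs(self):
        print(str(sys.argv))

    def noargs(self):
        print("doesn't take args")

suite = Suite()

if len(sys.argv) > 1:
    func = getattr(suite, sys.argv[1], None)
    if callable(func):
        func()
    else:
        print("No such function")
else:
    # bound methods of the instance satisfy inspect.ismethod
    for name, _ in inspect.getmembers(suite, predicate=inspect.ismethod):
        print("alias {0}='./suite.py {0}'".format(name))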
I am using CentOS 7.0 and PyDev in Eclipse. I am trying to pass a variable from Python into a C shell script, but I am getting an error.
This is my Python script, named raw2waveconvert.py:
num = 10
print(num)
import subprocess
subprocess.call(["csh", "./test1.csh"])
Output/Error when I run the Python script:
10
num: Undefined variable.
The file test1.csh contains:
#!/bin/csh
set nvar=`/home/nishant/workspace/codec_implement/src/NTTool/raw2waveconvert.py $num`
echo $nvar
Okay, so apparently it's not so easy to find a nice and clear duplicate. This is how it's usually done: you either pass the value as an argument to the script, or via an environment variable.
The following example shows both ways in action. Of course you can drop whatever you don't like.
import subprocess
import shlex

var = "test"
env_var = "test2"
script = "./var.sh"

# prepare a command (append the variable to the script name)
command = "{} {}".format(script, var)

# prepare environment variables
# note: env= replaces the child's entire environment, not just adds test_var
environment = {"test_var": env_var}

# Note: shlex.split splits a textual command into a list suited for subprocess.call
subprocess.call(shlex.split(command), env=environment)
This is the corresponding shell script; from what I've read, addressing command-line arguments works the same way, so it should work whether bash or csh is set as the default shell.
var.sh:
#!/bin/sh
echo "I was called with a command line argument '$1'"
echo "Value of enviormental variable test_var is '$test_var'"
Test:
luk32$ python3 subproc.py
I was called with a command line argument 'test'
Value of environmental variable test_var is 'test2'
Please note that the Python interpreter needs appropriate access to the called script; in this case var.sh needs to be executable for the user luk32. Otherwise, you will get a Permission denied error.
I also urge you to read the docs on subprocess. Many other materials use shell=True; I won't discuss it, but I dislike and discourage it. The presented examples should work and be safe.
subprocess.call(..., env=dict(os.environ, num=str(num)))
The only way to do what you want here is to export/pass the variable value through the shell environment, which requires using the env= dictionary argument (note that environment values must be strings).
But it is more likely that what you should do is pass arguments to your script instead of assuming pre-existing variables. Then you would put num in the list argument to subprocess.call (probably better to use check_call unless you know the script is supposed to fail) and then use $1/etc. as normal.
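A minimal sketch of the argument-passing approach for this question, assuming test1.csh is changed to read its first positional argument, $1, instead of a preset $num:

import subprocess

num = 10
# the csh script reads this as $1 rather than expecting a pre-existing variable
subprocess.check_call(["csh", "./test1.csh", str(num)])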
I need to interpret a few files (scripts) with an embedded Python interpreter concurrently (to be more detailed, one script executes another script via Popen, and my app intercepts that and executes the script itself). I've found this is called a sub-interpreter, and I'm going to use it. But I've read that a sub-interpreter does not have sys.argv:
The new environment has no sys.argv variable
I need to pass argv anyway, so how can I do it?
You might find it easier to modify each of the scripts to follow this pattern:
def run(*posargs, **argdict):
    """
    This does the work and can be called with:
        import scriptname
        scriptname.run(someargs)
    """
    # Code goes here and uses posargs[n] where it would use sys.argv[n+1]

if __name__ == "__main__":
    import sys
    run(*sys.argv[1:])   # unpack so each argument becomes a separate posarg
Then your main script can just call each of the subscripts in turn by simply calling the run method.
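Calling it from the embedding application is then just an ordinary function call (scriptname and the argument values here are placeholders):

import scriptname
scriptname.run("first_arg", "second_arg")   # whatever would have been sys.argv[1:]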
You can use environment variables. Have the parent set them by updating the dict os.environ if it's in Python, or with setenv() if it's in C or C++, etc. Then the children can read os.environ to get whatever strings they need.
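A rough sketch of that idea; the variable name SCRIPT_ARGS and the use of JSON are just one convenient choice:

# parent process / embedding app
import os, json
os.environ["SCRIPT_ARGS"] = json.dumps(["logfile.txt", "verbose"])

# child script running in the sub-interpreter
import os, json
args = json.loads(os.environ.get("SCRIPT_ARGS", "[]"))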
I just want some ideas on how to do this...
I have a Python script that parses log files; I give it the log name as an argument, so when I want to run the script it looks like this: python myscript.py LOGNAME
What I'd like to do is have two scripts: one that contains the functions and another that has only the main function, but I don't know how to give the argument when I run it from the second script.
Here's my second script's code:
import sys
import os
path = "/myscript.py"
sys.path.append(os.path.abspath(path))
import myscript
mainFunction()
The error I have is:
script, name = argv
ValueError: need more than 1 value to unpack
Python (like most languages) shares command-line parameters across imports and includes: sys.argv is global to the process, so an imported module sees the same arguments as the top-level script.
Meaning that if you do:
python mysecondscript.py heeey
that will flow down into myscript.py as well.
So, check your arguments that you pass.
Script one
myscript = __import__('myscript')
myscript.mainfunction()
Script two
import sys

def mainfunction():
    print(sys.argv)
And do:
python script_one.py parameter
You should get:
["script_one.py", "parameter"]
You have several ways of doing it. For example, execfile (Python 2 only; in Python 3 the equivalent is exec(open('filename.py').read())):
>>> execfile('filename.py')
Check the following link:
How to execute a file within the python interpreter?
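Applied to the question above, one small sketch: set sys.argv before importing or exec'ing myscript.py, so that its script, name = argv line sees the log name (the filename is a placeholder):

import sys

sys.argv = ["myscript.py", "mylog.txt"]    # mylog.txt is a placeholder LOGNAME
import myscript                            # or: exec(open("myscript.py").read())
myscript.mainFunction()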