Possible duplicate: Calling an external command in Python
I would like to call various programs from my Python script: binary programs, but also other Perl/Python/Ruby scripts, e.g. wget, sqlmap and custom scripts.
The problem is that I would like the user to be able to change the parameters of the underlying program. Let's take wget as an example. Say I'm calling it like this (note that all three parameters are supplied dynamically):
wget www.google.com --user=user --password=pass
But I would also like the user to be able to add custom parameters to the wget command. I guess the best way would be to read them directly from a file, but I was wondering if something like this already exists so that I don't have to reimplement everything by hand.
Also keep in mind that this is not just one program; it could be up to 100 programs, maybe more. It needs to be extensible and not too complicated for the user to change.
Thanks
One quick example using subprocess.check_output
import subprocess

program = 'wget'
default_args = ['www.google.com']
user_args = []

subprocess.check_output([program] + default_args + user_args)
Just be very careful with this; do all the security checks before allowing any user to add parameters to a command.
You may also need shlex.split to split the user-supplied arguments before adding them to the subprocess call.
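For example (assuming the extra options arrive from the user as a single string; the flags shown are just an illustration):
import shlex

extra = '--tries=3 --no-check-certificate'   # e.g. read from input() or a config file
user_args = shlex.split(extra)               # ['--tries=3', '--no-check-certificate']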
If you want to have the defaults in external files you could do something like this:
with open('wget_defaults.txt') as i:
    default_args = i.read().split(',')
Hope it helps
Possible duplicate: Is there any way to pass values to input prompt to the python script which was called by other python script?
Is there a way to auto-select a user input option via Task Scheduler, or within the script itself? I'm using a colleague's code and it would save some time if this could be achieved.
Example code below where one of these options needs to be selected.
while True:
    choose_report = input(""" Please select the Report type
1) Weekly
2) Monthly
3) Yearly
4) Exit\n
Review type:> """)
I've tried sys.argv, but that means changing the script's variables, which I could do, but I was wondering if there is a quicker way that doesn't involve editing the script.
You can do this in a couple of different ways:
Pipe user inputs from a file through stdin. For example: my_script.py < inputs.txt. This will work without needing to change the original script, but is a bit less clear and more fragile if you ever want to change the behavior of the script in the future.
Refactor the script to use argparse, and don't ask for interactive input when running from the task scheduler. Then you can specify all of the choices as script arguments, e.g. my_script.py --report-type=monthly. This will take more work up front, but makes the script very easy to change in the future.
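A minimal sketch of the argparse approach (the option name and choices below are illustrative, not taken from the original script):
import argparse

parser = argparse.ArgumentParser(description='Generate a report.')
parser.add_argument('--report-type',
                    choices=['weekly', 'monthly', 'yearly'],
                    required=True,
                    help='which report to generate')
args = parser.parse_args()

print('Generating the', args.report_type, 'report...')
The task scheduler can then run, for example, python my_script.py --report-type=monthly and no interactive prompt is ever shown.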
For example, let's say I wanted to set a variable in PowerShell (or the command line), and then get that variable later using something like subprocess.check_output("echo $var", shell=True) (this isn't something I need to do, just an example). Or let's say I wanted to have a user 'cd' to a directory and be able to run commands from that directory. I could keep a Python variable that saves the current directory and always run "cd {dir};{command}", but that would be inefficient, and it still wouldn't work for every situation: I would need to add some special bit of code for every possible case where a user runs a command and then runs another command that depends on the first one.
Sorry if I phrased this question badly, let me know if I should clarify. TIA!
OK, after some searching, I found a way to do this. Using the pexpect module, MarkBaggett on GitHub has made a simple wrapper for this: pxpowershell. Here's an example:
ps = pxpowershell()
ps.start_process()             # start a persistent PowerShell session
ps.run("$x = 10")              # state persists across run() calls
print(ps.run("echo $x"))
ps.stop_process()
The only small problems are that (1) colors don't work, and (2) you need to decode() and strip() the output, though you can just add that to pxpowershell.py itself.
Is there a way to run a script with 'unrecognizable' variable names & function names?
I wrote a project with many separate files of Python code, and I need to run it on a P2P cloud-computing service because it needs the performance. I don't want the 'host' to be able to work out what it is by reading the variable names and function names.
It's about thousands of variables and hundreds of functions, so renaming them by hand via Ctrl+R carries a high risk of errors and takes a long time.
a) Is there a procedure in compiling to make the variable names (of a copy) unrecognizable? (e.g. ahjbeunicsnj instead of placeholder_1_for_xy_csv or kjbej() instead of save_csv())
or alternatively b) Is there a way to encrypt all the files and run them encrypted? (maybe as an .exe)
Yes, it's possible. You can obfuscate the Python script, and programs like PyInstaller can create executables too. You didn't indicate that you'd researched that, but it's an option.
Here's the official page on this topic which goes into far more detail: https://wiki.python.org/moin/Asking%20for%20Help/How%20do%20you%20protect%20Python%20source%20code%3F
Here's an answer on another StackExchange that's also relevant: https://reverseengineering.stackexchange.com/questions/22648/best-way-to-protect-source-code-of-exe-program-running-on-python
I've been learning Python and decided to make a note-taking utility that runs in bash. I've worked out the basic 'guts' and I want to add a feature that allows a new user to configure their notes (for instance, set the directory where new note files are stored).
I realize this means running an 'install/config' function that is only called the first time the user runs the script (or until they configure it). I don't know what this concept is called, and after some research I cannot find anything about it for Python.
I'm using argparse. You call the python script from the shell and can optionally use it with arguments. If it would help to see my code, please let me know and I'll format it (it's long and needs to be edited a bit if I want to post). Thanks.
tl;dr How do you run a function only once in Python (either first time code is executed, or until the function's purpose - in this case, setting a file path - is fulfilled)?
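For what it's worth, a common pattern for this (a sketch only; the config file name and location below are made up for illustration) is to check for a config file at startup and run the setup routine only when it doesn't exist yet:
import json
import os

CONFIG_PATH = os.path.expanduser('~/.mynotes/config.json')   # hypothetical location

def first_run_setup():
    """Ask for settings once and save them; skipped on every later run."""
    notes_dir = input('Where should new notes be stored? ')
    os.makedirs(os.path.dirname(CONFIG_PATH), exist_ok=True)
    with open(CONFIG_PATH, 'w') as f:
        json.dump({'notes_dir': notes_dir}, f)

if not os.path.exists(CONFIG_PATH):
    first_run_setup()

with open(CONFIG_PATH) as f:
    config = json.load(f)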
I'm developing a system that operates on (arbitrary) data from databases. The data may need some preprocessing before the system can work with it. To allow the user to specify possibly complex rules, I thought of giving the user the possibility to input Python code which is used to do this task. The system is pure Python.
My plan is to introduce the tables and columns as variables and let the user do anything Python can do (including access to the standard libs). Now to my problem:
How do I take a string (that the user entered), compile it to Python (after adding code to provide the input data) and get the output? I think the easiest way would be to use the user-entered code as the body of a function and take the return value of that function as my new data.
Is this possible? If yes, how? It's unimportant that the user may enter malicious code, since the worst thing that could happen is that he screws up his own system, which is thankfully not my problem ;)
Python provides an exec() function which should do what you want. You will want to pass in the variables that you want available as the second and/or third arguments to the function (globals and locals respectively), as those control the environment that the exec'd code runs in.
For example:
code = "print(somevar)"            # hypothetical user-supplied code
env = {'somevar': 'somevalue'}
exec(code, env)
Alternatively, execfile() can be used in a similar way if the code you want to execute is stored in its own file. (Note that execfile() only exists in Python 2; in Python 3 you can use exec(open(path).read()) instead.)
If you only have a single expression that you want to execute, you can also use eval.
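Building on the question's own idea of treating the user's code as the body of a function, a rough sketch (all names here are illustrative, not part of any existing API) might look like this:
def run_user_code(user_code, row):
    # Wrap the user's code as the body of a generated function so that its
    # return value becomes the new data for this row.
    source = "def _user_func(row):\n" + "\n".join(
        "    " + line for line in user_code.splitlines()
    )
    env = {}
    exec(source, env)
    return env['_user_func'](row)

print(run_user_code("return row['price'] * 2", {'price': 21}))   # prints 42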
Is this possible?
If it doesn't involve time travel, anti-gravity or perpetual motion the answer to this question is always "YES". You don't need to ask that.
The right way to proceed is as follows.
You build a framework with some handy libraries and packages.
You build a few sample applications that implement this requirement: "The data may need some preprocessing before the system can work with it."
You write documentation about how that application imports and uses modules from your framework.
You turn the framework, the sample applications and the documentation over to users to let them build these applications.
Don't waste time on "take a string (the user entered), compile it to Python (after adding code to provide the input data) and get the output".
The user should write applications like this.
from your_framework import the_file_loop

def their_function( one_line_as_dict ):
    one_line_as_dict['field'] = 'some stuff'   # whatever preprocessing they need

the_file_loop( their_function )
That can actually be the entire program.
You'll have to write the_file_loop, which will look something like this.
def the_file_loop( some_function ):
    with open('input') as source:
        with open('output', 'w') as target:
            for some_line in source:
                the_data = make_a_dictionary( some_line )
                some_function( the_data )
                target.write( make_a_line( the_data ) )
By creating a framework, and allowing users to write their own programs, you'll be a lot happier with the results. Less magic.
2 choices:
You take his input and put it in a file, then you execute it.
You use exec()
If you just want to set some local values and then provide a python shell, check out the code module.
You can start an instance of a shell that is similar to the Python shell and initialize it with whatever local variables you want. This assumes that whatever functionality you want to apply to the resulting values is built into the objects you are passing in as locals.
Example:
shell = code.InteractiveConsole({'foo': myVar1, 'bar': myVar2})
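To actually open the prompt you then call interact() on the console; a self-contained sketch (myVar1 and myVar2 stand in for whatever objects you want to expose):
import code

myVar1, myVar2 = 'spam', 42        # placeholder values for the example
shell = code.InteractiveConsole({'foo': myVar1, 'bar': myVar2})
shell.interact()                   # opens a REPL in which foo and bar are already defined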
What you actually want is exec, since eval is limited to taking an expression and returning a value. With exec, you can have code blocks (statements) and work on arbitrarily complex data, passed in as the globals and locals of the code.
The result is then returned by the code via some convention (like binding it to result).
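For example (a small sketch of that convention; the variable names are arbitrary):
data = {'value': 21}
user_code = "result = data['value'] * 2"   # imagine this string came from the user

env = {'data': data}
exec(user_code, env)
print(env['result'])                       # 42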
Well, you're describing compile().
But... I think I'd still implement this using regular python source files. Add a special location to the path, say '~/.myapp/plugins', and just __import__ everything there. Probably you'll want to provide some convenient base classes that expose the interface you're trying to offer, so that your users can inherit from them.
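A rough sketch of that plugin idea, using importlib.import_module as the modern spelling of __import__ (the directory name is just the example from above):
import importlib
import os
import sys

plugin_dir = os.path.expanduser('~/.myapp/plugins')
sys.path.insert(0, plugin_dir)

plugins = [
    importlib.import_module(name[:-3])   # strip the '.py' suffix to get the module name
    for name in os.listdir(plugin_dir)
    if name.endswith('.py')
]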