How to set the locale for all children of a Python app? - python

I have written an app indicator in Python for the Ubuntu desktop, which calls several external programs via subprocess. It works fine under an English locale, but breaks with others.
I know that there is a way to do subprocess.call(['command', 'arg1', 'arg3'], env=new_env_dict), however I am interested in whether there is a way to force all subprocess calls to use a new environment instead of passing it every time.

So far I have not found a way to globally tell all subprocess calls to use a specific environment, so I decided to go with a single function that only takes a list of arguments, with the locale set as shown in the related post but with a slight variation.
import os
import subprocess

def run_cmd(self, cmdlist):
    new_env = dict(os.environ)
    new_env['LC_ALL'] = 'C'
    try:
        stdout = subprocess.check_output(cmdlist, env=new_env)
    except subprocess.CalledProcessError:
        pass
    else:
        if stdout:
            return stdout
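For reference, subprocess passes the parent's own environment to the child whenever env is not given, so a simpler global approach (just a sketch, not the wrapper used above) is to modify os.environ once at startup and let every later call inherit the change:

import os

# every subprocess call made after this point inherits LC_ALL=C,
# as long as no explicit env= argument overrides it
os.environ['LC_ALL'] = 'C'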

Related

Is it possible to run a Python script from the Windows command prompt and pass an argument for that script at the same time?

I have a saved python script. I run this python script from the command prompt in Windows 10.
This is as simple as navigating to the directory where the script is located and then typing:
python myscript.py
and the script will run fine.
However, sometimes, I want to run this script such that a variable within that script is set to one value and sometimes to another. This variable tells the script which port to operate an API connection through (if this is relevant).
At the moment, I go into the script each time and change the variable to the one that I want and then run the script after that. Is there a way to set the variable at the time of running the script from the command prompt in Windows 10?
Or are there potentially any other efficient solutions to achieve the same flexibility at the time of running?
Thanks
The usual way to do this is with command-line arguments. In fact, passing a port number is, after passing a list of filenames, almost the paradigm case for command-line arguments.
For simple cases, you can handle this in your code with sys.argv:
port = int(sys.argv[1])
Or, if you want a default value:
port = int(sys.argv[1]) if len(sys.argv) > 1 else 12345
Then, to run the program:
python myscript.py 54321
For more complicated cases—when you have multiple flags, some with values, etc.—you usually want to use something like argparse. But you'll probably want to read up a bit on typical command-line interfaces, and maybe look at the arguments of tools you commonly use, before designing your first one. Because just looking at all of the options in argparse without knowing what you want in advance can be pretty overwhelming.
Another option is to use an environment variable. This is more tedious if you want to change it for each run, but if you want to set it once for an entire series of runs in a command-line session, or even set a computer-wide default, it's a lot easier.
In the code, you'd look in os.environ:
port = int(os.environ.get('MYSCRIPT_PORT', 12345))
And then, to set a port for the session (at the Windows command prompt):
set MYSCRIPT_PORT=54321
python myscript.py
You can combine the two: use a command-line argument if present, otherwise fall back to the environment variable, otherwise fall back to a default. Or even add a config file and/or (if you only care about Windows) registry setting. Python itself does something like this three-step fallback, as do many major servers, but it may be overkill for your simple use case.
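For example, a minimal sketch of that combined lookup (the variable name and default here are just illustrative) could be:

import os
import sys

DEFAULT_PORT = 12345

# command-line argument wins, then the environment variable, then the default
if len(sys.argv) > 1:
    port = int(sys.argv[1])
else:
    port = int(os.environ.get('MYSCRIPT_PORT', DEFAULT_PORT))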
You should look at argparse. Here's an example:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-m", help='message to be sent', type=str)
args = parser.parse_args()
print(args.m)
Each argument you create is stored on the parsed args object, so you access it in your code the way I did in the print statement: args.m is the value that was passed for the -m argument.
Here was my input/output:
C:\Users\Vinny\Desktop>python argtest.py -m "Hi"
Hi
C:\Users\Vinny\Desktop>
More info on argparse: https://docs.python.org/3/library/argparse.html
You need the argparse library.
https://docs.python.org/3/library/argparse.html
https://docs.python.org/2/library/argparse.html

Launch a python script from another script, with parameters in subprocess argument

To launch a Python script (it is needed for running an OLED display) from the terminal, I have to use the following bash command: python demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20. Those parameters after .py are important; otherwise, the script will run with default settings, and in my case it will not launch with default settings. Thus, those parameters are needed.
The problem arises when I need to launch my script from another Python script (instead of using bash commands in the terminal). To launch one of my Python scripts from a parent script, I have used:
import subprocess # to use subprocess
p = subprocess.Popen(['python', 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20'])
in my parent script but I got an error stating:
python: can't open file 'demo_oled_v01.py --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20': [Errno 2] No such file or directory
I suspect that adding the parameters --display ssd1351 --width 128 --height 128 --interface spi --gpio-data-command 20 after .py may be causing difficulty in launching the script. As mentioned, these parameters are otherwise essential for me to include for launching with bash commands on terminal. How can I use subprocess with the required parameters to launch this script?
The subprocess call is passing the script name and all of its options as one single argument to python. That's why python is complaining that it cannot open a file with that name. Try running it as:
p = subprocess.Popen(['python', 'demo_oled_v01.py', '--display', 'ssd1351',
                      '--width', '128', '--height', '128', '--interface', 'spi',
                      '--gpio-data-command', '20'])
See the subprocess documentation for more information on Popen.
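If you would rather keep the command as one readable string, another sketch (assuming the script lives in the current directory) is to split the string with shlex.split before handing it to Popen:

import shlex
import subprocess

cmd = ('python demo_oled_v01.py --display ssd1351 --width 128 --height 128 '
       '--interface spi --gpio-data-command 20')
# shlex.split turns the string into the argument list Popen expects
p = subprocess.Popen(shlex.split(cmd))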
This started as a comment thread, but got too long and complex.
Calling Python as a subprocess of Python is an antipattern. You can often fruitfully avoid this by refactoring your Python code so that your program can call the other program as a simple library (or module, or package, or what have you -- there is a bit of terminology here which you'll want to understand more properly ... eventually).
Having said that, there are scenarios where the subprocess needs to be a subprocess (perhaps it is designed to do its own signal handling, for example) so don't apply this blindly.
If you have a script like demo.py which contains something like
def really_demo(something, other, message='No message'):
    # .... some functionality here ...
    pass

def main():
    import argparse
    parser = argparse.ArgumentParser(description='Basic boilerplate, ignore the details.')
    parser.add_argument('--something', dest='something')  # store argument in args.something
    parser.add_argument('--other', dest='other')  # ends up in args.other
    parser.add_argument('--message', dest='message')  # ends up in args.message
    # ... etc etc etc more options
    args = parser.parse_args()
    # This is the beef: once the arguments are parsed, pass them on
    really_demo(args.something, args.other, message=args.message)

if __name__ == '__main__':
    main()
Observe how when you run the script from the command line, __name__ will be '__main__' and so it will plunge into the main() function which picks apart the command line, then calls some other function -- in this case, really_demo(). Now, if you are calling this code from an already running Python, there is no need really to collect the arguments into a list and pass them to a new process. Just have your Python script load the function you want to call from the script, and call it with your arguments.
In other words, if you are currently doing
subprocess.call(['demo.py', '--something', 'foo', '--other', value, '--message', 'whatever'])
you can replace the subprocess call with
from demo import really_demo
really_demo('foo', value, message='whatever')
Notice how you are bypassing the main() function and all the ugly command-line parsing, and simply calling another Python function. (Pay attention to the order and names of the arguments; they may be quite different from what the command-line parser accepts.) The fact that it is defined in a different file is a minor detail which import handles for you, and the fact that the file contains other functions is something you can ignore (or perhaps exploit more fully if, for example, you want to access internal functions which are not exposed via the command-line interface in a way which is convenient for you).
As an optimization, Python won't import the same module twice, so code at the top level of demo.py only runs the first time you import it; make sure the functionality you want to reuse lives inside functions rather than in code that runs at import time. Commonly, you import once, at the beginning of your script (though technically you can do it inside the def which needs it, for example, if there is only one place in your code which depends on the import) and then you call the functions you got from the import as many or as few times as you need them.
This is a lightning recap of a very common question. If this doesn't get you started in the right direction, you should be able to find many existing questions on Stack Overflow about various aspects of this refactoring task.
Add the full path to the python script & separate all the parameters.
Example:
import subprocess
p = subprocess.Popen(['python', 'FULL_PATH_TO_FILE/demo_oled_v01.py', '--display', 'ssd1351', '--width', '128', '--height', '128', '--interface', 'spi', '--gpio-data-command', '20'])
For Windows and Python 3.x, you could:
Use a Windows shell (cmd.exe most probably on Windows by default)
import subprocess

result = subprocess.Popen('cd C:\\Users\\PathToMyPythonScript '
                          '&& python myPythonScript.py value1ofParam1 value2ofParam2',
                          shell=True, universal_newlines=True,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
To stay in your Python environment (without shell=True), you can write:
result = subprocess.Popen(["C:\Windows\System32\cmd.exe", "/k",
"cd", "C:\\Users\\PathToMyPythonScript",
"&&", "dir", "&&", "python", "myPythonScript.py",
"value1ofParam1", "value2ofParam2"], universal_newlines=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, error = result.communicate()
print(output)
Example of a script file you can call to try this out (the "myPythonScript.py"):
# =============================================================================
# This script just outputs the arguments you've passed to it
import sys
print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))
# =============================================================================

Set the variable in command output

I would like to know how I can use my variables to answer the prompts of another command. For example, if I try to generate some keys with "openssl", I'll get questions about the country, state, organization, etc.
I would like to use variables in my script to fill in this information. I'll have a variable "Country", a variable "State", etc., which should be passed to those questions when the openssl command is executed.
I'm trying this in bash, but would also like to know how the same thing is done in Python.
Kind regards
You have multiple ways to do so.
1. If your script is launched before the Python script and the result is set in an environment variable, you can read the environment variable from your Python script as follows:
import os
os.environ.get('MYVARIABLE', 'Default val')
2. Otherwise you can launch the other application from your Python script and read the result by using os.popen():
import os
tmp = os.popen("ls").read()
or better (if you have a python newer than 2.6)
import subprocess
proc = subprocess.Popen('ls', stdout=subprocess.PIPE)
tmp = proc.stdout.read()
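Neither snippet feeds your variables into the prompts, though. If the command reads its answers from standard input, one way to supply them from Python is via the input argument of subprocess.run (a rough sketch; whether openssl accepts its answers this way, and in what order it asks, depends on the exact subcommand and flags you use):

import subprocess

country = 'US'
state = 'Texas'

# hypothetical: one answer per line, in the order the command prompts for them;
# the key file name below is also just a placeholder
answers = '\n'.join([country, state, '', '', '', '', '']) + '\n'
result = subprocess.run(['openssl', 'req', '-new', '-key', 'my.key'],
                        input=answers, text=True, capture_output=True)
print(result.stdout)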

How to pass the Python variable to c shell script

I am using CentOS 7.0 and PyDev in Eclipse. I am trying to pass a variable from Python into a C shell script, but I am getting an error:
This is my Python script named raw2waveconvert.py
num = 10
print(num)
import subprocess
subprocess.call(["csh", "./test1.csh"])
Output/Error when I run the Python script:
10
num: Undefined variable.
The file test1.csh contains:
#!/bin/csh
set nvar=`/home/nishant/workspace/codec_implement/src/NTTool/raw2waveconvert.py $num`
echo $nvar
Okay, so apparently it's not so easy to find a nice and clear duplicate. This is how it's usually done: you either pass the value as an argument to the script, or via an environment variable.
The following example shows both ways in action. Of course you can drop whatever you don't like.
import subprocess
import shlex
var = "test"
env_var = "test2"
script = "./var.sh"
#prepare a command (append variable to the scriptname)
command = "{} {}".format(script, var)
#prepare environment variables
environment = {"test_var" : env_var}
#Note: shlex.split splits a textual command into a list suited for subprocess.call
subprocess.call( shlex.split(command), env = environment )
This is the corresponding shell script, but from what I've read, addressing command line arguments works the same way, so it should work with both bash and csh set as the default shell.
var.sh:
#!/bin/sh
echo "I was called with a command line argument '$1'"
echo "Value of enviormental variable test_var is '$test_var'"
Test:
luk32$ python3 subproc.py
I was called with a command line argument 'test'
Value of environmental variable test_var is 'test2'
Please note that the python interpreter needs to have appropriate access to the called script. In this case var.sh needs to be executable for the user luk32. Otherwise, you will get a Permission denied error.
I also urge you to read the docs on subprocess. Many other materials use shell=True; I won't discuss it here, but I dislike and discourage it. The presented examples should work and be safe.
subprocess.call(..., env=dict(os.environ, num=str(num)))
The only way to do what you want here is to export/pass the variable value through the shell environment, which requires using the env={} dictionary argument.
But it is more likely that what you should do is pass arguments to your script instead of assuming pre-existing variables. Then you would stick num in the array argument to subprocess.call (probably better to use check_call unless you know the script is supposed to fail) and then use $1/etc. as normal.
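A sketch of that argument-passing approach, reusing the names from the question, might look like this (test1.csh would then read the value as $1 instead of $num):

import subprocess

num = 10
# pass num as a positional argument instead of relying on an inherited variable
subprocess.check_call(["csh", "./test1.csh", str(num)])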

Python subprocess module with pre-populated environment

Question can be related to Use python subprocess module like a command line simulator
I have written some infrastructure code called my_shell to which you can pass shell commands of my application. It looks like this:
class ApplicationTestShell(object):
    def __init__(self):
        '''
        Constructor
        '''
        self.play_ground_dir = "/var/tmp/MyAppDir"
        ensure_dir_exists_and_empty(self.play_ground_dir)

    def execute_command(self, command, on_success = None, on_failure = None):
        p = create_shell_process(self, self.play_ground_dir)
        sout, serr = p.communicate(input = command)
        if p.returncode == 0:
            on_success(sout)
        else:
            on_failure(serr)

    def create_shell_process(self, cwd):
        return Popen("/bin/bash", env= {WHAT DO I DO HERE?}, cwd = test_dir, stdout=PIPE, stderr=PIPE, stdin=PIPE)
The interesting bit to me here is the env parameter. Python expects a 'map'-like data structure of all the environment variables. My application requires several variables to be exported and set. The script for setting and exporting them is generated by running, say, '/bin/appload myapp' (assume appload is always available on the path). What I do currently is, when I call p.communicate, the following:
p.communicate(input = "eval `/bin/appload myapp`;" + command)
So basically before running the command I call the infrastructure setup.
Is there any way to do this in a better fashion in Python? I somehow want to push the eval /bin/appload part into the env parameter on the Popen class, or make it part of the shell creation process.
What are the problems with my current implementation? (I feel it is hacky but I may be wrong)
It depends on how /bin/appload myapp works. If it only guarantees that it will output bash syntax, then parsing that output in Python in order to construct the environment object there is almost certainly more trouble than it's worth (you might need to support parameter and variable expansion, subshells, process substitution, etc, etc). On the other hand, if you are sure that /bin/appload myapp will only ever output lines of the form "VARIABLENAME=someword", then that's pretty trivial to parse in Python and you could move it into your Python code if you like.
There are an awful lot of different directions you could go with these requirements; you could capture the output of appload myapp into a tempfile and set the subprocess's $BASH_ENV to that filename; that would cause the shell to source your environment setup before running your command in a way that some might consider cleaner.

You could give your command (with the eval-ing prefix) as the first argument to Popen and pass shell=True, and let Popen do the bash invocation on its own (setting $SHELL explicitly to bash if necessary). You could use bash's -c option to specify the code to run on the command line rather than via stdin. You could have a multi-tiered approach by invoking a shell from Python which eval's the appload myapp environment and then exec's another shell underneath it, so that the first doesn't show up in ps listings and the command given to create_shell_process has the shell all to itself (although that shouldn't really matter).

You could do a lot of things, depending on what your concerns are with respect to how the shell is invoked, how it looks in ps listings, whether you want your command to still be run if the appload myapp output produces an error when eval'd, etc. But for a general solution, I think what you have is perfectly fine.
I don't see any real problems with the implementation, besides cosmetic things or minor things that probably only came from copying and pasting the code: create_shell_process doesn't use its cwd parameter, and the on_success and on_failure parameters look like they're optional but the defaults will break things (you can't call None).
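For the $BASH_ENV idea mentioned above, a rough sketch (the command string below is just a placeholder, and the exact behaviour depends on what /bin/appload myapp emits) could look like:

import os
import subprocess
import tempfile

# capture the environment-setup code that '/bin/appload myapp' prints
setup_code = subprocess.check_output(["/bin/appload", "myapp"])

# write it to a file that non-interactive bash will source on startup via $BASH_ENV
with tempfile.NamedTemporaryFile(mode="wb", suffix=".sh", delete=False) as f:
    f.write(setup_code)
    setup_file = f.name

env = dict(os.environ, BASH_ENV=setup_file)
p = subprocess.Popen(["/bin/bash"], env=env, cwd="/var/tmp/MyAppDir",
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=True)
sout, serr = p.communicate(input="my_app_command --do-something\n")  # placeholder command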
