Disabling Python script tracing, equivalent of turning off -x in bash

I have python scripts which I'd like to disable tracing on. By tracing, I mean the ability to run:
python -m trace --trace thescript.py
In bash, if you want to see the inner workings of a script, you can just run the following:
sh -x thescript.sh
or
bash -x thescript.sh
However, if thescript.sh contains a set +x, that will stop the external sh -x or bash -x from showing any further inner workings of the script past the line containing the set +x.
I want the same for Python. python -m trace --trace is the only way I know of to see the inner workings of a Python script, but I'm sure there are many others. What I'm looking for here is a solid method to stop any type of tracing that could be done to a Python script.
I understand that a user with the proper permissions can edit the script and comment out whatever is put in to disable tracing. I'm aware of that. But I would still like to know.

Taking even a cursory look at trace.py from the standard library makes the interfaces it uses, and thus the mechanism to disable them, clear:
#!/usr/bin/env python
import sys, threading
sys.settrace(None)        # remove any trace function on the current thread
threading.settrace(None)  # and the one inherited by threads started later
print("This is not traced")
Of course, this won't (and can't) work against anyone who wants to modify the live interpreter to NOP out those calls; the usual caveats about anyone who owns and controls the hardware being able to own and control the software apply.
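For reference, sys.settrace() is the very hook trace.py installs: a callback fired on call/line/return events, which passing None uninstalls. A minimal sketch of that mechanism (the tracer function and events list are illustrative, not part of the stdlib):

```python
import sys

events = []

def tracer(frame, event, arg):
    # Record each tracing event and the function it occurred in.
    events.append((event, frame.f_code.co_name))
    return tracer  # keep tracing line events inside this frame

def work():
    return 1 + 1

sys.settrace(tracer)
work()                   # call/line/return events are recorded
sys.settrace(None)       # tracing disabled from here on
recorded = len(events)
work()                   # no new events are recorded
print(recorded, len(events))
```

After settrace(None), the second call to work() adds nothing to events, which is exactly why the two-line prologue above defeats the trace module.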


Is using os.system('curl ...') in Python truly unsafe, compared to native libraries?

I'm writing a project with Angular on the frontend, and a backend written in Python.
For certain API calls, since I am more of a C/C++/bash programmer and did not know how to make the equivalent call natively in Python, instead of using Python's built-in libraries I just did an os.system('curl ...').
When people were reviewing my code they said it would be better practice to use Python's built-in libraries instead of a system() call, since it looks better and could be less dangerous.
Is this a legitimate criticism? How could it be more dangerous if the Python library is probably doing the same thing anyway?
I am not asking for opinions on style but legitimate problems with this method.
The objection is entirely legitimate.
Let's say that your command looks like:
def post_result(result_string):
    os.system('curl http://example.com/report-result/%s' % (result_string,))
Now, what happens if you're told to report a result that contains ; rm -rf ~? The shell invoked by os.system() runs curl http://example.com/report-result/, and then it runs a second command of rm -rf ~.
Several naive attempts at fixes don't work.
Consider, for example:
# Adding double quotes should work, right?
# WRONG: "; rm -rf ~" no longer works here, but $(rm -rf ~) still does.
os.system('curl http://example.com/report-result/"%s"' % (result_string,))
# ...so, how about single quotes?
# STILL WRONG: $(rm -rf ~) doesn't work on its own, but result_string="'$(rm -rf ~)'" does.
os.system("curl http://example.com/report-result/'%s'" % (result_string,))
Even if you avoid direct shell injection vulnerabilities, using a shell exposes you to other kinds of bugs.
At startup time, a shell does a number of operations based on filesystem contents and environment variables. If an untrusted user can manipulate your program into setting environment variables of their choice before calling os.system(), they can cause the file named in the ENV variable to have its commands executed, shadow commands with exported functions, or cause other mischief. See ShellShock for a well-publicized historical example.
And that's before considering other things that can happen to your data. If you're passing a filename to a shell, but unknown to you it contains whitespace and glob characters, that filename can be split into / replaced with other names.
The official Python documentation warns against shell use.
Quoting a warning from the Python subprocess module documentation, which is also relevant here:
Warning: Executing shell commands that incorporate unsanitized input from an untrusted source makes a program vulnerable to shell injection, a serious security flaw which can result in arbitrary command execution. For this reason, the use of shell=True is strongly discouraged in cases where the command string is constructed from external input:
>>> from subprocess import call
>>> filename = input("What file would you like to display?\n")
What file would you like to display?
non_existent; rm -rf / #
>>> call("cat " + filename, shell=True) # Uh-oh. This will end badly...
shell=False disables all shell based features, but does not suffer from this vulnerability; see the Note in the Popen constructor documentation for helpful hints in getting shell=False to work.
When using shell=True, shlex.quote() can be used to properly escape whitespace and shell metacharacters in strings that are going to be used to construct shell commands.
os.system() has all the same faults as subprocess.Popen(..., shell=True) -- even more faults, since subprocess.Popen() provides a way to pass data out-of-band from code, and so can be used safely.
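That out-of-band channel is simply the argument list: with the list form, each element reaches the child process's argv verbatim, and no shell ever parses it. A minimal sketch, using echo as a harmless stand-in for curl:

```python
import subprocess

hostile = "; rm -rf ~"

# The list form never invokes a shell; "hostile" is passed to echo
# as a single literal argument, metacharacters and all.
proc = subprocess.run(["echo", hostile], capture_output=True, text=True)
print(proc.stdout)  # prints the literal text "; rm -rf ~"
```

With os.system() the same string would have been handed to /bin/sh and the semicolon interpreted as a command separator.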
Native Python libraries don't call shells for work that the Python runtime can do.
Python has a socket library in its standard library interface, which directly invokes operating system and libc calls to create network connections and interact with them. There is no shell involved in these syscalls; arguments are C structs, C strings, etc; so they aren't prone to shell injection vulnerabilities in the same way that os.system() is.
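For comparison, here is a sketch of the report call built only on the standard library (the example.com endpoint is the hypothetical one from above, so the function is defined but not called here): the untrusted value is percent-encoded into the URL rather than interpolated into a command line, and no shell is involved at any point.

```python
from urllib.parse import quote
from urllib.request import urlopen

def post_result(result_string):
    # Percent-encoding confines the untrusted value to a single URL
    # path segment; nothing in this code path ever spawns a shell.
    url = "http://example.com/report-result/" + quote(result_string, safe="")
    with urlopen(url) as response:
        return response.read()

# quote() neutralizes the earlier attack string:
print(quote("; rm -rf ~", safe=""))
```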
Some Python libraries, like pycurl (a binding to the C library libcurl), may be slightly less native insofar as they use their own C libraries rather than only calling out to the operating system through functions included in the Python runtime itself; even then, though, these OS-level calls are at a much lower level than any shell.
This answer is entirely correct.
But I'd also like to point out other cases in which you might think that security doesn't matter, e.g. the command you are running is hard-coded, or you have 100% control or trust over what is supplied to it.
Even in that case, using os.system() is wrong.
In fact:
You have to rely on external tools that might not be present or, even worse, there might be a command with that name that doesn't do what you expect it to do, perhaps because it has a different version or is a different implementation of that command (e.g. GNU tar != BSD tar). Managing Python dependencies will be much easier and more reliable.
It is more difficult to handle errors. You only have a return code which is not always enough to understand what is going on. And I hope that your solution to this problem isn't to parse the command output.
Environment variables can modify the way a program works unexpectedly. Many programs use environment variables as an alternative to command line or configuration options. If your python script relies on specific behavior from the command, an unexpected variable in the user's environment can break it.
If at some point in the future you need to let the user customize your script's behavior a bit, you will have to rewrite it from scratch without os.system(), or you might have security problems.
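The error-handling point is worth a concrete illustration: subprocess surfaces failures as exceptions carrying the exit status and the captured stderr, instead of the single return code os.system() gives you. A small sketch, assuming /no/such/path does not exist:

```python
import subprocess

try:
    subprocess.run(
        ["ls", "/no/such/path"],
        capture_output=True, text=True, check=True)
except subprocess.CalledProcessError as err:
    # Unlike os.system(), the exit status, stdout and stderr arrive
    # as separate Python objects instead of one opaque return code.
    print(err.returncode, err.stderr.strip())
```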

Setting Environment Up with Python

Our environment has a shell script to setup the working area. setup.sh looks like this:
export BASE_DIR=$PWD
export PATH=$BASE_DIR/bin
export THIS_VARIABLE=THAT_VALUE
The user does the following:
% . setup.sh
Some of our users are looking for a csh version and that would mean having two setup files.
I'm wondering if there is a way to do this work with a common python file. In The Hitchhiker's Guide to Python Kenneth Reitz suggests using a setup.py file in projects, but I'm not sure if Python can set environment variables in the shell as I do above.
Can I replace this shell script with a python script that does the same thing? I don't see how.
(There are other questions that ask this more broadly with many many comments, but this one has a direct question and direct single answer.)
No, Python (or generally any process on Unix-like platforms) cannot change its parent's environment.
A common solution is to have your script print the output in a format suitable for the user's shell. E.g. ssh-agent will print out sh-compatible global assignments with -s or when it sees that it is being invoked from a Bourne-compatible shell; and csh syntax if invoked from csh or tcsh or when explicitly invoked with -c.
The usual invocation in sh-compatible shells is eval "$(ssh-agent -s)" -- so the text that the program prints is evaluated by the shell where the user invoked this command.
eval is a well-known security risk, so you want to make this code very easy to vet even for people who don't speak much Python (or shell, or anything much else).
If you are, eh cough, skeptical of directly supporting Csh users, perhaps you can convince them to run your sh-compatible script in a Bourne-compatible shell and then exec csh to get their preferred interactive environment. This also avoids the slippery slope of having an ever-growing pile of little maintenance challenges for supporting Csh, Fish, rc, Powershell etc users.
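A sketch of that print-and-eval approach: one Python script that emits either Bourne or csh syntax for the invoking shell to evaluate. The emit_exports helper and its shell argument are illustrative, not a standard interface; the variable values echo the setup.sh above:

```python
import shlex

def emit_exports(env, shell="sh"):
    """Return assignments in the syntax of whichever shell invoked us."""
    lines = []
    for name, value in env.items():
        if shell == "csh":
            lines.append("setenv %s %s;" % (name, shlex.quote(value)))
        else:
            lines.append("export %s=%s;" % (name, shlex.quote(value)))
    return "\n".join(lines)

settings = {"BASE_DIR": "/home/user/work", "THIS_VARIABLE": "THAT_VALUE"}
print(emit_exports(settings))           # eval "$(python setup_env.py)" from sh
print(emit_exports(settings, "csh"))    # eval `python setup_env.py csh` from csh
```

shlex.quote() keeps the eval safe against whitespace and metacharacters in the values, which makes the printed text easy to vet.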

Run Python script in Python environment?

When starting Python in the terminal, can we run a Python script from inside the Python environment?
I know I can run it from bash, but I don't know if I can run it in the Python environment. The purpose is to see, when the script goes wrong, the values of the variables at that time.
The purpose is to see when the script goes wrong, the values of the variables at that time.
You have two options for that (neither of which is precisely the question you're asking, but is nonetheless the proper way to achieve the desired outcome)
First, the pdb module:
import pdb; pdb.set_trace()
This enters the debugger at whatever point you place this code. Useful for seeing variables.
Second, running the command with -i:
$ python -i script.py
This drops into the full interpreter after execution, with all variables intact.
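If the failure is an uncaught exception, you can also inspect state after the fact: pdb.post_mortem() drops you into the failing frame interactively, and the sketch below shows the same idea by hand (the buggy() function is just an illustration):

```python
import sys

def buggy():
    divisor = 0
    return 10 / divisor  # raises ZeroDivisionError

try:
    buggy()
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    while tb.tb_next:            # walk to the frame where the error was raised
        tb = tb.tb_next
    print(tb.tb_frame.f_locals)  # local variables at the point of failure
```

Interactively, replacing the last three lines with pdb.post_mortem(tb) gives you a full debugger prompt in that frame.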

Emacs: methods for debugging python

I use emacs for all my code editing needs. Typically, I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track. Lately, however, I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things.
In my googling I've found some things suggesting that this is useful/possible. However, I have not managed to get it working in a way that I fully understand.
I don't know if it's the combination of buildout + appengine that might be making it more difficult but when I try to do something like
M-x pdb
Run pdb (like this): /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
Where .../bin/python is the interpreter buildout makes with the path set for all the eggs.
~/bin/pdb is a simple script to call into pdb.main using the current python interpreter
HellooKitty:hydrant twillis$ cat ~/bin/pdb
#! /usr/bin/env python
if __name__ == "__main__":
    import sys
    sys.version_info
    import pdb
    pdb.main()
HellooKitty:hydrant twillis$
.../bin/devappserver is the dev_appserver script that the buildout recipe makes for gae project and .../parts/hydrant-app is the path to the app.yaml
I am first presented with a prompt
Current directory is /Users/twillis/bin/
C-c C-f
Nothing happens but
HellooKitty:hydrant twillis$ ps aux | grep pdb
twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
HellooKitty:hydrant twillis$
something is happening
C-x [space]
will report that a breakpoint has been set. But I can't manage to get things going.
It feels like I am missing something obvious here. Am I?
So, is interactive debugging in emacs worthwhile? is interactive debugging a google appengine app possible? Any suggestions on how I might get this working?
Hmm. You're doing this a little differently than I do. I haven't experimented with your method. I use the pdb library module directly, with no wrapper script, just using the "-m" python command-line option to tell python to run the module as a script.
To be excessively thorough, here's my sequence of operations:
I hit Alt-X in EMACS, type "pdb", then return.
EMACS prompts me with "Run pdb (like this):" and I type "python -m pdb myprogram.py".
EMACS creates a debugger mode window for pdb, where I can give the debugger commands, and tracks the execution of the program in the source code.
I suppose it's possible there's some reason this doesn't work well with the appengine. I recommend getting it working first with a trivial python program and once you know that's working, try stepping up to the full app.
In practice, I don't do much python debugging with pdb. Most of my debugging is essentially "printf debugging", done inserting print statements into my unit tests and (occasionally) into the actual code.

Wrap all commands entered within a Bash-Shell with a Python script

What I'd like to have is a mechanism by which all commands I enter in a Bash terminal are wrapped by a Python script. The Python script executes the entered command, but adds some additional magic (for example, setting "dynamic" environment variables).
Is that possible somehow?
I'm running Ubuntu and Debian Squeezy.
Additional explanation:
I have a property file which changes dynamically (some scripts alter it at any time). I need the properties from that file as environment variables in all my shell scripts. Of course I could parse the property file somehow from shell, but I prefer using an object-oriented style for that (especially for writing), as it can be done with Python (and ConfigObj).
Therefore I want to wrap all my scripts with that Python script (without having to modify the scripts themselves), which hands these properties down to all shell scripts.
This is my current use case, but I can imagine I'll find additional cases to extend my wrapper to later on.
The perfect way to wrap every command that is typed into a Bash shell is to change the variable PROMPT_COMMAND inside .bashrc. For example, if I want to do some Python stuff before every command, like asked in my question:
.bashrc:
# ...
PROMPT_COMMAND="python mycoolscript.py; $PROMPT_COMMAND;"
export PROMPT_COMMAND
# ...
Now, before every command, the script mycoolscript.py is run.
Use Bash's DEBUG trap. Let me know if you need me to elaborate.
Edit:
Here's a simple example of the kinds of things you might be able to do:
$ cat prefix.py
#!/usr/bin/env python
print("export prop1=foobar")
print("export prop2=bazinga")
$ cat propscript
#!/bin/bash
echo $prop1
echo $prop2
$ trap 'eval "$(prefix.py)"' DEBUG
$ ./propscript
foobar
bazinga
You should be aware of the security risks of using eval.
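Tying this back to the property-file use case: the prefix.py script can regenerate its export lines from the file on every prompt. Here is a sketch using configparser from the standard library (the [env] section name and INI-style layout are assumptions; a library like ConfigObj would work similarly):

```python
import configparser
import shlex

def exports_from_properties(text, section="env"):
    """Turn an INI-style properties section into eval-able export lines."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return "\n".join(
        "export %s=%s" % (key.upper(), shlex.quote(value))
        for key, value in parser[section].items())

properties = "[env]\nbase_dir = /home/user/work\nthis_variable = THAT_VALUE\n"
print(exports_from_properties(properties))
```

In the DEBUG-trap setup above, prefix.py would read the real file with parser.read(path) and the trap's eval would pick up whatever the file currently says.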
I don't know of a way to do exactly that, but two things might help you:
http://sourceforge.net/projects/pyshint/
The IPython shell has some functionality to execute shell commands in the interpreter.
There is no direct way you can do it.
But you can make a Python script that emulates a bash terminal, and you can use the subprocess module in Python to execute commands the way you like.
