Test for my topological/dependency sort - Python

This is more of a puzzle than a question per se, because I think I already have the right answer - I just don't know how to test whether it works under all scenarios.
My program takes input from the user about which modules it will need to load (in the form of an unsorted list or set). Some of those modules depend on other modules. The module-dependency information is stored in a dictionary like this:
all_modules = { 'moduleE':[], 'moduleD':['moduleC'], 'moduleC':['moduleB'], 'moduleB':[], 'moduleA':['moduleD'] }
Where moduleE has no dependencies, moduleD depends on moduleC, etc...
Also, there is no definitive list of all possible modules, since users can generate their own and I create new ones from time to time, so this solution has to be fairly generic (and thus tested to work in all cases).
What I want to do is get a list of modules to run in order, such that modules that are dependent on other modules are only run after their dependencies.
So I wrote the following code to try and do this:
def sort_dependencies(modules_to_sort, all_modules, recursions):
    ## Takes a small set of modules the user wants to run (as a list) and
    ## the full dependency tree (as a dict) and returns a list of all the
    ## modules/dependencies needed to be run, in the order to be run in.
    ## Cycles poorly detected as recursions going above 10.
    ## If that happens, this function returns False.
    if recursions == 10:
        return False
    result = []
    for module in modules_to_sort:
        if module not in result:
            result.append(module)
        dependencies = all_modules[module]
        for dependency in dependencies:
            if dependency not in result:
                result.append(dependency)
            else:
                result += [ result.pop(result.index(dependency)) ]
        subdependencies = sort_dependencies(dependencies, all_modules, recursions+1)
        if subdependencies == False:
            return False
        else:
            for subdependency in subdependencies:
                if subdependency not in result:
                    result.append(subdependency)
                else:
                    result += [ result.pop(result.index(subdependency)) ]
    return result
And it works like this:
>>> all_modules = { 'moduleE':[], 'moduleD':['moduleC'], 'moduleC':['moduleB'], 'moduleB':[], 'moduleA':['moduleD'] }
>>> sort_dependencies(['moduleA'],all_modules,0)
['moduleA', 'moduleD', 'moduleC', 'moduleB']
Note that 'moduleE' isn't returned, since the user doesn't need to run that.
The question is: does it work for any given all_modules dictionary and any given modules_to_sort list? Is there a dependency graph I can put in, and a number of user module lists to try, such that if they all work I can say all graphs/user lists will work?
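One thing that isn't a proof but does build confidence is a property-based check: generate many random acyclic graphs, run sort_dependencies on random subsets of their modules, and verify the defining properties of a valid answer. The helper names below and the "dependencies come after their dependents" convention are my own assumptions, taken from the sample output above; if an assert fires you have a concrete counterexample to debug.

import random

def is_valid_order(order, modules_to_sort, all_modules):
    # 1. every requested module and its transitive dependencies appear exactly once;
    # 2. each dependency appears *after* the module that needs it
    #    (matching the sample output ['moduleA', 'moduleD', 'moduleC', 'moduleB']).
    needed = set()
    stack = list(modules_to_sort)
    while stack:
        m = stack.pop()
        if m not in needed:
            needed.add(m)
            stack.extend(all_modules[m])
    if set(order) != needed or len(order) != len(needed):
        return False
    pos = {m: i for i, m in enumerate(order)}
    return all(pos[dep] > pos[m] for m in order for dep in all_modules[m])

def random_dag(n_modules, edge_prob=0.3):
    # Module i may only depend on modules j > i, so the graph is guaranteed acyclic.
    names = ['m%d' % i for i in range(n_modules)]
    return {names[i]: [names[j] for j in range(i + 1, n_modules)
                       if random.random() < edge_prob]
            for i in range(n_modules)}

for _ in range(1000):
    graph = random_dag(random.randint(1, 8))
    wanted = random.sample(list(graph), random.randint(1, len(graph)))
    order = sort_dependencies(wanted, graph, 0)
    assert order and is_valid_order(order, wanted, graph), (graph, wanted, order)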
After the excellent advice by Marshall Farrier, it looks like what I'm trying to do is a topological sort, so after watching this and this I implemented it as follows:
EDIT: Now with cyclic dependency checking!
def sort_dependencies(all_modules):
    post_order = []
    tree_edges = {}
    for fromNode, toNodes in all_modules.items():
        if fromNode not in tree_edges:
            tree_edges[fromNode] = 'root'
            for toNode in toNodes:
                if toNode not in tree_edges:
                    post_order += get_posts(fromNode, toNode, tree_edges)
            post_order.append(fromNode)
    return post_order

def get_posts(fromNode, toNode, tree_edges):
    post_order = []
    tree_edges[toNode] = fromNode
    for dependancy in all_modules[toNode]:
        if dependancy not in tree_edges:
            post_order += get_posts(toNode, dependancy, tree_edges)
        else:
            parent = tree_edges[toNode]
            while parent != 'root':
                if parent == dependancy: print 'cyclic dependancy found!'; exit()
                parent = tree_edges[parent]
    return post_order + [toNode]
sort_dependencies(all_modules)
However, a topological sort like the one above sorts the whole tree and doesn't return just the modules the user needs to run. Of course having the topological sort of the tree helps solve this problem, but it's not really answering the original question. I think for my data the topological sort is probably best, but for a huge graph (like all the packages in apt/yum/pip/npm) it's probably better to use the original algorithm above (which I don't know works in all scenarios...), since it only sorts what actually needs to be used.
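For what it's worth, here is a sketch of how the two ideas could be combined: run a depth-first post-order walk starting only from the requested modules, so only the needed part of the graph is visited and sorted. The function name, the cycle handling, and the "dependencies first" output order are my own choices (reverse the result if you prefer the convention of the sample output earlier).

def sort_needed(modules_to_sort, all_modules):
    # Collect only the requested modules plus their transitive dependencies,
    # emitting them in post-order (each dependency before the module that needs it).
    order, visited, in_progress = [], set(), set()

    def visit(module):
        if module in visited:
            return
        if module in in_progress:
            raise ValueError('cyclic dependency involving %r' % module)
        in_progress.add(module)
        for dependency in all_modules[module]:
            visit(dependency)
        in_progress.discard(module)
        visited.add(module)
        order.append(module)

    for module in modules_to_sort:
        visit(module)
    return order

With the example graph, sort_needed(['moduleA'], all_modules) returns ['moduleB', 'moduleC', 'moduleD', 'moduleA'], and moduleE is never touched.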
So I'm leaving the question up as unanswered, because the question is really "How do I test this?"

Related

How can I accept and run a user's code securely on my web app?

I am working on a Django-based web app that takes a Python file as input, which contains some function. In the backend I have some lists that are passed as parameters to the user's function, which generates a single-value output. The result generated will be used for some further computation.
Here is what the function inside the user's file looks like:
def somefunctionname(list):
    ''' some computation performed on list'''
    return float value
At present, the approach I am using is taking the user's file as a normal file input. Then, in my views.py, I am executing the file as a module and passing the parameters with the eval function. A snippet is given below.
Here modulename is the name of the Python file that I took from the user and am importing as a module:
exec("import "+modulename)
result = eval(f"{modulename}.{somefunctionname}(arguments)")
This is working absolutely fine, but I know it is not a secure approach.
My question: is there any other way I can run the user's file securely, since the method I am using is not secure? I know the proposed solutions can't be foolproof, but what are the other ways I can do this (e.g. if it can be solved with dockerization, what would the approach be, or are there external tools I can use via an API)?
Or, if possible, can somebody tell me how I can simply sandbox this, or point me to any tutorial that can help?
Any reference or resource will be helpful.
This is an important question. In Python, sandboxing is not trivial.
It is one of the few cases where it matters which Python interpreter you are using. For example, Jython generates Java bytecode, and the JVM has its own mechanisms to run code securely.
For CPython, the default interpreter, there were originally some attempts to make a restricted execution mode, but they were abandoned a long time ago.
Currently, there is an unofficial project, RestrictedPython, that might give you what you need. It is not a full sandbox, i.e. it will not give you restricted filesystem access, but for your needs it may be just enough.
Basically, the project rewrites Python compilation in a more restricted way.
What it allows you to do is compile a piece of code and then execute it, all in a restricted mode. For example:
from RestrictedPython import safe_builtins, compile_restricted

source_code = """
print('Hello world, but secure')
"""

byte_code = compile_restricted(
    source_code,
    filename='<string>',
    mode='exec'
)

exec(byte_code, {'__builtins__': safe_builtins})
>>> Hello world, but secure
Running with __builtins__ = safe_builtins disables dangerous functions like opening files, import, and so on. There are also other variations of builtins and other options; take some time to read the docs, they are pretty good.
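For illustration (my own quick check, reusing the imports from the snippet above): under safe_builtins, open is simply not defined, so attempting to use it raises a NameError rather than touching the filesystem.

unsafe_code = compile_restricted(
    "open('/etc/passwd')",
    filename='<string>',
    mode='exec'
)

try:
    exec(unsafe_code, {'__builtins__': safe_builtins})
except NameError as e:
    print(e)  # name 'open' is not defined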
EDIT:
Here is an example for your use case:
from RestrictedPython import safe_builtins, compile_restricted
from RestrictedPython.Eval import default_guarded_getitem

def execute_user_code(user_code, user_func, *args, **kwargs):
    """ Executes user code in a restricted env
    Args:
        user_code(str) - String containing the unsafe code
        user_func(str) - Function inside user_code to execute and return value
        *args, **kwargs - arguments passed to the user function
    Return:
        Return value of the user_func
    """

    def _apply(f, *a, **kw):
        return f(*a, **kw)

    try:
        # These are the variables we allow the user code to see. `result` will contain the return value.
        restricted_locals = {
            "result": None,
            "args": args,
            "kwargs": kwargs,
        }

        # If you want the user to be able to use some of your functions inside his code,
        # you should add those functions to this dictionary.
        # By default many standard actions are disabled. Here I add _apply_ to be able to access
        # args and kwargs and _getitem_ to be able to use arrays. Just think before you add
        # something else. I am not saying you shouldn't do it. You should understand what you
        # are doing, that's all.
        restricted_globals = {
            "__builtins__": safe_builtins,
            "_getitem_": default_guarded_getitem,
            "_apply_": _apply,
        }

        # Add another line to the user code that executes user_func
        user_code += "\nresult = {0}(*args, **kwargs)".format(user_func)

        # Compile the user code
        byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec")

        # Run it
        exec(byte_code, restricted_globals, restricted_locals)

        # User code has modified result inside restricted_locals. Return it.
        return restricted_locals["result"]

    except SyntaxError as e:
        # Do whatever you want if the user has code that does not compile
        raise
    except Exception as e:
        # The code did something that is not allowed. Add some nasty punishment to the user here.
        raise
Now you have a function execute_user_code that receives some unsafe code as a string, the name of a function from that code, and arguments, and returns the return value of that function called with the given arguments.
Here is a very stupid example of some user code:
example = """
def test(x, name="Johny"):
return name + " likes " + str(x*x)
"""
# Lets see how this works
print(execute_user_code(example, "test", 5))
# Result: Johny likes 25
But here is what happens when the user code tries to do something unsafe:
malicious_example = """
import sys
print("Now I have the access to your system, muhahahaha")
"""
# Lets see how this works
print(execute_user_code(malicious_example, "test", 5))
# Result - evil plan failed:
# Traceback (most recent call last):
# File "restr.py", line 69, in <module>
# print(execute_user_code(malicious_example, "test", 5))
# File "restr.py", line 45, in execute_user_code
# exec(byte_code, restricted_globals, restricted_locals)
# File "<user_code>", line 2, in <module>
# ImportError: __import__ not found
Possible extension:
Note that the user code is compiled on each call to the function. However, you may want to compile the user code once and then execute it with different parameters. All you have to do is save the byte_code somewhere and then call exec with a different set of restricted_locals each time.
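A rough sketch of that idea (my own variation on execute_user_code above, assuming you build a restricted_globals dict the same way as inside that function):

def compile_user_code(user_code, user_func):
    # Compile once; the appended line stores the user function's return value in `result`.
    user_code += "\nresult = {0}(*args, **kwargs)".format(user_func)
    return compile_restricted(user_code, filename="<user_code>", mode="exec")

def run_compiled(byte_code, restricted_globals, *args, **kwargs):
    # Execute the already-compiled code with a fresh set of locals each time.
    restricted_locals = {"result": None, "args": args, "kwargs": kwargs}
    exec(byte_code, restricted_globals, restricted_locals)
    return restricted_locals["result"]

byte_code = compile_user_code(example, "test")
print(run_compiled(byte_code, restricted_globals, 5))
print(run_compiled(byte_code, restricted_globals, 7, name="Jane"))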
EDIT2:
If you want to allow import, you can write your own import function that allows only modules that you consider safe. Example:
def _import(name, globals=None, locals=None, fromlist=(), level=0):
    safe_modules = ["math"]
    if name in safe_modules:
        globals[name] = __import__(name, globals, locals, fromlist, level)
    else:
        raise Exception("Don't you even think about it {0}".format(name))

safe_builtins['__import__'] = _import  # Must be a part of builtins

restricted_globals = {
    "__builtins__": safe_builtins,
    "_getitem_": default_guarded_getitem,
    "_apply_": _apply,
}
....
i_example = """
import math
def myceil(x):
return math.ceil(x)
"""
print(execute_user_code(i_example, "myceil", 1.5))
Note that this sample import function is VERY primitive; it will not work with things like from x import y. You can look here for a more complex implementation.
EDIT3
Note that lots of Python's built-in functionality is not available out of the box in RestrictedPython. That does not mean it is not available at all, but you may need to implement some function for it to become available.
Even some obvious things like sum or the += operator are not straightforward in the restricted environment.
For example, the for loop uses a _getiter_ function that you must implement and provide yourself (in globals). Since you want to avoid infinite loops, you may want to put a limit on the number of iterations allowed. Here is a sample implementation that limits the number of iterations to 100:
MAX_ITER_LEN = 100

class MaxCountIter:
    def __init__(self, dataset, max_count):
        self.i = iter(dataset)
        self.left = max_count

    def __iter__(self):
        return self

    def __next__(self):
        if self.left > 0:
            self.left -= 1
            return next(self.i)
        else:
            raise StopIteration()

def _getiter(ob):
    return MaxCountIter(ob, MAX_ITER_LEN)

....

restricted_globals = {
    "_getiter_": _getiter,
    ....

for_ex = """
def sum(x):
    y = 0
    for i in range(x):
        y = y + i
    return y
"""

print(execute_user_code(for_ex, "sum", 6))
If you don't want to limit the loop count, just use the identity function as _getiter_:
restricted_globals = {
    "_getiter_": lambda x: x,
Note that simply limiting the loop count does not guarantee security. First, loops can be nested. Second, you cannot limit the execution count of a while loop. To make it secure, you have to execute unsafe code under some timeout.
Please take a moment to read the docs.
Note that not everything is documented (although many things are). You have to learn to read the project's source code for more advanced things. The best way to learn is to try to run some code, see what kind of function is missing, then look at the source code of the project to understand how to implement it.
EDIT4
There is still another problem - restricted code may have infinite loops. To avoid it, some kind of timeout is required on the code.
Unfortunately, since you are using Django, which is multi-threaded unless you explicitly specify otherwise, the simple trick for timeouts using signals will not work here; you have to use multiprocessing.
The easiest way, in my opinion, is to use this library. Simply add a decorator to execute_user_code so it will look like this:
@timeout_decorator.timeout(5, use_signals=False)
def execute_user_code(user_code, user_func, *args, **kwargs):
And you are done. The code will never run for more than 5 seconds.
Pay attention to use_signals=False; without it, you may see some unexpected behavior in Django.
Also note that this is relatively heavy on resources (and I don't really see a way to overcome this). It is not crazy heavy, but it is an extra process spawn. You should keep that in mind in your web server configuration - an API that allows executing arbitrary user code is more vulnerable to DDoS.
For sure, with Docker you can sandbox the execution if you are careful. You can restrict CPU cycles, cap memory, close all network ports, run as a user with read-only access to the filesystem, and so on.
Still, this would be extremely complex to get right, I think. In my opinion, you should not allow a client to execute arbitrary code like that.
My advice would be to check whether a production-ready solution already exists and use that. I was thinking of the sites that allow you to submit some code (Python, Java, whatever) that is executed on the server.

How to run an IPython script in Python?

I'd like to run an IPython script in Python, i.e.:
code = '''a=1
b=a+1
b
c'''

from Ipython import executor

for l in code.split("\n"):
    print(executor(l))
which would print:
None
None
2
NameError: name 'c' is not defined
Does it exist? I searched the docs, but it does not seem to be (well) documented.
In short: depending on what you want to do and how many IPython features you want to include, you will need to do more or less work.
The first thing you need to know is that IPython separates its code into blocks, and each block has its own result.
If you work with whole blocks, the following advice applies.
If you don't need any of the magic IPython provides and don't want the per-block results, then you could just use exec(compile(script, "<script>", "exec"), {}, {}).
If you want more than that, you will need to actually spawn an InteractiveShell instance, as features like %magic and %%magic need a working InteractiveShell.
In one of my projects I have this function to execute code in an InteractiveShell-instance:
https://github.com/Irrational-Encoding-Wizardry/yuuno/blob/master/yuuno_ipython/ipython/utils.py#L28
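As a rough sketch of that approach (my own code, not the linked function; based on IPython's public InteractiveShell API, which may differ slightly between versions), you can feed each block to run_cell and inspect the returned ExecutionResult:

from IPython.core.interactiveshell import InteractiveShell

shell = InteractiveShell.instance()

code = '''a=1
b=a+1
b
c'''

for line in code.split("\n"):
    outcome = shell.run_cell(line)          # each line executed as its own cell
    if outcome.error_in_exec is not None:   # runtime error, e.g. NameError for `c`
        print(repr(outcome.error_in_exec))
    else:
        print(outcome.result)               # None for statements, the value for expressions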
If you want to just get the result of each expression, then you should parse the code using the ast module and add code to return each result.
You will see this in the function linked above from line 34 onwards.
Here is the relevant excerpt:
if isinstance(expr_ast.body[-1], ast.Expr):
    last_expr = expr_ast.body[-1]
    assign = ast.Assign(  # _yuuno_exec_last_ = <LAST_EXPR>
        targets=[ast.Name(
            id=RESULT_VAR,
            ctx=ast.Store()
        )],
        value=last_expr.value
    )
    expr_ast.body[-1] = assign
else:
    assign = ast.Assign(  # _yuuno_exec_last_ = None
        targets=[ast.Name(
            id=RESULT_VAR,
            ctx=ast.Store(),
        )],
        value=ast.NameConstant(
            value=None
        )
    )
    expr_ast.body.append(assign)
ast.fix_missing_locations(expr_ast)
Doing this for every statement in the body instead of just the last one, and replacing each with some "printResult" transformation, will do the same for you.
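A minimal sketch of that idea (my own code, not from the linked project): walk the parsed module, wrap every bare expression statement in a call to a reporting helper (the _report_ name is made up), then compile and exec the transformed tree.

import ast

code = '''a=1
b=a+1
b
'''

def report(value):
    # Called for every bare expression in the user code.
    print(value)

tree = ast.parse(code)
for i, node in enumerate(tree.body):
    if isinstance(node, ast.Expr):
        # Rewrite `b` into `_report_(b)` so its value gets printed when executed.
        tree.body[i] = ast.Expr(
            value=ast.Call(
                func=ast.Name(id='_report_', ctx=ast.Load()),
                args=[node.value],
                keywords=[]
            )
        )
ast.fix_missing_locations(tree)

exec(compile(tree, '<user_code>', 'exec'), {'_report_': report})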

Instantiating a class gives dubious results the 2nd time around while looping

Edit:
Firstly, thank you @martineau and @jonrsharpe for your prompt replies.
I was initially hesitant to write a verbose description, but I now realize that I was sacrificing clarity for brevity (thanks @jonrsharpe for the link).
So here's my attempt to describe what I am up to as succinctly as possible:
I have implemented the Lempel-Ziv-Welch text file compression algorithm in the form of a Python package. Here's the link to the repository.
Basically, I have a compress class in the lzw.Compress module, which takes as input the file name (and a bunch of other parameters) and generates the compressed file, which is then decompressed by the decompress class within the lzw.Decompress module, regenerating the original file.
Now what I want to do is compress and decompress a bunch of files of various sizes stored in a directory, and save and visualize graphically the time taken for compression/decompression along with the compression ratio and other metrics. For this, I am iterating over the list of file names and passing them as parameters to instantiate the compress class, then beginning compression by calling the encode() method on it, as follows:
import os

os.chdir('/path/to/files/to/be/compressed/')

results = dict()
results['compress_time'] = []
results['other_metrics'] = []

file_path = '/path/to/files/to/be/compressed/'
comp_path = '/path/to/store/compressed/files/'
decomp_path = '/path/to/store/decompressed/file'

files = [_ for _ in os.listdir()]

for f in files:
    from lzw.Compress import compress as comp
    from lzw.Decompress import decompress as decomp

    c = comp(file_path+f, comp_path)  # passing the input file and the output path for storing the compressed file
    c.encode()
    # Then measure time required for compression using time.monotonic()
    del c
    del comp

    d = decomp('/path/to/compressed/file', decomp_path)  # decompressing
    d.decode()
    # Then measure time required for decompression using time.monotonic()

    # append metrics to lists in the results dict for this particular file
    if decompressed_file_size != original_file_size:
        print("error")
        break

    del d
    del decomp
I have run this code independently for each file without the for loop and have achieved compression and decompression successfully. So there are no problems in the files I wish to compress.
What happens is that whenever I run this loop, the first file (the first iteration) runs successfully, and then on the next iteration, after the entire process happens for the 2nd file, "error" is printed and the loop exits. I have tried reordering the list and even reversing it (in case a particular file was the problem), but to no avail.
For the second file/iteration, the decompressed file contents are dubious (they don't match the original file). Typically, the decompressed file size is nearly double that of the original.
I strongly suspect that this has something to do with the variables of the class/package retaining their state somehow across iterations of the loop. (To counter this, I am deleting both the instance and the class at the end of the loop, as shown in the snippet above, but with no success.)
I have also tried importing the classes outside the loop, but had no success.
P.S.: I am a Python newbie and don't have much expertise, so forgive me for not being "pythonic" in my exposition and for raising a rather naive issue.
Update:
Thanks to @martineau, one of the problems turned out to be the importing of global variables from another submodule.
But there was another issue, which crept in owing to my superficial knowledge of the del statement in Python 3.
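For reference, here is a minimal illustration of that first pitfall (the module name is made up): module-level globals live in the module object, which Python caches in sys.modules, so re-importing inside a loop does not reset them, and deleting the imported name does not either.

# mymodule.py (hypothetical)
state = []  # module-level state

def add(x):
    state.append(x)
    return len(state)

# elsewhere
for i in range(2):
    from mymodule import add  # the module is cached in sys.modules, so this returns the same function
    print(add(i))              # prints 1, then 2 - the module-level state persisted
    del add                    # removes only the name in this scope, not the module or its state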
I have a trie data structure in my program, which is basically similar to a binary tree.
I had a self_destruct method to delete the tree as follows:
class trie():
    def __init__(self):
        self.next = {}
        self.value = None
        self.addr = None

    def insert(self, word=str(), addr=int()):
        node = self
        for index, letter in enumerate(word):
            if letter in node.next.keys():
                node = node.next[letter]
            else:
                node.next[letter] = trie()
                node = node.next[letter]
            if index == len(word) - 1:
                node.value = word
                node.addr = addr

    def self_destruct(self):
        node = self
        if node.next == {}:
            return
        for i in node.next.keys():
            node.next[i].self_destruct()
        del node
It turns out that this C-like recursive deletion of objects makes no sense in Python: del simply removes the name's association in the namespace, while the real work is done by the garbage collector.
Still, it's kind of weird that Python retains the state/association of variables even when creating a new object (as shown in my loop snippet in the edit).
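A tiny demonstration of that del behaviour (my own example): del only unbinds a name in the current scope; the object itself survives as long as anything else still references it.

class Node:
    pass

a = Node()
b = a        # a second reference to the same object

del a        # removes only the name `a` from this namespace
print(b)     # the object is still alive and reachable through `b`

# Inside self_destruct above, `del node` likewise removes only the local name `node`;
# the child nodes stay alive because the parent's `next` dict still references them.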
So two things solved the problem. Firstly, I removed the global variables and made them local to the module where I need them (so there is no need to import them). Also, I deleted the self_destruct method of the trie and simply did del root (where root = trie()) after use.
Thanks @martineau & @jonrsharpe.

Python unit test advice

Can I get some advice on writing a unit test for the following piece of code?
%python
import sys
import json

sys.argv = []
sys.argv.append('{"product1":{"brand":"x","type":"y"}}')
sys.argv.append('{"product1":{"brand":"z","type":"a"}}')
products = sys.argv

yy = {}
my_products = []
for n, i in enumerate(products[:]):
    xx = json.loads(i)
    for j in xx.keys():
        yy["brand"] = xx[j]['brand']
        yy["type"] = xx[j]["type"]
    my_products.append(yy)
print my_products
As it stands there aren't any units to test!!!
A test might consist of:
packaging your program in a script
invoking your program from a Python unit test as a subprocess
piping the output of your command process to a buffer
asserting the buffer is what you expect it to be
While the above would technically allow you to have an automated test on your code, it comes with a lot of burden:
- multiprocessing
- weak assertions by not having types
- coarse interaction (you have to invoke a script, and can't just assert on the brand/type logic)
One way to address those issues could be to package your code into smaller units, i.e. create a method to encapsulate:
for j in xx.keys():
    yy["brand"] = xx[j]['brand']
    yy["type"] = xx[j]["type"]
my_products.append(yy)
Import it, exercise it, and assert on its output. Then there might be something to map the loading and the application of the xx.keys() loop to an array (which you could also encapsulate as a function); see the sketch below.
And then there could be a top level that takes in the args and composes the loader/mapper/transformer. And since your code will be thoroughly unit tested at that point, you may get away with not having a test for your top-level script.
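A rough sketch of what that refactoring and its unit tests could look like (the function and test names are my own invention, not a prescribed structure):

import json
import unittest

def extract_product(record):
    # Encapsulates the inner loop: pull brand/type out of one parsed JSON record.
    return [{"brand": attrs["brand"], "type": attrs["type"]}
            for attrs in record.values()]

def load_products(raw_args):
    # Encapsulates the outer loop: parse each JSON argument and flatten the results.
    products = []
    for raw in raw_args:
        products.extend(extract_product(json.loads(raw)))
    return products

class TestProducts(unittest.TestCase):
    def test_extract_product(self):
        self.assertEqual(extract_product({"product1": {"brand": "x", "type": "y"}}),
                         [{"brand": "x", "type": "y"}])

    def test_load_products(self):
        raw = ['{"product1":{"brand":"x","type":"y"}}',
               '{"product1":{"brand":"z","type":"a"}}']
        self.assertEqual(load_products(raw),
                         [{"brand": "x", "type": "y"}, {"brand": "z", "type": "a"}])

if __name__ == "__main__":
    unittest.main()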

libtorrent + python basic Functions

I am building a Python app using libtorrent, but it's hard to find good documentation for it.
How do I:
Get the total size, or an easy way to calculate it?
Get the number of files?
I also found what looks like a bug:
s = h.status()
s.progress*100
never returns 100 on completion; instead it returns something like 99.87954613...
To check whether you've completed, you can call the h.is_seed() method, which will return true if you are only seeding.
So something like
while not h.is_seed():
    pass  # keep on downloading
print "all done"
