I'm currently building quite a complex system in Python, and when I'm debugging I often put simple print statements in several scripts. To keep an overview I often also want to print out the file name and line number where the print statement is located. I can of course do that manually, or with something like this:
from inspect import currentframe, getframeinfo
print getframeinfo(currentframe()).filename + ':' + str(getframeinfo(currentframe()).lineno) + ' - ', 'what I actually want to print out here'
Which prints something like:
filenameX.py:273 - what I actually want to print out here
To make it more simple, I want to be able to do something like:
print debuginfo(), 'what I actually want to print out here'
So I put it into a function somewhere and tried doing:
from debugutil import debuginfo
print debuginfo(), 'what I actually want to print out here'
print debuginfo(), 'and something else here'
Unfortunately, I get:
debugutil.py:3 - what I actually want to print out here
debugutil.py:3 - and something else here
It prints out the file name and line number on which I defined the function, instead of the line on which I call debuginfo(). This is obvious, because the code is located in the debugutil.py file.
So my question is actually: How can I get the filename and line number from which this debuginfo() function is called?
The function inspect.stack() returns a list of frame records, starting with the caller and moving out, which you can use to get the information you want:
from inspect import getframeinfo, stack
def debuginfo(message):
    caller = getframeinfo(stack()[1][0])
    print("%s:%d - %s" % (caller.filename, caller.lineno, message))  # python3 print syntax

def grr(arg):
    debuginfo(arg)   # <-- stack()[1][0] for this line

grr("aargh")         # <-- stack()[2][0] for this line
Output:
example.py:8 - aargh
If you put your trace code in another function and call that from your main code, then you need to make sure you get the stack information from the grandparent, not the parent or the trace function itself.
Below is an example of a 3-level-deep system to further clarify what I mean. My main function calls a trace function, which calls yet another function to do the work.
######################################
import sys, os, inspect, time

time_start = 0.0   # initial start time

def trace_library_init():
    global time_start
    time_start = time.time()   # when the program started

def trace_library_do(relative_frame, msg=""):
    global time_start
    time_now = time.time()
    # relative_frame is 0 for the current function (this one),
    # 1 for the direct parent, or 2 for the grandparent...
    total_stack = inspect.stack()                   # the complete stack
    total_depth = len(total_stack)                  # length of the total stack
    frameinfo = total_stack[relative_frame][0]      # info on the relative frame
    relative_depth = total_depth - relative_frame   # length of the stack there
    # Information on the function at the relative frame number
    func_name = frameinfo.f_code.co_name
    filename = os.path.basename(frameinfo.f_code.co_filename)
    line_number = frameinfo.f_lineno                # line number of the call
    func_firstlineno = frameinfo.f_code.co_firstlineno
    fileline = "%s:%d" % (filename, line_number)
    time_diff = time_now - time_start
    print("%13.6f %-20s %-24s %s" % (time_diff, fileline, func_name, msg))

################################

def trace_do(msg=""):
    trace_library_do(1, "trace within interface function")
    trace_library_do(2, msg)
    # any common tracing stuff you might want to do...

################################

def main(argc, argv):
    rc = 0
    trace_library_init()
    for i in range(3):
        trace_do("this is at step %i" % i)
        time.sleep((i + 1) * 0.1)   # in 1/10's of a second
    return rc

rc = main(len(sys.argv), sys.argv)
sys.exit(rc)
This will print something like:
$ python test.py
     0.000005 test.py:39           trace_do                 trace within interface function
     0.001231 test.py:49           main                     this is at step 0
     0.101541 test.py:39           trace_do                 trace within interface function
     0.101900 test.py:49           main                     this is at step 1
     0.302469 test.py:39           trace_do                 trace within interface function
     0.302828 test.py:49           main                     this is at step 2
The trace_library_do() function at the top is an example of something that you can drop into a library and then call from other tracing functions. The relative frame value controls which entry in the Python stack gets printed.
I showed pulling out a few other interesting values in that function, like the line number of the start of the function, the total stack depth, and the full path to the file. I didn't show it, but the global and local variables in the function are also available through inspect, as well as the full stack trace to all the other functions below yours. There is more than enough information in what I am showing above to build hierarchical call/return timing traces. It's actually not that much further from here to creating the main parts of your own source-level debugger -- and it's all mostly just sitting there waiting to be used.
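As a quick illustration of that last point, here is a minimal sketch (not part of the library code above) showing how the frame object in each stack entry exposes the caller's variables directly:

import inspect

def show_caller_vars():
    frame = inspect.stack()[1][0]            # frame object of the caller
    print("caller locals:", frame.f_locals)
    print("caller file:", frame.f_code.co_filename)

def demo():
    x = 42
    show_caller_vars()                       # the printed locals include {'x': 42}

demo()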
I'm sure someone will object that I'm using internal fields of the data returned by the inspect structures, as there may well be accessor functions that do this same thing for you. But I found these by stepping through this type of code in a Python debugger, and they work at least here. I'm running Python 2.7.12; your results may vary if you are running a different version.
In any case, I strongly recommend that you import the inspect module into some Python code of your own and look at what it can provide you -- especially if you can single-step through your code in a good Python debugger. You will learn a lot about how Python works, and get to see both the benefits of the language and what is going on behind the curtain to make that possible.
Full source-level tracing with timestamps is a great way to enhance your understanding of what your code is doing, especially in more of a dynamic real-time environment. The great thing about this type of trace code is that once it's written, you don't need debugger support to see it.
An update to the accepted answer, using f-string interpolation and also displaying the caller's function name:
import inspect

def debuginfo(message):
    caller = inspect.getframeinfo(inspect.stack()[1][0])
    print(f"{caller.filename}:{caller.function}:{caller.lineno} - {message}")
The traceprint package can now do that for you:
import traceprint

def func():
    print('Hello from func')

func()

# File "/traceprint/examples/example.py", line 6, in <module>
# File "/traceprint/examples/example.py", line 4, in func
# Hello from func
PyCharm will automatically make the file link clickable / followable.
Install via pip install traceprint.
Just put the code you posted into a function, taking care to inspect the caller's frame (f_back) rather than the function's own frame:
from inspect import currentframe, getframeinfo

def my_custom_debuginfo(message):
    # look at the caller's frame, not this function's own frame
    caller = getframeinfo(currentframe().f_back)
    print caller.filename + ':' + str(caller.lineno) + ' - ', message
and then use it as you want:
# ... some code here ...
my_custom_debuginfo('what I actually want to print out here')
# ... more code ...
I recommend you put that function in a separate module; that way you can reuse it every time you need it.
I discovered this question while working on a somewhat related problem, but I wanted more details about the execution (and I didn't want to install an entire call-graph package).
If you want more detailed information, you can retrieve a full traceback with the standard library module traceback: either stash the stack object (a list of tuples) with traceback.extract_stack(), or print it out with traceback.print_stack(). This was more suitable for my needs; I hope it helps someone else!
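A minimal sketch of both calls (assuming Python 3, where extract_stack() returns FrameSummary objects rather than plain tuples):

import traceback

def where_am_i():
    stack = traceback.extract_stack()   # summary of the current call stack
    caller = stack[-2]                  # [-1] is this function, [-2] is its caller
    print("%s:%d in %s" % (caller.filename, caller.lineno, caller.name))
    traceback.print_stack()             # or dump the whole stack to stderr

where_am_i()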
I am working on a Django-based web app that takes a Python file as input, which contains some function. Then in the backend I have some lists that are passed as parameters to the user's function, which will generate a single-value output. The generated result will be used for some further computation.
Here is how the function inside the user's file looks:
def somefunctionname(list):
    ''' some computation performed on the list '''
    return float_value   # placeholder: the function returns a single float
At present, the approach I am using is to take the user's file as a normal file input. Then in my views.py I execute the file as a module and pass the parameters with the eval function. A snippet is given below.
Here modulename is the name of the Python file that I took from the user and imported as a module:
exec("import "+modulename)
result = eval(f"{modulename}.{somefunctionname}(arguments)")
This is working absolutely fine, but I know it is not a secure approach.
My question: is there any other way to run the user's file securely? I know the proposed solutions can't be foolproof, but what are the other ways to do this (for example, if it can be solved with dockerization, what would the approach be, or are there external tools I could use with an API)?
Or, if possible, can somebody tell me how to simply sandbox this, or point me to any tutorial that can help?
Any reference or resource will be helpful.
It is an important question. In Python, sandboxing is not trivial.
It is one of the few cases where it matters which Python interpreter you are using. For example, Jython generates Java bytecode, and the JVM has its own mechanism for running code securely.
For CPython, the default interpreter, there were originally some attempts to make a restricted execution mode, but they were abandoned a long time ago.
Currently, there is an unofficial project, RestrictedPython, that might give you what you need. It is not a full sandbox, i.e. it will not give you restricted filesystem access or anything like that, but for your needs it may be just enough.
Basically, the project rewrites the Python compilation step in a more restricted way.
What it allows you to do is to compile a piece of code and then execute it, all in a restricted mode. For example:
from RestrictedPython import safe_builtins, compile_restricted

source_code = """
print('Hello world, but secure')
"""

byte_code = compile_restricted(
    source_code,
    filename='<string>',
    mode='exec'
)

exec(byte_code, {'__builtins__': safe_builtins})

>>> Hello world, but secure
Running with __builtins__ set to safe_builtins disables dangerous functions like opening files, importing modules, and so on. There are also other variations of builtins and other options; take some time to read the docs, they are pretty good.
EDIT:
Here is an example for your use case:
from RestrictedPython import safe_builtins, compile_restricted
from RestrictedPython.Eval import default_guarded_getitem

def execute_user_code(user_code, user_func, *args, **kwargs):
    """ Execute user code in a restricted environment.
    Args:
        user_code(str) - String containing the unsafe code
        user_func(str) - Function inside user_code to execute and return value
        *args, **kwargs - arguments passed to the user function
    Return:
        Return value of user_func
    """
    def _apply(f, *a, **kw):
        return f(*a, **kw)

    try:
        # These are the variables we allow the user code to see. result will contain the return value.
        restricted_locals = {
            "result": None,
            "args": args,
            "kwargs": kwargs,
        }
        # If you want the user to be able to use some of your functions inside their code,
        # you should add those functions to this dictionary.
        # By default many standard actions are disabled. Here I add _apply_ to be able to access
        # args and kwargs, and _getitem_ to be able to use arrays. Just think before you add
        # something else. I am not saying you shouldn't do it; you should just understand what
        # you are doing, that's all.
        restricted_globals = {
            "__builtins__": safe_builtins,
            "_getitem_": default_guarded_getitem,
            "_apply_": _apply,
        }
        # Add another line to the user code that executes user_func
        user_code += "\nresult = {0}(*args, **kwargs)".format(user_func)
        # Compile the user code
        byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec")
        # Run it
        exec(byte_code, restricted_globals, restricted_locals)
        # The user code has modified result inside restricted_locals. Return it.
        return restricted_locals["result"]
    except SyntaxError as e:
        # Do whatever you want if the user's code does not compile
        raise
    except Exception as e:
        # The code did something that is not allowed. Add some nasty punishment for the user here.
        raise
Now you have a function execute_user_code that receives some unsafe code as a string, the name of a function from this code, and arguments, and returns the return value of that function with the given arguments.
Here is a very stupid example of some user code:
example = """
def test(x, name="Johny"):
return name + " likes " + str(x*x)
"""
# Lets see how this works
print(execute_user_code(example, "test", 5))
# Result: Johny likes 25
But here is what happens when the user code tries to do something unsafe:
malicious_example = """
import sys
print("Now I have the access to your system, muhahahaha")
"""
# Lets see how this works
print(execute_user_code(malicious_example, "test", 5))
# Result - evil plan failed:
# Traceback (most recent call last):
# File "restr.py", line 69, in <module>
# print(execute_user_code(malitious_example, "test", 5))
# File "restr.py", line 45, in execute_user_code
# exec(byte_code, restricted_globals, restricted_locals)
# File "<user_code>", line 2, in <module>
#ImportError: __import__ not found
Possible extension:
Note that the user code is compiled on each call to the function. However, you may want to compile the user code once and then execute it with different parameters. All you have to do is save the byte_code somewhere and then call exec with a different set of restricted_locals each time.
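A minimal sketch of that idea, reusing the user_code, user_func, and restricted_globals names from execute_user_code above:

# Compile once...
user_code += "\nresult = {0}(*args, **kwargs)".format(user_func)
byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec")

# ...then execute it many times with different arguments
for args in [(1,), (2,), (3,)]:
    restricted_locals = {"result": None, "args": args, "kwargs": {}}
    exec(byte_code, restricted_globals, restricted_locals)
    print(restricted_locals["result"])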
EDIT2:
If you want to allow imports, you can write your own import function that only allows the modules you consider safe. Example:
def _import(name, globals=None, locals=None, fromlist=(), level=0):
    safe_modules = ["math"]
    if name in safe_modules:
        globals[name] = __import__(name, globals, locals, fromlist, level)
    else:
        raise Exception("Don't you even think about it {0}".format(name))

safe_builtins['__import__'] = _import   # Must be part of builtins
restricted_globals = {
    "__builtins__": safe_builtins,
    "_getitem_": default_guarded_getitem,
    "_apply_": _apply,
}
....
i_example = """
import math
def myceil(x):
return math.ceil(x)
"""
print(execute_user_code(i_example, "myceil", 1.5))
Note that this sample import function is VERY primitive; it will not work with things like from x import y. You can look here for a more complex implementation.
EDIT3
Note that a lot of Python's built-in functionality is not available out of the box in RestrictedPython. That does not mean it is not available at all; you may need to implement some function for it to become available.
Even some seemingly obvious things, like sum or the += operator, do not work out of the box in the restricted environment.
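For instance, augmented assignment (+=) is rewritten by RestrictedPython to go through an _inplacevar_ guard that you supply in the globals. A permissive sketch (check the docs of your RestrictedPython version for the exact guard names):

import operator

def _inplacevar_(op, x, y):
    # allow only basic arithmetic in-place operators
    ops = {"+=": operator.iadd, "-=": operator.isub, "*=": operator.imul}
    if op in ops:
        return ops[op](x, y)
    raise NotImplementedError("in-place operator %s is not allowed" % op)

restricted_globals["_inplacevar_"] = _inplacevar_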
For example, the for loop uses a _getiter_ function that you must implement and provide yourself (in the globals). Since you want to avoid infinite loops, you may want to put some limit on the number of iterations allowed. Here is a sample implementation that limits the number of iterations to 100:
MAX_ITER_LEN = 100

class MaxCountIter:
    def __init__(self, dataset, max_count):
        self.i = iter(dataset)
        self.left = max_count

    def __iter__(self):
        return self

    def __next__(self):
        if self.left > 0:
            self.left -= 1
            return next(self.i)
        else:
            raise StopIteration()

def _getiter(ob):
    return MaxCountIter(ob, MAX_ITER_LEN)
....
restricted_globals = {
    "_getiter_": _getiter,
....
for_ex = """
def sum(x):
y = 0
for i in range(x):
y = y + i
return y
"""
print(execute_user_code(for_ex, "sum", 6))
If you don't want to limit the loop count, just use the identity function as _getiter_:

restricted_globals = {
    "_getiter_": lambda x: x,
Note that simply limiting the loop count does not guarantee security. First, loops can be nested. Second, you cannot limit the execution count of a while loop. To make it secure, you have to execute unsafe code under some timeout.
Please take a moment to read the docs.
Note that not everything is documented (although many things are); you have to learn to read the project's source code for the more advanced things. The best way to learn is to try to run some code, see what kind of function is missing, and then look at the project's source code to understand how to implement it.
EDIT4
There is still another problem: restricted code may contain infinite loops. To avoid them, some kind of timeout is required on the code.
Unfortunately, since you are using Django, which is multi-threaded unless you explicitly specify otherwise, the simple trick for timeouts using signals will not work here; you have to use multiprocessing.
The easiest way, in my opinion, is to use this library. Simply add a decorator to execute_user_code so that it looks like this:
@timeout_decorator.timeout(5, use_signals=False)
def execute_user_code(user_code, user_func, *args, **kwargs):
And you are done. The code will never run for more than 5 seconds.
Pay attention to use_signals=False; without it, the timeout may have some unexpected behavior in Django.
Also note that this is relatively heavy on resources (and I don't really see a way to overcome this). I mean, not crazy heavy, but it is an extra process spawn. You should keep that in mind in your web server configuration: an API which allows the execution of arbitrary user code is more vulnerable to DDoS.
For sure, with Docker you can sandbox the execution if you are careful. You can restrict CPU cycles and max memory, close all network ports, run as a user with read-only access to the file system, and so on.
Still, this would be extremely complex to get right, I think. In my opinion you should not allow a client to execute arbitrary code like that.
I would check whether a production-ready solution doesn't already exist, and use that. I was thinking that some sites allow you to submit code (Python, Java, whatever) that is executed on the server.
I raised a feature request on the CDK GitHub account recently and was pointed in the direction of core.Token as being pretty much the exact functionality I was looking for. I'm now having some issues implementing it and getting similar errors; here's the feature request I raised previously: https://github.com/aws/aws-cdk/issues/3800
So my current code looks something like this:
fargate_service = ecs_patterns.LoadBalancedFargateService(
    self, "Fargate",
    cluster=cluster,
    memory_limit_mib=core.Token.as_number(ssm.StringParameter.value_from_lookup(
        self, parameter_name='template-service-memory_limit')),
    execution_role=fargate_iam_role,
    container_port=core.Token.as_number(ssm.StringParameter.value_from_lookup(
        self, parameter_name='port')),
    cpu=core.Token.as_number(ssm.StringParameter.value_from_lookup(
        self, parameter_name='template-service-container_cpu')),
    image=ecs.ContainerImage.from_registry(ecrRepo)
)
When I try to synthesise this code I get the following error:
jsii.errors.JavaScriptError:
Error: Resolution error: Supplied properties not correct for "CfnSecurityGroupEgressProps"
fromPort: "dummy-value-for-template-service-container_port" should be a number
toPort: "dummy-value-for-template-service-container_port" should be a number.
Object creation stack:
To me it seems to be getting past the validation requiring a number to be passed into the FargateService validation, but when it tries to create the resources after that ("CfnSecurityGroupEgressProps") it can't resolve the dummy string as a number. I'd appreciate any help solving this, or alternative suggestions for passing in values from AWS Systems Manager parameters instead (I thought it might be possible to parse the values in here via a file pulled from S3 during the build pipeline, or something along those lines, but that seems hacky).
With some help I think we've cracked this!
The problem was that I was passing in "ssm.StringParameter.value_from_lookup"; the solution is to provide the token with "ssm.StringParameter.value_for_string_parameter". When this is synthesised it stores the token, and then upon deployment the value stored in Systems Manager Parameter Store is substituted.
(We also came up with another approach for achieving similar which we're probably going to use over SSM approach, I've detailed below the code snippet if you're interested)
See the complete code below:
from aws_cdk import (
    aws_ec2 as ec2,
    aws_ssm as ssm,
    aws_iam as iam,
    aws_ecs as ecs,
    aws_ecs_patterns as ecs_patterns,
    core,
)

class GenericFargateService(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        containerPort = core.Token.as_number(ssm.StringParameter.value_for_string_parameter(
            self, 'template-service-container_port'))

        vpc = ec2.Vpc(
            self, "cdk-test-vpc",
            max_azs=2
        )

        cluster = ecs.Cluster(
            self, 'cluster',
            vpc=vpc
        )

        fargate_iam_role = iam.Role(
            self, "execution_role",
            assumed_by=iam.ServicePrincipal("ecs-tasks"),
            managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name(
                "AmazonEC2ContainerRegistryFullAccess")]
        )

        fargate_service = ecs_patterns.LoadBalancedFargateService(
            self, "Fargate",
            cluster=cluster,
            memory_limit_mib=1024,
            execution_role=fargate_iam_role,
            container_port=containerPort,
            cpu=512,
            image=ecs.ContainerImage.from_registry(
                "000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr")
        )

        fargate_service.target_group.configure_health_check(
            path=self.node.try_get_context("health_check_path"), port="9000")

app = core.App()
GenericFargateService(app, "generic-fargate-service",
                      env={'account': '000000000000', 'region': 'eu-west-1'})
app.synth()
Solutions to problems are like buses: apparently you spend ages waiting for one, and then two arrive together. And I think this new bus is the option we're probably going to run with.
The plan is to have developers provide an override for the cdk.json file within their code repos, which can then be parsed into the CDK pipeline where the generic code will be synthesised. This file will contain some "context"; the context will then be used within the CDK to set our variables for the LoadBalancedFargate service.
I've included below some code snippets for setting the cdk.json file and then using its values within the code.
Example cdk.json:
{
    "app": "python3 app.py",
    "context": {
        "container_name": "template-service",
        "memory_limit": 1024,
        "container_cpu": 512,
        "health_check_path": "/gb/template/v1/status",
        "ecr_repo": "000000000000.dkr.ecr.eu-west-1.amazonaws.com/template-service-ecr"
    }
}
Python example for assigning context to variables:
memoryLimitMib = self.node.try_get_context("memory_limit")
I believe we could also assign some default values if they are not provided by the developer in their cdk.json file, since try_get_context simply returns None for missing keys.
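A minimal sketch of that fallback; because try_get_context returns None for a missing key, a simple 'or' is enough (the default values here are hypothetical):

# fall back to defaults when the developer's cdk.json provides no override
memoryLimitMib = self.node.try_get_context("memory_limit") or 1024
containerCpu = self.node.try_get_context("container_cpu") or 512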
I hope this post has provided some useful information to those looking for ways to create a generic template for deploying CDK code! I don't know if we're doing the right thing here, but this tool is so new that it feels like some common patterns don't exist yet.
I have written a simple MapReduce flow to read in lines from a CSV from a file on Google Cloud Storage and subsequently make an Entity. However, I can't seem to get it to run on more than one shard.
The code makes use of mapreduce.control.start_map and looks something like this.
class LoadEntitiesPipeline(webapp2.RequestHandler):
    id = control.start_map(map_name,
                           handler_spec="backend.line_processor",
                           reader_spec="mapreduce.input_readers.FileInputReader",
                           queue_name=get_queue_name("q-1"),
                           shard_count=shard_count,
                           mapper_parameters={
                               'shard_count': shard_count,
                               'batch_size': 50,
                               'processing_rate': 1000000,
                               'files': [gsfile],
                               'format': 'lines'})
I have shard_count in both places because I'm not sure which methods actually need it. Setting shard_count anywhere from 8 to 32 doesn't change anything: the status page always says 1/1 shards running. To separate things, I've made everything run on a backend queue with a large number of instances. I've tried adjusting the queue parameters per this wiki. In the end, it seems to just run serially.
Any ideas? Thanks!
Update (Still no success):
In trying to isolate things, I tried making direct calls to the pipeline, like so:
class ImportHandler(webapp2.RequestHandler):
    def get(self, gsfile):
        pipeline = LoadEntitiesPipeline2(gsfile)
        pipeline.start(queue_name=get_queue_name("q-1"))
        self.redirect(pipeline.base_path + "/status?root=" + pipeline.pipeline_id)

class LoadEntitiesPipeline2(base_handler.PipelineBase):
    def run(self, gsfile):
        yield mapreduce_pipeline.MapperPipeline(
            'loadentities2_' + gsfile,
            'backend.line_processor',
            'mapreduce.input_readers.FileInputReader',
            params={'files': [gsfile], 'format': 'lines'},
            shards=32
        )
With this new code, it still only runs on one shard. I'm starting to wonder if mapreduce.input_readers.FileInputReader is capable of parallelizing input by line.
It looks like FileInputReader can only shard by file. The format parameter only changes the way the mapper function gets called. If you pass more than one file to the mapper, it will start to run on more than one shard; otherwise it will only use one shard to process the data.
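For illustration, here is a sketch of the original call with the input pre-split into two files (gsfile_part1 and gsfile_part2 are hypothetical names for the split pieces):

id = control.start_map(map_name,
                       handler_spec="backend.line_processor",
                       reader_spec="mapreduce.input_readers.FileInputReader",
                       queue_name=get_queue_name("q-1"),
                       shard_count=shard_count,
                       mapper_parameters={
                           'batch_size': 50,
                           'files': [gsfile_part1, gsfile_part2],   # one file can go to each shard
                           'format': 'lines'})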
EDIT #1:
After digging deeper into the mapreduce library: MapReduce decides whether or not to split a file into pieces based on what the can_split method returns for each file type it defines. Currently, the only format which implements the split method is ZipFormat. So, if your file format is not zip, the file won't be split to run on more than one shard.
@classmethod
def can_split(cls):
    """Indicates whether this format support splitting within a file boundary.

    Returns:
        True if a FileFormat allows its inputs to be splitted into
        different shards.
    """
https://code.google.com/p/appengine-mapreduce/source/browse/trunk/python/src/mapreduce/file_formats.py
But it looks like it is possible to write your own file-format split method. You can try hacking a split method onto _TextFormat first and see if more than one shard runs.
@classmethod
def split(cls, desired_size, start_index, opened_file, cache):
    pass
EDIT #2:
An easy workaround would be to leave the FileInputReader running serially but move the time-consuming task to the parallel reduce stage:
def line_processor(line):
    # serial
    yield (random.randrange(1000), line)

def reducer(key, values):
    # parallel
    entities = []
    for v in values:
        entities.append(CREATE_ENTITY_FROM_VALUE(v))
    db.put(entities)
EDIT #3:
If you try to modify the FileFormat, here is an example (it hasn't been tested yet):
from file_formats import _TextFormat, FORMATS

class _LinesSplitFormat(_TextFormat):
    """Read file line by line."""

    NAME = 'split_lines'

    def get_next(self):
        """Inherited."""
        index = self.get_index()
        cache = self.get_cache()
        offset = sum(cache['infolist'][:index])

        self.get_current_file().seek(offset)
        result = self.get_current_file().readline()
        if not result:
            raise EOFError()
        if 'encoding' in self._kwargs:
            result = result.encode(self._kwargs['encoding'])
        return result

    @classmethod
    def can_split(cls):
        """Inherited."""
        return True

    @classmethod
    def split(cls, desired_size, start_index, opened_file, cache):
        """Inherited."""
        if 'infolist' in cache:
            infolist = cache['infolist']
        else:
            infolist = []
            for i in opened_file:
                infolist.append(len(i))
            cache['infolist'] = infolist

        index = start_index
        while desired_size > 0 and index < len(infolist):
            desired_size -= infolist[index]
            index += 1
        return desired_size, index

FORMATS['split_lines'] = _LinesSplitFormat
Then the new file format can be used by changing the mapper_parameters format from 'lines' to 'split_lines':
class LoadEntitiesPipeline(webapp2.RequestHandler):
    id = control.start_map(map_name,
                           handler_spec="backend.line_processor",
                           reader_spec="mapreduce.input_readers.FileInputReader",
                           queue_name=get_queue_name("q-1"),
                           shard_count=shard_count,
                           mapper_parameters={
                               'shard_count': shard_count,
                               'batch_size': 50,
                               'processing_rate': 1000000,
                               'files': [gsfile],
                               'format': 'split_lines'})
It looks to me like FileInputReader should be capable of sharding based on a quick reading of:
https://code.google.com/p/appengine-mapreduce/source/browse/trunk/python/src/mapreduce/input_readers.py
It looks like 'format': 'lines' should split using: self.get_current_file().readline()
Does it seem to be interpreting the lines correctly when it is working serially? Maybe the line breaks are the wrong encoding or something.
From experience, FileInputReader will do a maximum of one shard per file.
Solution: split your big files. I use split_file in https://github.com/johnwlockwood/karl_data to shard files before uploading them to Cloud Storage.
If the big files are already up there, you can use a Compute Engine instance to pull them down and do the sharding, because the transfer speed will be fastest.
FYI: karld is in the cheeseshop, so you can pip install karld.
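If you'd rather not add a dependency, here is a minimal stdlib sketch of the same idea (this is not karld's API, just a hypothetical helper):

import itertools

def split_csv(path, lines_per_shard=100000):
    # write consecutive chunks of the input file into numbered shard files
    with open(path) as f:
        for shard_no in itertools.count():
            chunk = list(itertools.islice(f, lines_per_shard))
            if not chunk:
                break
            with open("%s.shard%03d" % (path, shard_no), "w") as out:
                out.writelines(chunk)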