Alternatives to using nested functions in PySpark mapPartitions when using Cython? - python

I have a row-wise operation I wish to perform on my dataframe which takes in some fixed variables as parameters. The only way I know how to do this is with the use of nested functions. I'm trying to use Cython to compile a portion of my code, then call the Cython function from within mapPartitions, but it raised the error PicklingError: Can't pickle <cyfunction outer_function.<locals>._nested_function at 0xfffffff>.
When using pure Python, I do
def outer_function(fixed_var_1, fixed_var_2):
    def _nested_function(partition):
        for row in partition:
            yield dosomething(row, fixed_var_1, fixed_var_2)
    return _nested_function

output_df = input_df.repartition(some_col).rdd \
    .mapPartitions(outer_function(a, b))
Right now I have outer_function defined in a separate file, like this
# outer_func.pyx
def outer_function(fixed_var_1, fixed_var_2):
    def _nested_function(partition):
        for row in partition:
            yield dosomething(row, fixed_var_1, fixed_var_2)
    return _nested_function
and this
# runner.py
from outer_func import outer_function

output_df = input_df.repartition(some_col).rdd \
    .mapPartitions(outer_function(a, b))
And this throws the pickling error above.
I've looked at https://docs.databricks.com/user-guide/faq/cython.html and tried to get outer_function working that way. Still, the same error occurs. The problem is that the nested function does not appear in the global namespace of the module, so it cannot be found and serialized.
I've also tried doing this
def outer_function(fixed_var_1, fixed_var_2):
    global _nested_function
    def _nested_function(partition):
        for row in partition:
            yield dosomething(row, fixed_var_1, fixed_var_2)
    return _nested_function
This throws a different error: AttributeError: 'module' object has no attribute '_nested_function'.
Is there any way of not using a nested function in this case? Or is there another way I can make the nested function "serializable"?
Thanks!
EDIT: I also tried doing
# outer_func.pyx
class PartitionFuncs:
    def __init__(self, fixed_var_1, fixed_var_2):
        self.fixed_var_1 = fixed_var_1
        self.fixed_var_2 = fixed_var_2

    def nested_func(self, partition):
        for row in partition:
            yield dosomething(row, self.fixed_var_1, self.fixed_var_2)
# main.py
from outer_func import PartitionFuncs

p_funcs = PartitionFuncs(a, b)
output_df = input_df.repartition(some_col).rdd \
    .mapPartitions(p_funcs.nested_func)
And still I get PicklingError: Can't pickle <cyfunction PartitionFuncs.nested_func at 0xfffffff>. Oh well, the idea didn't work.

This is a sort-of-half answer, because when I tried your class PartitionFuncs method, p_funcs.nested_func pickled/unpickled fine for me (I didn't try combining it with PySpark, though), so whether the solution below is necessary may depend on your Python version/platform etc. Pickle should support bound methods from Python 3.4; however, it looks like PySpark forces the pickle protocol to 3, which will stop that working. There might be ways to change this, but I don't know them.
Nested functions are known not to be pickleable, so that approach definitely won't work. The class approach is the right one.
My suggestion in the comments was to try pickling the class instance, not the bound method. For this to work, an instance of the class needs to be callable, so rename your function to __call__:
class PartitionFuncs:
    def __init__(self, fixed_var_1, fixed_var_2):
        self.fixed_var_1 = fixed_var_1
        self.fixed_var_2 = fixed_var_2

    def __call__(self, partition):
        for row in partition:
            yield dosomething(row, self.fixed_var_1, self.fixed_var_2)
This does depend on both fixed_var variables being pickleable by default. If they're not, you can write custom saving and loading methods, as described in the pickle documentation.
As you point out in your comment, this does mean you need a separate class for each function you define. Options here include inheritance, or having a separate PickleableData class that each of the Func classes holds a reference to.
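For completeness, a minimal usage sketch (my addition, assuming the callable class above is compiled in outer_func.pyx as in the question, and that a and b are themselves picklable):

# runner.py -- pass the picklable instance straight to mapPartitions
from outer_func import PartitionFuncs

output_df = input_df.repartition(some_col).rdd \
    .mapPartitions(PartitionFuncs(a, b))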

Related

Using numba to compile dynamic functions

I'm writing a program which dynamically detects and imports Python functions and detects which input parameters and outputs it will expect/generate.
Like so:
from inspect import getmembers, isfunction

def importFunctions(self, filename):
    moduleImport = __import__(filename)
    members = getmembers(moduleImport, isfunction)
    functions = []
    for m in members:
        function = getattr(moduleImport, m[0])
        number_of_inputs = function.__code__.co_argcount
        inputs = function.__code__.co_varnames
        if number_of_inputs > 1:
            inputs = inputs[0:number_of_inputs-1]
        elif number_of_inputs == 1:
            inputOne = inputs[0]
            inputs = []
            inputs.append(inputOne)
        outputs = function.__annotations__["return"]
        functions.append([function, inputs, outputs])
    return functions
This works only when I properly annotate the function, an example function could look something like this:
from numba import jit

@jit
def subtraction(a, b) -> ["difference"]:
    a = float(a)
    b = float(b)
    difference = a - b
    return (difference,)
This works perfectly fine without the decorator, but when I want to add the numba @jit decorator to a function, I get an error saying that the imported function is missing the "return" annotation.
UPDATE
Having tried to access the original function by using func.py_func as suggested by @Rutger Kassies, my suspicion is that either getmembers or getattr is not properly importing the numba to-be-compiled function.
It seems that getmembers finds "jit" as a separate member and doesn't correctly associate it with the original function. The way it's written above, the 'function' named "jit" is of type function, as it should be. However, calling it returns a "<function _jit.<locals>.wrapper>". This has me scratching my head quite a bit, but I suppose getattr is somehow behind this.
My guess is that I will have to find another approach to dynamically importing functions that doesn't rely on getattr.
If you're dealing with the numba.jit or numba.njit decorators, you can access the original function, in all its annotated glory, by accessing the .py_func attribute. A simple example:
import numpy as np
import numba
from typing import get_type_hints, Annotated, Any

custom_output_type = Annotated[Any, "something"]

@numba.njit
def func(x: float) -> custom_output_type:
    return x**2

# trigger compilation, not required
func(1.2)

get_type_hints(func.py_func, include_extras=True)
Which returns what you would expect from a regular Python function:
{'x': float, 'return': typing.Annotated[typing.Any, 'something']}
It would be similar when using the inspect module.
It gets more complicated when you use the other decorators like vectorize & guvectorize, unfortunately. See for example:
https://numba.discourse.group/t/using-annotations-with-numba-gu-vectorize-functions/1008
It's probably best to rely as much as possible on the inspect & typing modules over accessing the private attributes of a function.
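Applied to the import loop in the question, a rough sketch of my own (simplified, and assuming only jit/njit-style decorators, which expose .py_func): collect all members without filtering first, unwrap any dispatchers back to the plain annotated function, then inspect that.

from inspect import getmembers, isfunction

def importFunctions(self, filename):
    moduleImport = __import__(filename)
    functions = []
    # don't filter with isfunction up front: numba dispatchers aren't
    # plain functions, so they'd be dropped before we can unwrap them
    for name, member in getmembers(moduleImport):
        function = getattr(member, "py_func", member)
        if not isfunction(function):
            continue
        argcount = function.__code__.co_argcount
        inputs = list(function.__code__.co_varnames[:argcount])
        outputs = function.__annotations__.get("return")
        functions.append([function, inputs, outputs])
    return functions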

Inspect (duck) type of python function arguments

Given the following display function,
def display(some_object):
    print(some_object.value)
is there a way to programmatically determine that the attributes of some_object must include value?
Modern IDEs (like PyCharm) flag an error if I try to pass an int to the display function, so they are obviously doing this kind of analysis behind the scenes. I am aware of how to get the function signature; this question is only about how to get the (duck) type information, i.e. which attributes are expected for each function argument.
EDIT: In my specific use case, I have access to the source code (non-obfuscated), but I am not in control of adding the type hints, as the functions are user-defined.
Toy example
For the simple display function, the following inspection code would do,
class DuckTypeInspector:
    def __init__(self):
        self.attrs = []
    def __getattr__(self, attr):
        return self.attrs.append(attr)

dti = DuckTypeInspector()
display(dti)
print(dti.attrs)
which outputs
None       # from the print in display
['value']  # from the last print statement, this is what I am after
However, as the DuckTypeInspector always returns None, this approach won't work in general. A simple add function for example,
def add(a, b):
    return a + b

dti1 = DuckTypeInspector()
dti2 = DuckTypeInspector()
add(dti1, dti2)
would yield the following error,
TypeError: unsupported operand type(s) for +: 'DuckTypeInspector' and 'DuckTypeInspector'
The way to do this with static analysis is to declare the parameters as adhering to a protocol and then use mypy to validate that the actual parameters implement that protocol:
from typing import Protocol

class ValueProtocol(Protocol):
    value: str

class ValueThing:
    def __init__(self):
        self.value = "foo"

def display(some_object: ValueProtocol):
    print(some_object.value)

display(ValueThing())  # no errors, because ValueThing implements ValueProtocol
display("foo")  # mypy error: Argument 1 to "display" has incompatible type "str"; expected "ValueProtocol"
Doing this at runtime with mock objects is impossible to do in a generic way, because you can't be certain that the function will go through every possible code path; you would need to write a unit test with carefully constructed mock objects for each function and make sure that you maintain 100% code coverage.
Using type annotations and static analysis is much easier, because mypy (or similar tools) can check each branch of the function to make sure that the code is compatible with the declared type of the parameter, without having to generate fake values and actually execute the function against them.
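If a coarse runtime check is good enough, the same protocol can also be made isinstance-checkable (a sketch of my own; note that runtime checks only verify that the attributes exist, not their types):

from typing import Protocol, runtime_checkable

@runtime_checkable
class ValueProtocol(Protocol):
    value: str

class ValueThing:
    def __init__(self):
        self.value = "foo"

print(isinstance(ValueThing(), ValueProtocol))  # True
print(isinstance("foo", ValueProtocol))         # False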
If you want to programmatically inspect the annotations from someone else's module, you can use the magic __annotations__ attribute:
>>> display.__annotations__
{'some_object': <class '__main__.ValueProtocol'>, 'return': None}

is it possible to revert (return to old method) python monkey patch

I use a specialized Python module which modifies some of the Django class methods at runtime (aka monkey-patching). If I need these 'old' versions, is it possible to 'come back' to them, overriding the monkey-patching?
Something like importing the initial version of these classes, for example?
Here is an example of how patching was done in the package:
from django.template.base import FilterExpression

def patch_filter_expression():
    original_resolve = FilterExpression.resolve
    def resolve(self, context, ignore_failures=False):
        return original_resolve(self, context, ignore_failures=False)
    FilterExpression.resolve = resolve
It depends on what the patch did. Monkey-patching is nothing special; it's just an assignment of a different object to a name. If nothing else references the old value anymore, then it's gone from Python's memory.
But if the code that patched the name has kept a reference to the original object in the form of a different variable, then the original object is still there to be 'restored':
import target.module

_original_function = target.module.target_function

def new_function(*args, **kwargs):
    result = _original_function(*args, **kwargs)
    return result * 5

target.module.target_function = new_function
Here the name target_function in the target.module module namespace was re-bound to point to new_function, but the original object is still available as _original_function in the namespace of the patching code.
If this is done in a function, then the original could be available as a closure too. For your specific example, you can get the original with:
FilterExpression.resolve.__closure__[0].cell_contents
or, if you prefer access by name:
def closure_mapping(func):
    closures, names = func.__closure__, func.__code__.co_freevars
    return {n: c.cell_contents for n, c in zip(names, closures)}

original_resolve = closure_mapping(FilterExpression.resolve)['original_resolve']
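Having extracted it, restoring the original is then a plain reassignment:

FilterExpression.resolve = original_resolve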
Otherwise, you can tell Python to reload the original module with importlib.reload():
import importlib
import target.module

importlib.reload(target.module)
This refreshes the module namespace, 'resetting' all global names to what they'd been set to at import time (any additional names are retained).
Note, however, that any code holding a direct reference to the patched object (such as your class object), would not see the updated objects! That's because from target.module import target_function creates a new reference to the target_function object in the current namespace and no amount of reloading of the original target.module module will update any of the other direct references. You'd have to update those other references manually, or reload their namespaces too.

Is it possible to pickle Python object by reference (by name)?

I have a situation where there's a complex object that can be referenced by a unique name like package.subpackage.MYOBJECT. While it's possible to pickle this object using the standard pickle algorithm, the resulting data string will be very big.
I'm looking for some way to get the same pickling semantics for an object that already exist for classes and functions: Python's pickle just dumps their fully qualified names, not code. This way, just a string like package.subpackage.MYOBJECT would be dumped, and upon unpickling the object would be imported, just like it happens for functions or classes.
It seems that this task boils down to making the object aware of the variable name it's bound to, but I have no clue how to do it.
Here's a short example to explain myself clearly (obvious imports are skipped).
File bigpackage/bigclasses/models.py:
class SomeInterface():
    __metaclass__ = ABCMeta

    @abstractmethod
    def operation(self):
        pass

class ImplementationA(SomeInterface):
    def operation(self):
        print "ImplementationA"

class ImplementationB(SomeInterface):
    def operation(self):
        print "ImplementationB"

IMPL_A = ImplementationA()
IMPL_B = ImplementationB()
File bigpackage/bigclasses/tasks.py:
@celery.task
def background_task(impl, somearg):
    assert isinstance(impl, SomeInterface)
    impl.operation()
    print somearg
File bigpackage/bigclasses/work.py:
from bigpackage.bigclasses.models import IMPL_A, IMPL_B
from bigpackage.bigclasses.tasks import background_task
background_task.submit(IMPL_A, "arg1")
background_task.submit(IMPL_B, "arg2")
Here I have a trivial background Celery task that accepts one of two available implementations of SomeInterface as an argument. The task's arguments are pickled by Celery, passed to a queue and executed on some worker server that runs exactly the same code base. My idea is to avoid deep pickling of IMPL_A and IMPL_B and instead pass them as bigpackage.bigclasses.models.IMPL_A and bigpackage.bigclasses.models.IMPL_B respectively. That would help with performance and total traffic for the queue server, and also provide some safety against changes in IMPL_A and IMPL_B that would make them non-pickleable (for example, a lambda anywhere in the object attribute hierarchy).
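One way to get exactly these by-name semantics (a sketch of my own, not part of the original question): pickle treats a string returned from __reduce__ as the name of a module-level attribute, so only the name is stored and the object is re-imported on unpickling.

# bigpackage/bigclasses/models.py (sketch)
class ImplementationA(SomeInterface):
    def operation(self):
        print "ImplementationA"
    def __reduce__(self):
        # pickle will store just this name and look up
        # bigpackage.bigclasses.models.IMPL_A when loading
        return "IMPL_A"

IMPL_A = ImplementationA()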

Why does this not pickle functions as redefined through code? [duplicate]

I'm trying to transfer a function across a network connection (using asyncore). Is there an easy way to serialize a python function (one that, in this case at least, will have no side effects) for transfer like this?
I would ideally like to have a pair of functions similar to these:
def transmit(func):
    obj = pickle.dumps(func)
    [send obj across the network]

def receive():
    [receive obj from the network]
    func = pickle.loads(obj)
    func()
You could serialise the function bytecode and then reconstruct it on the caller. The marshal module can be used to serialise code objects, which can then be reassembled into a function, i.e.:
import marshal
def foo(x): return x*x
code_string = marshal.dumps(foo.__code__)
Then in the remote process (after transferring code_string):
import marshal, types
code = marshal.loads(code_string)
func = types.FunctionType(code, globals(), "some_func_name")
func(10) # gives 100
A few caveats:

- marshal's format (any Python bytecode, for that matter) may not be compatible between major Python versions.
- It will only work for the CPython implementation.
- If the function references globals (including imported modules, other functions etc.) that you need to pick up, you'll need to serialise these too, or recreate them on the remote side; see the sketch after this list. My example just gives it the remote process's global namespace.
- You'll probably need to do a bit more to support more complex cases, like closures or generator functions.
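To illustrate the globals caveat, a minimal sketch of my own (the names scale and FACTOR are hypothetical): marshal stores only the code object, so the receiving side must supply any globals the function needs.

import marshal, math, types

FACTOR = 2.0

def scale(x):
    return math.sqrt(x) * FACTOR

payload = marshal.dumps(scale.__code__)

# remote side: rebuild with an explicit globals dict,
# since the code object carries no globals of its own
remote_globals = {"math": math, "FACTOR": 2.0}
func = types.FunctionType(marshal.loads(payload), remote_globals, "scale")
print(func(16))  # 8.0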
Check out Dill, which extends Python's pickle library to support a greater variety of types, including functions:
>>> import dill as pickle
>>> def f(x): return x + 1
...
>>> g = pickle.dumps(f)
>>> f(1)
2
>>> pickle.loads(g)(1)
2
It also supports references to objects in the function's closure:
>>> def plusTwo(x): return f(f(x))
...
>>> pickle.loads(pickle.dumps(plusTwo))(1)
3
Pyro is able to do this for you.
Probably the simplest way is inspect.getsource(object) (see the inspect module), which returns a string with the source code of a function or a method.
It all depends on whether you generate the function at runtime or not:

If you do, inspect.getsource(object) won't work for dynamically generated functions, as it gets the object's source from a .py file, so only functions defined before execution can be retrieved as source.

And if your functions are placed in files anyway, why not give the receiver access to them and only pass around module and function names? (See the sketch below.)

The only solution for dynamically created functions that I can think of is to construct the function as a string before transmission, transmit the source, and then eval() it on the receiver side.

Edit: the marshal solution also looks pretty smart; I didn't know you could serialize something other than built-ins.
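For the "pass around module and function names" route, a minimal sketch of my own (helper names hypothetical):

import importlib

def transmit_by_name(func):
    # send only the dotted module path and the function name
    return (func.__module__, func.__name__)

def receive_by_name(module_name, func_name):
    # the receiver imports the shared module and looks the function up
    module = importlib.import_module(module_name)
    return getattr(module, func_name)

import math
assert receive_by_name(*transmit_by_name(math.sqrt)) is math.sqrt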
In modern Python you can pickle functions, and many variants. Consider this
import pickle, time
def foobar(a, b):
    print("%r %r" % (a, b))
you can pickle it
p = pickle.dumps(foobar)
q = pickle.loads(p)
q(2,3)
you can pickle closures
import functools
foobar_closed = functools.partial(foobar,'locked')
p = pickle.dumps(foobar_closed)
q = pickle.loads(p)
q(2)
even if the closure uses a local variable
def closer():
    z = time.time()
    return functools.partial(foobar, z)
p = pickle.dumps(closer())
q = pickle.loads(p)
q(2)
but if you close it using an internal function, it will fail
def builder():
    z = 'internal'
    def mypartial(b):
        return foobar(z, b)
    return mypartial
p = pickle.dumps(builder())
q = pickle.loads(p)
q(2)
with error
pickle.PicklingError: Can't pickle <function mypartial at 0x7f3b6c885a50>: it's not found as __main__.mypartial
Tested with Python 2.7 and 3.6
The cloud package (pip install cloud) can pickle arbitrary code, including dependencies. See https://stackoverflow.com/a/16891169/1264797.
code_string = '''
def foo(x):
    return x * 2

def bar(x):
    return x ** 2
'''

obj = pickle.dumps(code_string)
Now
exec(pickle.loads(obj))
foo(1)
> 2
bar(3)
> 9
Cloudpickle is probably what you are looking for.
Cloudpickle is described as follows:
cloudpickle is especially useful for cluster computing where Python code is shipped over the network to execute on remote hosts, possibly close to the data.
Usage example:
import pickle
import cloudpickle

def add_one(n):
    return n + 1

pickled_function = cloudpickle.dumps(add_one)
pickle.loads(pickled_function)(42)
You can do this:
def fn_generator():
    def fn(x, y):
        return x + y
    return fn
Now, transmit(fn_generator()) will send the actual definition of fn(x, y) instead of a reference to the module name.
You can use the same trick to send classes across network.
The basic functions of this module cover your query, plus you get the best compression over the wire; see the instructive source code:
y_serial.py module :: warehouse Python objects with SQLite
"Serialization + persistance :: in a few lines of code, compress and annotate Python objects into SQLite; then later retrieve them chronologically by keywords without any SQL. Most useful "standard" module for a database to store schema-less data."
http://yserial.sourceforge.net
Here is a helper class you can use to wrap functions in order to make them picklable. Caveats already mentioned for marshal will apply but an effort is made to use pickle whenever possible. No effort is made to preserve globals or closures across serialization.
import marshal
import pickle
import types

class PicklableFunction:
    def __init__(self, fun):
        self._fun = fun

    def __call__(self, *args, **kwargs):
        return self._fun(*args, **kwargs)

    def __getstate__(self):
        # prefer pickle (handles module-level functions by reference);
        # fall back to marshalling the raw code object
        try:
            return pickle.dumps(self._fun)
        except Exception:
            return marshal.dumps((self._fun.__code__, self._fun.__name__))

    def __setstate__(self, state):
        try:
            self._fun = pickle.loads(state)
        except Exception:
            code, name = marshal.loads(state)
            self._fun = types.FunctionType(code, {}, name)
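A quick usage sketch (my addition, assuming the imports above): a lambda fails the plain-pickle path, falls back to marshal, and still round-trips:

square = PicklableFunction(lambda x: x * x)
restored = pickle.loads(pickle.dumps(square))
print(restored(4))  # 16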
