Using dask for loop parallelization in nested loops - python

I am just learning to use dask and read many threads on this forum related to Dask and for loops. But I am still unclear how to apply those solutions to my problem. I am working with climate data that are functions of (time, depth, location). The 'location' coordinate is a linear index such that each value corresponds to a unique (longitude, latitude). I am showing below a basic skeleton of what I am trying to do, assuming var1 and var2 are two input variables. I want to parallelize over the location parameter 'nxy', as my calculations can proceed simultaneously at different locations.
for loc in range(0, nxy):      # nxy = total no. of locations
    for it in range(0, ntimes):
        out1 = expression1 involving ( var1(loc), var2(it,loc) )
        out2 = expression2 involving ( var1(loc), var2(it,loc) )
        # <a dozen more output variables>
My questions:
(i) Many examples illustrating the use of 'delayed' show something like "delayed(function)(arg)". In my case, I don't have too many (if any) functions, but lots of expressions. If 'delayed' only operates at the level of functions, should I convert each expression into a function and add a 'delayed' in front?
(ii) Should I wrap the entire for loop shown above inside a function and then call that function using 'delayed'? I tried doing something like this but might not be doing it correctly as I did not get any speed-up compared to without using dask. Here's what I did:
def test_dask(n):
    for loc in range(0, n):
        # same code as before
    return var1  # just returning one variable for now

var1 = delayed(test_dask)(nxy)
var1.compute()
Thanks for your help.

Every delayed task adds about 1ms of overhead. So if your expression is slow (maybe you're calling out to some other expensive function), then yes dask.delayed might be a good fit. If not, then you should probably look elsewhere.
In particular, it looks like you're just iterating through a couple arrays and operating element by element. Please be warned that Python is very slow at this. You might want to not use Dask at all, but instead try one of the following approaches:
Find some clever way to rewrite your computation with Numpy expressions
Use Numba
Also, given the terms you're using, like lat/lon/depth, Xarray may be a good project for you.
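To make the "rewrite with NumPy expressions" suggestion concrete, here is a toy sketch; the array shapes and the two expressions are invented purely for illustration:

import numpy as np

nxy, ntimes = 1000, 365
var1 = np.random.rand(nxy)            # per-location values, shape (nxy,)
var2 = np.random.rand(ntimes, nxy)    # per-(time, location) values, shape (ntimes, nxy)

# Broadcasting evaluates each expression for every (time, location) pair at
# once, moving both Python-level loops into C inside NumPy.
out1 = var1 * var2          # stands in for "expression1"
out2 = var1 + var2 ** 2     # stands in for "expression2"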

Should the DataFrame function groupBy be avoided?

This link and others tell me that the Spark groupByKey is not to be used if there is a large number of keys, since Spark shuffles all the keys around. Does the same apply to the groupBy function as well? Or is this something different?
I'm asking this because I want to do what this question tries to do, but I have a very large number of keys. It should be possible to do this without shuffling all the data around by reducing on each node locally, but I can't find the PySpark way to do this (frankly, I find the documentation quite lacking).
Essentially, what I am trying to do is:
# Non-working pseudocode
df.groupBy("A").reduce(lambda x,y: if (x.TotalValue > y.TotalValue) x else y)
However, the dataframe API does not offer a "reduce" option. I'm probably misunderstanding what exactly dataframe is trying to achieve.
A DataFrame groupBy followed by an agg will not move the data around unnecessarily; see here for a good example. Hence, there is no need to avoid it.
When using the RDD API, the opposite is true. Here it is preferable to avoid groupByKey and use reduceByKey or combineByKey where possible. Some situations, however, do require one to use groupByKey.
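For illustration, a minimal RDD sketch with made-up data (the max-per-key logic mirrors the pseudocode in the question):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()

# reduceByKey combines values per key on each partition before shuffling,
# whereas groupByKey ships every record for a key across the network.
rdd = sc.parallelize([("A", 10), ("A", 25), ("B", 7)])
max_per_key = rdd.reduceByKey(lambda x, y: x if x > y else y)
print(max_per_key.collect())   # e.g. [('A', 25), ('B', 7)]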
The normal way to do this type of operation with the DataFrame API is to use groupBy followed by an aggregation using agg. In your example, you want to find the maximum value of a single column for each group, which can be achieved with the max function:
from pyspark.sql import functions as F
joined_df.groupBy("A").agg(F.max("TotalValue").alias("MaxValue"))
In addition to max there are a multitude of functions that can be used in combination with agg, see here for all operations.
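As a hedged illustration (reusing the column names from the example above), several aggregations can be combined in a single agg call:

joined_df.groupBy("A").agg(
    F.max("TotalValue").alias("MaxValue"),
    F.avg("TotalValue").alias("AvgValue"),
    F.count("TotalValue").alias("RowCount"),
)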
The documentation is pretty all over the place.
There has been a lot of optimization work for DataFrames. DataFrames have additional information about the structure of your data, which helps with this. I often find that many people recommend DataFrames over RDDs due to this "increased optimization."
There is a lot of heavy wizardry behind the scenes.
I recommend that you try "groupBy" on both RDDs and DataFrames with large datasets and compare the results. Sometimes you just have to test it yourself.
Also, for performance improvements, I suggest fiddling (through trial and error) with:
the Spark configurations (Doc)
shuffle.partitions (Doc)

How to quickly rename all variables in a formula with Z3 (python API)

I am looking for a way to rename all variables in a formula according to a given substitution map. I am currently using the substitute function, but it seems to be quite slow.
Is there another function I can use that is faster? Is there any other way of doing this quickly?
N.B. I am only substituting fresh variables to the variables in the original formula, so there are no renaming clashes. Is there any way to perform the renaming faster under this assumption?
For instance,
# given
f = And(Int('x') > Int('y'), Or(Int('x') - 5 >= Int('z'), Int('k') > 1))
# expected result after substitution
# f = And(Int('v0') > Int('v1'), Or(Int('v0') - 5 >= Int('v2'), Int('v3') > 1))
Is there any way to do it working on the context of f?
There isn't an inherently faster way over the API. I have a few comments regarding speed:
You seem to be using the Python API, which by itself has a huge overhead. It may help to time the portion spent in Python separately from the time spent inside Z3.
The implementation of the substitute function uses a class that gets allocated on the stack. It is quite possible that making this class a persisted attribute on the context would speed up amortized time, because it would not be allocating and re-allocating memory repeatedly. I would have to profile an instance to tell whether this change really pays off.
A more fundamental way to perform renaming is to work with implicit renaming: not applying substitution at all, but accessing variables at different offsets. This low-level way of dereferencing variables is not available over the API, or even in the way we represent high-level expressions, so it is not going to be an option.
If your application allows it, you may be able to work with existing terms and encode substitutions implicitly. For example in some applications one can just add equality constraints between old and new variables.
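For reference, a minimal sketch of the batched renaming using the existing substitute API (the fresh names v0..v3 are just illustrative):

from z3 import Int, And, Or, substitute

x, y, z, k = Int('x'), Int('y'), Int('z'), Int('k')
f = And(x > y, Or(x - 5 >= z, k > 1))

v0, v1, v2, v3 = Int('v0'), Int('v1'), Int('v2'), Int('v3')

# Passing all (old, new) pairs to one substitute() call traverses the formula
# once, rather than once per renamed variable.
g = substitute(f, (x, v0), (y, v1), (z, v2), (k, v3))
print(g)   # And(v0 > v1, Or(v0 - 5 >= v2, v3 > 1))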

How to improve Python code speed

I was solving this Python challenge http://coj.uci.cu/24h/problem.xhtml?abb=2634 and this is my answer:
c = int(input())
l = []
for j in range(c):
    i = raw_input().split()[1].split('/')
    l.append(int(i[1]))
for e in range(1, 13):
    print e, l.count(e)
But it was not the fastest Python solution, so I tried to find out how to improve the speed and found that xrange was faster than range. But when I tried the following code, it was actually slower:
c = int(input())
l = []
for j in xrange(c):
    i = raw_input().split()[1].split('/')[1]
    l.append(i)
for e in xrange(1, 13):
    print e, l.count(`e`)
So I have two questions:
How can I improve the speed of my script?
Where can I find information on how to improve Python speed?
When I was looking for this info I found sites like this one https://wiki.python.org/moin/PythonSpeed/PerformanceTips but it doesn't specify, for example, whether it is faster or slower to split a string multiple times in a single line or across multiple lines, for example using part of the script mentioned above:
i = raw_input().split()[1].split('/')[1]
vs
i = raw_input().split()
i = i[1].split('/')
i = i[1]
Edit: I have tried all your suggestions, but my first answer is still the fastest and I don't know why. My first answer was 151 ms, @Bakuriu's answer was 197 ms, and my answer using collections.Counter was 188 ms.
Edit 2: Please disregard my last edit. I just found out that the method for checking code performance on the site mentioned above does not work: if you upload the same code multiple times, the reported performance is different each time, sometimes slower and sometimes faster.
Assuming you are using CPython, the golden rule is to push as much work as possible into built-in functions, since these are written in C and thus avoid the interpreter overhead.
This means that you should:
Avoid explicit loops when there is a function/method that already does what you want
Avoid expensive lookups in inner loops. In rare circumstances you may go as far as using local variables to cache built-in or attribute lookups (see the sketch after this list).
Use the right data structures. Don't simply use lists and dicts. The standard library contains other data types, and there are many libraries out there. Consider which operations need to be efficient to solve your problem and choose the correct data structure.
Avoid meta-programming. If you need speed you don't want a simple attribute lookup to trigger 10 method calls with complex logic behind the scenes. (However where you don't really need speed metaprogramming is really cool!)
Profile your code to find the bottleneck and optimize the bottleneck. Often what we think about performance of some concrete code is completely wrong.
Use the dis module to disassemble the bytecode. This gives you a simple way to see what the interpreter will really do. If you really want to know how the interpreter works you should try to read the source for PyEval_EvalFrameEx which contains the mainloop of the interpreter (beware: hic sunt leones!).
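A small sketch of the "cache the lookup in a local variable" point from the list above (a CPython-specific micro-optimization, only worth it in genuinely hot loops):

def squares(n):
    result = []
    append = result.append   # bind the method once, outside the loop
    for i in range(n):
        append(i * i)        # no attribute lookup on each iteration
    return result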
Regarding CPython, you should read An Optimization Anecdote by Guido van Rossum. It gives many insights into how performance can change with various solutions. Another example is this answer (disclaimer: it's mine), where the fastest solution is probably very counter-intuitive for someone not used to CPython's workings.
Another good thing to do is to study the most-used built-in and stdlib data types, since each one has both positive and negative properties. In this specific case, calling list.count() is a heavy operation, since it has to scan the whole list every time it is performed. That's probably where a lot of the time is consumed in your solution.
One way to minimize interpreter overhead is to use collections.Counter, which also avoids scanning the data multiple times:
from collections import Counter

counts = Counter(raw_input().split('/')[-2] for _ in range(int(raw_input())))
for i in range(1, 13):
    print i, counts[str(i)]
Note that there is no need to convert the month to an integer, so you can avoid those function calls (assuming the months are always written in the same way. No 07 and 7).
Also, I don't understand why you are splitting on whitespace and then on the / when you can simply split on the / and take the second-to-last element of the list.
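Incidentally, the timeit module is a quick way to measure micro-questions like the one-line vs. multi-line split in the post; a rough sketch with a made-up input line (absolute numbers vary by machine):

import timeit

setup = "s = 'John 15/07/1990'"   # hypothetical input line
one_line = "i = s.split()[1].split('/')[1]"
multi_line = "i = s.split()\ni = i[1].split('/')\ni = i[1]"

print(timeit.timeit(one_line, setup=setup))
print(timeit.timeit(multi_line, setup=setup))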
Another (important) optimization could be to read all of stdin at once to avoid multiple IO calls; however, this may not work in this situation, since the fact that they tell you how many employees there are probably means that they do not send an EOF.
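A sketch of that idea, in case it does apply here (read everything up front, then process in memory):

import sys

data = sys.stdin.read().splitlines()
n = int(data[0])
months = [line.split('/')[-2] for line in data[1:n + 1]]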
Note that different versions of Python have completely different ways of optimizing code. For example, PyPy's JIT works best when you perform simple operations in loops that the JIT is able to analyze and optimize, so it's about the opposite of what you would do in CPython.

Python Style: nested vs extra function

I'm quite new to Python (2.7) and have a question about the most Pythonic way to do something; my code (part of a class) looks like this (a somewhat naive version):
def calc_pump_height(self):
    for i in range(len(self.primary_)):
        for j in range(len(self.primary_)):
            if self.connections_[i][j].sub_kind_ in [1, 4]:
                self.calc_spec_pump_height(i, j)

def calc_spec_pump_height(self, i, j):
    pass
(obviously pass will be replaced by something else, manipulating attributes of the object of this class, without generating a return value)
I'd like to ask how I should do this: I could avoid the second function and write the extra code directly into the first function, getting rid of one function (Simple is better than complex), but creating a heavily nested function at the same time (Flat is better than nested).
I could also create some sort of list comprehension to avoid using a double Loop, eg:
def calc_pump_height(self):
    ra = range(len(self.primary_))
    [self.calc_spec_pump_height(i, j) for i, j in zip(ra, ra)]
(I'd have to move the if condition into the 2nd function; this would also create a null-list but I don't care about this, since calc_spec_pump_height is supposed to manipulate the object, not return something useful)
In essence: I'm iterating over a 2D list, testing each object for a certain characteristic and then do something with that object.
Which of the above methods is 'the best'? Or is there another way that I'm missing?
The key thing about functions/methods is that they should do one thing.
calc_pump_height implements two things: it finds elements in a 2D list that match some criteria, and then it calculates a value for each of those elements. It's OK for its purpose to be combining those two operations, if that makes sense for the object's public API, but it's not OK for it to implement either or both itself.
Finding the elements that match the criteria is a discrete step; that should be a function.
Calculating your value is clearly a discrete step; that should be a function.
I would implement the element matcher as a (private) generator, that takes the test condition as an argument, and yields all matching elements. It's just an iterator over your data structure, masked by the logical test. You can wrap that in a named public method called get_1_4_subkinds() or something that makes more sense in your domain. That generalises the code and gives you the flexibility to implement other conditions in the future. Also, your i and j are tightly coupled, so it makes sense to pass them around as a single concept. Then your code becomes:
def calc_pump_height(self):
    for subkind_indices in self.get_1_4_subkinds():
        self.calc_spec_pump_height(subkind_indices)
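A possible sketch of that generator, reusing the attribute names from the question (purely illustrative):

def get_1_4_subkinds(self):
    # Yield (i, j) index pairs whose connection matches the wanted sub-kinds.
    for i in range(len(self.primary_)):
        for j in range(len(self.primary_)):
            if self.connections_[i][j].sub_kind_ in (1, 4):
                yield (i, j)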
You have misunderstood “simplicity”:
write the extra code directly into the first function, getting rid of one function (Simple is better than complex)
That's not simple. Breaking complex sequences into discrete, focussed functions increases simplicity.
In that light, I would say that yes, you should definitely prefer calc_spec_pump_height as a separate function.
You can eliminate one level of nesting in your first function by using itertools.product to generate your i and j values at the same time: itertools.product(range(len(self.primary_)), repeat=2). The zip you use in your second version won't work correctly; it will only yield identical pairs: 0,0, 1,1, 2,2, etc.
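A sketch of that variant, with the attribute names taken from the question:

import itertools

def calc_pump_height(self):
    n = len(self.primary_)
    for i, j in itertools.product(range(n), repeat=2):
        if self.connections_[i][j].sub_kind_ in [1, 4]:
            self.calc_spec_pump_height(i, j)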
As for the overall design, you should not use a list comprehension if you don't care about the return value from the function you're calling. Use an explicit loop when it's the looping you want (rather than a list of computed values).
If there's a non-trivial amount of code that will go in calc_spec_pump_height, it makes perfect sense to make it as a separate method. If it's a one or two liner, then it might be OK to inline within calc_pump_height, but that method's loops and condition testing may be complicated enough already to justify factoring out the inner part of the algorithm.
You should usually think about splitting a big function up when it is too long to fit onto a single screen in your editor. That is about the limit of how many details (variable names, etc.) we can keep in our mind simultaneously. On the other hand, you shouldn't waste time (either your own programming time or function call overhead at run time) by factoring out every little piece of every problem. Factor part of a function out if you're using it from more than one place, or if you can't keep the details of the whole function in your head at once otherwise.
So, other than the (marginal) improvement of itertools.product and given the limited information you've provided about what calc_spec_pump_height will do, I think your code is already about as good as it can get!

Why don't any and all take multiple parameters like min and max?

The functions min and max are very flexible; they can take any number of parameters, or a single parameter that is an iterable. any and all are similar in taking an iterable of any size, but they do not take more than one parameter. Is there a reason for this difference in behavior?
I realize that the question might seem unanswerable, but the process of enhancing Python is pretty open; many seemingly arbitrary design decisions are part of the public record. I've seen similar questions answered in the past, and I'm hoping this one can be as well.
Inspired by this question: Is there a builtin function version of and and/or or in Python?
A lot of the features in Python are suggested based on how much users need them; however, they must also conform to the style of the language. People often need to do this:
max_val = 0
for x in seq:
    # ... do complex calculations
    max_val = max(max_val, result)
which warrants the use of multiple parameters. It also looks good. I haven't heard of anyone needing to write any(x, y, z), because any is most often used on sequences. For a small number of values you can just use the and/or logical operators, and for many values you really should be using a sequence anyway, or your code gets messy. I'm fairly certain that not much thought has gone into this because it really wouldn't benefit anyone; it hasn't been in large demand, so the Python devs don't worry about it.
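A tiny illustration of that point, with made-up data:

values = [3, 7, -2, 9]

# For a couple of known values, the boolean operators read fine:
ok_small = values[0] > 0 and values[1] > 0

# For arbitrarily many values, any()/all() take a single iterable, often a generator:
all_positive = all(v > 0 for v in values)
has_negative = any(v < 0 for v in values)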
