Cumulative custom function - Python

I am trying to add a column to a pandas dataframe
import pandas as pd
df = pd.DataFrame([['a',1],['b',0],['c',1],['d',1],['e',0],['f',1]])
such that it contains the result of a cumulative custom function
a --> (total + a) * a
that is, it takes the value a, adds it to the running total, and multiplies the result by a. For my example I would like to have as output:
pd.DataFrame([['a',1,1],['b',0,0],['c',1,1],['d',1,2],['e',0,0],['f',1,1]])
I understand that this could be done using
df.expanding().apply(some_lambda_function)
but I have some difficulty understanding how to code it.
Do you have any ideas?
Many thanks.

I would recommend a for loop:
start = 0
total = []
# carry the running total across rows; column 1 holds the values
for x, y in df.iterrows():
    start = (y[1] + start) * y[1]
    total.append(start)

total
Out[201]: [1, 0, 1, 2, 0, 1]
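You can then attach the result as the new column with df[2] = total. If you prefer to avoid iterrows, the same recurrence can be written with itertools.accumulate; a minimal sketch (the initial keyword needs Python 3.8+):
from itertools import accumulate

# start the running total at 0, apply (total + a) * a row by row,
# and drop the leading seed value that accumulate emits for `initial`
df[2] = list(accumulate(df[1], lambda total, a: (total + a) * a, initial=0))[1:]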

Create a column and assign values randomly

I have a dataframe containing customer IDs.
I want to create a new column named group_user, which takes only three values: 0, 1, 2.
I want these values to be assigned randomly to customers in balanced proportions.
The output would be:
ID   group_user
341           1
127           0
389           2
Thanks!
You could try this:
>>> import numpy as np
>>> lst = [0, 1, 2]
>>> df['group_user'] = pd.Series(np.tile(lst, len(df) // len(lst) + 1)[:len(df)]).sample(frac=1).values
>>> df
This works for any dataframe length and any list length. Note the trailing .values: sample(frac=1) shuffles the rows but keeps the original index labels, so without it pandas would realign on the index during assignment and undo the shuffle.
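To sanity-check the balance you can count the labels afterwards; since the values were tiled before shuffling, the group sizes differ by at most one:
>>> df['group_user'].value_counts()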
I think this may work for you:
import pandas as pd
import numpy as np

randints = [0, 1, 2]
N = 100

# Generate a dataframe with N entries, where ID is a three-digit integer
# and group_usr is drawn at random from randints.
df = pd.DataFrame({'ID': np.random.randint(low=100, high=999, size=N),
                   'group_usr': np.random.choice(randints, size=N, replace=True)})
If the dataframe is long enough you should get more or less equal proportions. Keep in mind that np.random.choice draws each row independently, so even with 100 entries the distribution of the group_usr column is only approximately uniform, not exactly balanced.
You can try this:
import random
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': random.sample(range(100, 1000), 25), 'col2': np.nan})
# draw one label per row from a pool weighted 3:5:5 across the groups 0, 1, 2
groups = random.choices([0]*3 + [1]*5 + [2]*5, k=len(df.ID))
df['groups'] = groups
The expected proportions are 3:5:5; since random.choices samples with replacement, the actual counts will vary around those ratios.

Pandas: extract number from calculation within loop

I'm trying to do calculations in a loop over multiple columns of a pandas dataframe. I want the output to be just a number, but it comes out in the form [index value dtype: int64]. It seems like it should be easy to get just that number, but I can't figure it out. Here is a simple example with some data and a basic calculation:
import pandas as pd

# create a little dataframe
df = pd.DataFrame({
    'A': [1, 2],
    'B': [3, 4]
})

# create a list to hold results
l1 = []

# run a loop to do a simple example calculation
for i, _ in enumerate(df.A):
    val = df.A[[i]] + df.B[[i]]
    l1.append(val)
This is what I get for l1:
[0    4
dtype: int64,
 1    6
dtype: int64]
My desired output is
[4, 6]
I can take the second value from each element in the list, but I want to do something faster, because my dataset is large, and it seems like I should be able to return a calculation without the index and dtype. Thank you in advance.
Change the last line within the for loop; the original one returns a one-element Series, which causes the 'issue' you mentioned:
l1 = []
# run a loop to do a simple example calculation
for i, _ in enumerate(df.A):
    val = df.A[[i]] + df.B[[i]]
    # .iloc[0] pulls the scalar out of the one-element Series
    l1.append(val.iloc[0])

l1
Out[154]: [4, 6]
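Since the loop is just element-wise addition, you can also drop it entirely in favor of a vectorized operation, which is much faster on a large dataset:
l1 = (df.A + df.B).tolist()  # [4, 6]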

Get index of elements in the first Series within the second Series

I want to get the index, within the larger series, of each value in the smaller series. The expected answer is stored in the ans variable in the code snippet below.
import pandas as pd
smaller = pd.Series(["a","g","b","k"])
larger = pd.Series(["a","b","c","d","e","f","g","h","i","j","k","l","m"])
# ans to be generated by some unknown combination of functions
ans = [0,6,1,10]
print(larger.iloc[ans,])
print(smaller)
assert(smaller.tolist() == larger.iloc[ans,].tolist())
Context: Series larger serves as an index for the columns in a numpy matrix, and series smaller serves as an index for the columns in a numpy vector. I need indexes for the matrix and vector to match.
You can invert your larger series (swapping its values and index), then index the inverted series with smaller:
larger_rev = pd.Series(larger.index, larger.values)
res = larger_rev[smaller].values
print(res)
array([ 0, 6, 1, 10], dtype=int64)
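For reference, pandas indexes also provide get_indexer, which performs the same lookup directly (a minimal sketch; it returns -1 for any value missing from larger):
res = pd.Index(larger).get_indexer(smaller)
# res -> array([ 0,  6,  1, 10])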
for i in list(smaller):
    if i in list(larger):
        print(list(larger).index(i))
This will print the desired indices, one per line.
Using Series get
pd.Series(larger.index, larger.values).get(smaller)
Out[8]:
a     0
g     6
b     1
k    10
dtype: int64
try this :)
import pandas as pd
larger = pd.Series(["a","b","c","d","e","f","g","h","i","j","k","l","m"])
smaller = pd.Series(["a","g","b","k"])
res = pd.Series(larger.index, larger.values).reindex(smaller.values, copy=True)
print(res)

finding the max of a column in an array

def maxvalues():
    for n in range(1, 15):
        dummy = []
        for k in range(len(MotionsAndMoorings)):
            dummy.append(MotionsAndMoorings[k][n])
        max(dummy)
        L = [x + [max(dummy)]] ## to be corrected (adding columns with value max(dummy))
        ## suggest code to add new row to L and for next function call, it should save values here.
I have an array of size (k x n) and I need to pick the max values of the first column in that array. Is there a simpler way than what I tried? My main aim is to append the maxima to L as columns rather than rows; if I just append, the values are added at the end. I would like this to go into the columns of row 0 in L, because I'll call this function again, add a new row to L, and do the same. Please suggest.
General suggestions for your code
First of all it's not very handy to access globals in a function. It works but it's not considered good style. So instead of using:
def maxvalues():
    do_something_with(MotionsAndMoorings)
you should do it with an argument:
def maxvalues(array):
    do_something_with(array)

MotionsAndMoorings = something
maxvalues(MotionsAndMoorings) # pass it to the function
The next strange thing is that you seem to exclude the first column of your array:
for n in range(1,15):
I think that's unintended. The first element of a list has index 0, not 1. So I guess you wanted to write:
for n in range(0, 15):
or, even better, for arbitrary lengths:
for n in range(len(array[0])): # the length of the first row, i.e. the number of columns
Alternatives to your iterations
But this would not be very intuitive, because the built-in max function already accepts a very useful keyword argument (key), so you don't need to iterate over the whole array yourself:
import operator

column = 2
max(array, key=operator.itemgetter(column))[column]
This returns the row whose element in the chosen column is maximal; since max returns the whole row, the trailing [column] extracts just that element.
So to get a list of all your maximums for each column you could do:
[max(array, key=operator.itemgetter(column))[column] for column in range(len(array[0]))]
As for your L, I'm not sure what it is, but you should probably also pass it to the function as an argument:
def maxvalues(array, L): # another argument here
but since I don't know what x and L are supposed to be, I won't go further into that. It looks, though, like you want to turn the columns of MotionsAndMoorings into rows and the rows into columns. If so, you can do it with:
dummy = [[MotionsAndMoorings[j][i] for j in range(len(MotionsAndMoorings))]
         for i in range(len(MotionsAndMoorings[0]))]
that's a list comprehension that converts a list like:
[[1, 2, 3], [4, 5, 6], [0, 2, 10], [0, 2, 10]]
to an "inverted" column/row list:
[[1, 4, 0, 0], [2, 5, 2, 2], [3, 6, 10, 10]]
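The same transposition can be written more compactly with the built-in zip, since zip(*rows) regroups rows into columns:
array = [[1, 2, 3], [4, 5, 6], [0, 2, 10], [0, 2, 10]]
transposed = [list(col) for col in zip(*array)]
# [[1, 4, 0, 0], [2, 5, 2, 2], [3, 6, 10, 10]]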
Alternative packages
But as roadrunner66 already said, sometimes it's easiest to use a library like numpy or pandas, which already has fast, well-tested functions that do exactly what you want and are easy to use.
For example, you convert a Python list to a numpy array simply by:
import numpy as np
Motions_numpy = np.array(MotionsAndMoorings)
You get the maximum of each column by using:
maximums_columns = np.max(Motions_numpy, axis=0)
You don't even need to convert it to an np.array to use np.max or to transpose it (turning rows into columns and columns into rows):
transposed = np.transpose(MotionsAndMoorings)
I hope this answer is not too unstructured. Some parts are suggestions for your function and some are alternatives. Pick the parts you need, and if you have any trouble, just leave a comment or ask another question. :-)
An example with a random input array, showing that you can take the max along either axis with one command:
import numpy as np

aa = np.random.random([4, 3])
print(aa)
print()
print(np.max(aa, axis=0))  # column-wise maxima
print()
print(np.max(aa, axis=1))  # row-wise maxima
Output:
[[ 0.51972266 0.35930957 0.60381998]
[ 0.34577217 0.27908173 0.52146593]
[ 0.12101346 0.52268843 0.41704152]
[ 0.24181773 0.40747905 0.14980534]]
[ 0.51972266 0.52268843 0.60381998]
[ 0.60381998 0.52146593 0.52268843 0.40747905]

Python Pandas: How to make a column row dependent on its previous rows, possibly with a function?

I am trying to calculate column B based on previous values of columns A and B. A simple example function would be
e.g. B(n) = A(n-1) + B(n-1),
where n is the index of the Pandas dataframe. I do not need necessarily to use the dataframe index.
In this example, I start with B(1) = 0 and add the A rows in consecutive fashion.
n   A(n)   B(n)
----------------
1    1      0
2    0      1
3    2      1
4    9      3
An example of this data structure would be defined in Pandas as
d = {'A': pd.Series([1, 0, 2, 9]),
     'B': pd.Series([0, float("nan"), float("nan"), float("nan")])}
df = pd.DataFrame(d)
Update
Both Henry Cutchers' and Jakob's answers work well.
As your example problem can be reduced to depend only on B[0] and the values of A,
a possible simple solution could look like
import pandas as pd
import numpy as np

d = {'A': pd.Series([1, 0, 2, 9]),
     'B': pd.Series([0, float("nan"), float("nan"), float("nan")])}
df = pd.DataFrame(d)

for i in range(1, len(df.A)):
    # B[i] = B[0] + sum of A[0..i-1]; .loc avoids chained assignment
    df.loc[i, 'B'] = df.B[0] + np.sum(df.A[:i])

df
which results in the data frame above, with B = [0, 1, 1, 3].
If you face a similar iterative dependency you should be able to construct a similar approach suiting your needs.
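Because this recurrence telescopes into a running sum, the loop can also be replaced with a vectorized one-liner; a sketch, assuming the default integer index (shift's fill_value needs pandas 0.24+):
df['B'] = df['B'].iloc[0] + df['A'].cumsum().shift(1, fill_value=0)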
Have you thought about using Cython (http://www.cython.org)? It interoperates with pandas -- same data structures, etc. (pandas itself is partly written in Cython). It looks to me like you'll need the ability to iterate across your dataframe in arbitrary ways (not knowing more about your problem, that's all I can say), and yet need speed. Cython compiles to C.
I could foresee a loop of the form:
import numpy
import pandas

dates = pandas.date_range('20130101', periods=6)
myDataFrame = pandas.DataFrame(numpy.arange(12).reshape((6, 2)), index=dates, columns=list('ab'))

a = myDataFrame["a"]
b = myDataFrame["b"]
print(a)
print(b)

out = numpy.empty_like(a.values)
out[0] = 0

# this loop will work but be slow...
for i in range(1, a.shape[0]):
    # positional access via .iloc, since the index is made of dates
    out[i] = a.iloc[i-1] + b.iloc[i-1]

myDataFrame['c'] = pandas.Series(out, index=myDataFrame.index)
print(myDataFrame)
But that's going to be slow.
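That said, this particular recurrence only looks at the previous row of a and b, never at c itself, so a vectorized shift avoids both the loop and Cython; a minimal sketch:
# c[0] = 0; c[i] = a[i-1] + b[i-1] for i >= 1
myDataFrame['c'] = (myDataFrame['a'] + myDataFrame['b']).shift(1, fill_value=0)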