FOUND SOLUTION: I needed to change the datatype of the dataframe columns. Instead of initializing them with an integer:
for p in periods:
    df['Probability{}'.format(p)] = 0
I initialized them with a float:
for p in periods:
    df['Probability{}'.format(p)] = float(0)
Alternatively, do as in the accepted answer below.
I am assigning new float values to cells, but they are stored as integers and I don't get why.
It is a part of a data mining project, which contains nested loops.
I am using Python 3.
I tried different ways of writing into a cell with pandas:
df.at[index, col] = float(val),
df.set_value(index, col, float(val)), and
df[col][index] = float(val), but none of them solved the problem. The output I got was:
In: print(df[col][index])
Out: 0
In: print(val)
Out: 0.4774410939826658
Here is a simplified version of the loop:
periods = [7, 30, 90, 180]
for p in periods:
    df['Probability{}'.format(p)] = 0
for i in range(len(df.index)):
    for p in periods:
        if i >= p - 1:
            # Getting relevant data and computing the value
            vals = [df['Close'][j] for j in range(i - p, i)]
            probability = len([j for j in vals if j > 0]) / len(vals)
            # Assigning the value to a cell in the pd.DataFrame
            df.at[df.index[i], 'Probability{}'.format(p)] = float(probability)
I don't get why the pandas DataFrame changes floats to integers and rounds up or down. When I assigned values to cells directly in the console, I did not experience any problems.
Are there any workarounds or solutions to this problem?
I had no problems before I nested a for loop over periods to avoid hard-coding a lot of trivial code.
NB: It also seems that if I scale the value, e.g. new_val = 100 * val, only the rounded number gets scaled. So 100 * val gives new_val = 0, because the number has already been rounded down to 0.
I also tried to change the datatype of the dataframe:
df = df.apply(pd.to_numeric)
All the best.
This looks like a problem with incorrect data types in your dataframe. Your last attempt at converting the whole df was probably very close. Try using:
df['Close'] = pd.to_numeric(df['Close'], downcast="float")
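The truncation can be reproduced in a few lines. Below is a minimal sketch with made-up data; the exact behavior depends on the pandas version (older releases silently cast an assigned float to the column's integer dtype, while newer releases upcast or emit a warning). Initializing the column as a float avoids the problem entirely:

import pandas as pd

df = pd.DataFrame({'Close': [0.1, -0.2, 0.3]})

df['Probability7'] = 0        # integer column (dtype int64)
df.at[df.index[0], 'Probability7'] = 0.4774410939826658
print(df['Probability7'].dtype, df['Probability7'][0])   # older pandas may print: int64 0

df['Probability7'] = 0.0      # float column (dtype float64)
df.at[df.index[0], 'Probability7'] = 0.4774410939826658
print(df['Probability7'].dtype, df['Probability7'][0])   # float64 0.4774410939826658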
I have some DataFrames with information about some elements, for instance:
my_df1=pd.DataFrame([[1,12],[1,15],[1,3],[1,6],[2,8],[2,1],[2,17]],columns=['Group','Value'])
my_df2=pd.DataFrame([[1,5],[1,7],[1,23],[2,6],[2,4]],columns=['Group','Value'])
I have used something like dfGroups = df.groupby('Group').apply(my_agg).reset_index(), so now I have DataFrames with information on groups of the previous elements, say
my_df1_Group=pd.DataFrame([[1,57],[2,63]],columns=['Group','Group_Value'])
my_df2_Group=pd.DataFrame([[1,38],[2,49]],columns=['Group','Group_Value'])
Now I want to clean my groups according to properties of their elements. Let's say that I want to discard groups containing an element with Value greater than 16. So in my_df1_Group, there should only be the first group left, while both groups qualify to stay in my_df2_Group.
Since I don't know how to derive my_df1_Group and my_df2_Group from my_df1 and my_df2 in Python (I know other languages where it would simply be name + "_Group" with name looping over [my_df1, my_df2], but how do you do that in Python?), I build a list of lists:
SampleList = [[my_df1,my_df1_Group],[my_df2,my_df2_Group]]
Then, I simply try this:
my_max = 16
Bad = []
for Sample in SampleList:
    for n in Sample[1]['Group']:
        df = Sample[0].loc[Sample[0]['Group'] == n]  # This is inelegant, but trying to work
                                                     # with Sample[1] in the for doesn't work
        if (df['Value'].max() > my_max):
            Bad.append(1)
        else:
            Bad.append(0)
    Sample[1] = Sample[1].assign(Bad_Row=pd.Series(Bad))
    Sample[1] = Sample[1].query('Bad_Row == 0')
This runs without errors, but it doesn't work. In particular, it doesn't add the column Bad_Row to my df, nor does it modify my DataFrame (yet the query runs smoothly even though the Bad_Row column doesn't seem to exist...). On the other hand, if I apply this technique manually to a df (i.e. not in a loop), it works.
How should I do this?
Based on your comment below, I think you want to check whether a Group in your aggregated data frame has a Value in the input data greater than 16. One solution is to perform a row-wise calculation using a criterion on the input data. To accomplish this, my_func accepts a row from the aggregated data frame and the input data as a pandas groupby object. For each group in your aggregated data frame, it subsets your initial data and uses boolean logic to see whether any of the 'Value' entries in your input data meet your specified criterion.
def my_func(row, grouped_df1):
    if (grouped_df1.get_group(row['Group'])['Value'] > 16).any():
        return 'Bad Row'
    else:
        return 'Good Row'

my_df1 = pd.DataFrame([[1,12],[1,15],[1,3],[1,6],[2,8],[2,1],[2,17]], columns=['Group','Value'])
my_df1_Group = pd.DataFrame([[1,57],[2,63]], columns=['Group','Group_Value'])
grouped_df1 = my_df1.groupby('Group')
my_df1_Group['Bad_Row'] = my_df1_Group.apply(lambda x: my_func(x, grouped_df1), axis=1)
Returns:
Group Group_Value Bad_Row
0 1 57 Good Row
1 2 63 Bad Row
Based on dubbbdan's idea, here is code that works:
my_max = 16

def my_func(row, grouped_df1):
    if (grouped_df1.get_group(row['Group'])['Value'] > my_max).any():
        return 1
    else:
        return 0

SampleList = [[my_df1, my_df1_Group], [my_df2, my_df2_Group]]
for Sample in SampleList:
    grouped_df = Sample[0].groupby('Group')
    Sample[1]['Bad_Row'] = Sample[1].apply(lambda x: my_func(x, grouped_df), axis=1)
    Sample[1].drop(Sample[1][Sample[1]['Bad_Row'] != 0].index, inplace=True)
    Sample[1].drop(['Bad_Row'], axis=1, inplace=True)
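If the apply-based check ever becomes a bottleneck, a vectorized alternative is to compute the per-group maximum once and keep only the groups whose maximum stays under the threshold. This is only a sketch alongside the answers above, using the same example data:

import pandas as pd

my_max = 16
my_df1 = pd.DataFrame([[1,12],[1,15],[1,3],[1,6],[2,8],[2,1],[2,17]], columns=['Group','Value'])
my_df1_Group = pd.DataFrame([[1,57],[2,63]], columns=['Group','Group_Value'])

# Maximum 'Value' per group in the element-level data
group_max = my_df1.groupby('Group')['Value'].max()

# Keep only the groups whose maximum does not exceed the threshold
good_groups = group_max[group_max <= my_max].index
filtered = my_df1_Group[my_df1_Group['Group'].isin(good_groups)]
print(filtered)   # only group 1 remains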
I am quite new to pandas and I have a pandas dataframe of about 500,000 rows filled with numbers. I am using Python 2.x and am currently defining and calling the method shown below on it. It sets a predicted value equal to the corresponding value in series 'B' if two adjacent values in series 'A' are the same. However, it is running extremely slowly, at about 5 rows output per second, and I want to find a way to accomplish the same result more quickly.
def myModel(df):
    A_series = df['A']
    B_series = df['B']
    seriesLength = A_series.size
    # Make a new empty column in the dataframe to hold the predicted values
    df['predicted_series'] = np.nan
    # Make a new empty column to store whether or not
    # the prediction matches B
    df['wrong_prediction'] = np.nan
    prev_B = B_series[0]
    for x in range(1, seriesLength):
        prev_A = A_series[x-1]
        prev_B = B_series[x-1]
        # Set the predicted value to equal B if A has two equal values in a row
        if A_series[x] == prev_A:
            if df['predicted_series'][x] > 0:
                df['predicted_series'][x] = df['predicted_series'][x-1]
            else:
                df['predicted_series'][x] = B_series[x-1]
Is there a way to vectorize this, or just make it run faster? Under the current circumstances, it is projected to take many hours. Should it really be taking this long? It doesn't seem like 500,000 rows should give my program that much trouble.
Something like this should work as you described:
df['predicted_series'] = np.where(A_series.shift() == A_series, B_series, df['predicted_series'])
df.loc[df.A.diff() == 0, 'predicted_series'] = df.B
This will get rid of the for loop and set predicted_series to the value of B when A is equal to previous A.
edit:
per your comment, change your initialization of predicted_series to be all NaN and then forward fill the values:
df['predicted_series'] = np.nan
df.loc[df.A.diff() == 0, 'predicted_series'] = df.B
df.predicted_series = df.predicted_series.fillna(method='ffill')
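Putting the pieces together on a tiny, made-up frame (a sketch to illustrate the behavior of the answer above, not code from the original post; note that newer pandas prefers .ffill() over fillna(method='ffill')):

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 2, 2, 2, 3],
                   'B': [10, 20, 30, 40, 50, 60]})

# Where A equals the previous A, take the value of B, then forward fill
df['predicted_series'] = np.nan
df.loc[df.A.diff() == 0, 'predicted_series'] = df.B
df.predicted_series = df.predicted_series.fillna(method='ffill')
print(df.predicted_series.tolist())   # [nan, 20.0, 20.0, 40.0, 50.0, 50.0]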
For the fastest speed, modifying ayhan's answer a bit will perform best:
df['predicted_series'] = np.where(df.A.shift() == df.A, df.B, df['predicted_series'].shift())
That will give you your forward-filled values and run faster than my original recommendation.
Solution
df.loc[df.A == df.A.shift(), 'predicted_series'] = df.B.shift()
I'm working on a script that takes in an address and spits out two values: coordinates (as a list) and result (whether the geocoding was successful or not). This works fine, but since the data is returned as a list, I then have to assign new columns based on the indices of that list, which works but raises a warning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy.
EDIT: Just to be clear, I think I understand from that page that I should be using .loc to access the nested values. My question is more along the lines of generating two columns directly from a function as opposed to this workaround of having to dig the information out later.
I'd like to know the correct way to approach problems like these, as I actually have this problem twice in this project.
The actual specifics of the problem aren't important, so here's a simple example of how I've been approaching it:
def geo(address):
    location = geocode(address)
    result = location.result
    coords = location.coords
    return coords, result

df['output'] = df['address'].apply(geo)
Since this yields a nested list in my df column, I then extract it into new columns like so:
df['coordinates'] = None
df['gps_status'] = None

for index, row in df.iterrows():
    df['coordinates'][index] = df['output'][index][0]
    df['gps_status'][index] = df['output'][index][1]
And again, I get the warning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
Any advice on the correct way to do this would be appreciated.
Usually you want to avoid iterrows(), since it is faster to operate on an entire column at once. You can assign the results from output directly to new columns.
import pandas as pd

def geo(x):
    return x*2, x*3

df = pd.DataFrame({'address': [1, 2, 3]})
output = df['address'].apply(geo)
df['a'] = [x[0] for x in output]
df['b'] = [x[1] for x in output]
gives you
address a b
0 1 2 3
1 2 4 6
2 3 6 9
with no copy warning.
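As an aside (not from the original answer), the same unpacking can be done in one pass with zip, which avoids walking output twice:

# output is a Series of (a, b) tuples; zip(*output) transposes it into two tuples
df['a'], df['b'] = zip(*output)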
Your function should return a Series:
def geo(address):
    location = geocode(address)
    result = location.result
    coords = location.coords
    return pd.Series([coords, result], ['coordinates', 'gps_status'])

df = df.join(df['address'].apply(geo))
That said, this may be better written as a merge.
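As a quick illustration with a stand-in for geocode (the stub below is invented, since the real geocoding call isn't shown), the Series-returning version expands straight into two named columns:

import pandas as pd

# Hypothetical stand-in for the real geocode() call
def geocode_stub(address):
    return [0.0, 0.0], 'OK'

def geo(address):
    coords, result = geocode_stub(address)
    return pd.Series([coords, result], ['coordinates', 'gps_status'])

df = pd.DataFrame({'address': ['1 Main St', '2 High St']})
df = df.join(df['address'].apply(geo))
print(df.columns.tolist())   # ['address', 'coordinates', 'gps_status']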
I would like to create a pandas SparseDataFrame with the dimensions 250,000 x 250,000. In the end, my aim is to come up with a big adjacency matrix.
So far, creating that data frame is no problem:
df = SparseDataFrame(columns=arange(250000), index=arange(250000))
But when I try to update the DataFrame, I run into massive memory/runtime problems:
index = 1000
col = 2000
value = 1
df.set_value(index, col, value)
I checked the source:
def set_value(self, index, col, value):
    """
    Put single value at passed column and index

    Parameters
    ----------
    index : row label
    col : column label
    value : scalar value

    Notes
    -----
    This method *always* returns a new object. It is currently not
    particularly efficient (and potentially very expensive) but is provided
    for API compatibility with DataFrame
    ...
The last sentence describes the problem I am running into with pandas here. I really would like to keep using pandas, but at the moment it seems totally impossible!
Does someone have an idea of how to solve this problem more efficiently?
My next idea is to work with something like nested lists/dicts instead...
Thanks for your help!
Do it this way:
df = pd.SparseDataFrame(columns=np.arange(250000), index=np.arange(250000))
s = df[2000].to_dense()
s[1000] = 1
df[2000] = s
In [11]: df.ix[1000,2000]
Out[11]: 1.0
So the procedure is to swap out an entire series at a time. The SparseDataFrame will convert the passed-in series to a SparseSeries. (You can do it yourself to see what they look like with s.to_sparse().) The SparseDataFrame is basically a dict of these SparseSeries, which themselves are immutable. Sparse handling will see some changes in 0.12 to better support these types of operations (e.g. setting will work efficiently).
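If the pandas sparse structures still prove too heavy for a 250,000 x 250,000 adjacency matrix, a common alternative (an aside, not part of the original answer) is a scipy.sparse matrix, which supports cheap element-wise assignment:

import numpy as np
from scipy.sparse import dok_matrix

# Dictionary-of-keys sparse matrix: only the stored entries consume memory
adj = dok_matrix((250000, 250000), dtype=np.int8)
adj[1000, 2000] = 1           # cheap single-element assignment
print(adj[1000, 2000])        # 1
adj_csr = adj.tocsr()         # convert for fast arithmetic and row slicing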