Add column to date time indexed Dataframe from regular data frame - python

I have two dataframes of the same length (39014 rows): one has a datetime index and the other one just a regular integer index. I need to copy one column into the other, but when the copy is made it returns NaN. I did:
df_datetime["newcol"] = df_regular["col"]
If you check the column newcol in df_datetime, it is full of NaN, even though the column col of df_regular has numbers. Why is this happening? How can I fix it? Thanks!
I also tried
pd.merge(df_datetime, df_regular[["col"]], left_index=True, right_index=True, how='left')
and the same thing happens.

This is because the indices are not aligned.
When you assign a new column like that, df_datetime.loc[x, 'newcol'] gets the value of df_regular.loc[x, 'col'] for each index label x. Since df_datetime has datetime labels and df_regular has integer labels, no label matches, so every value becomes NaN.
You need to make the two indexes match (for example, give df_regular the same datetime index), or assign the raw values so that no alignment takes place.
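For illustration, a minimal sketch of both fixes (the frame names follow the question; the toy data is just for demonstration):
import pandas as pd

# toy stand-ins for the question's frames
df_datetime = pd.DataFrame({"a": [1, 2, 3]},
                           index=pd.date_range("2019-09-01", periods=3))
df_regular = pd.DataFrame({"col": [10, 20, 30]})   # default RangeIndex 0, 1, 2

# Option 1: bypass index alignment by assigning the raw values
df_datetime["newcol"] = df_regular["col"].to_numpy()

# Option 2: make the indexes match first, then assign normally
df_regular.index = df_datetime.index
df_datetime["newcol"] = df_regular["col"]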

What if you do this:
listOfColumn = list(df_regular["col"])
df_datetime["newcol"] = listOfColumn

As df_datetime and df_regular I prepared the following DataFrames:
N1
Dat
2019-09-01 120
2019-09-02 130
2019-09-03 140
2019-09-04 150
2019-09-05 160
and
col N2
0 23 19
1 26 32
2 48 61
3 51 53
4 62 60
Both with 5 rows.
If you want to "add" the col column from df_regular to df_datetime, ignoring the index values in both DataFrames, run:
df_datetime['newcol'] = df_regular.col.values
The result is:
N1 newcol
Dat
2019-09-01 120 23
2019-09-02 130 26
2019-09-03 140 48
2019-09-04 150 51
2019-09-05 160 62

Related

Insert one pyspark dataframe to another with replacement some rows

I have two pyspark dataframes that share some key IDs but have different values in the other columns. What I want to achieve is to inject one dataframe into the other.
First Dataframe:
ID1  ID2  DATE        VAL1  VAL2
19   22   05-03-2012  311   622
20   30   05-03-2012  40    60
20   30   06-03-2012  70    120
20   30   07-03-2012  480   3
20   30   08-03-2012  49    98
Second Dataframe:
ID1  ID2  DATE        VAL1  VAL2
19   22   07-03-2012  311   622
20   30   06-03-2012  22    2
Final DF:
ID1  ID2  DATE        VAL1  VAL2
19   22   05-03-2012  311   622
19   22   07-03-2012  311   622
20   30   05-03-2012  40    60
20   30   06-03-2012  70    120
20   30   07-03-2012  480   3
20   30   08-03-2012  49    98
As you can see, all rows that are absent from one of the dataframes are present in the final dataframe, and rows with the same ID1, ID2, DATE are taken from the first dataframe. These are simplified examples; my real dataframes are much more complicated, with different columns (I'll select the important ones) and hundreds of thousands of rows.
I was experimenting with an outer join, but after many tries I've lost hope, so I'd be grateful for any help.
This should work.
Essentially, first do a left_anti join to extract the rows that are present in the second dataframe but absent from the first one (matching on id1, id2, date), then union them (i.e. append them) onto the first dataframe:
Seq<String> colList = convertListToSeq(Stream.of("id1", "id2", "date").collect(Collectors.toList()));
// Only present in Right
Dataset<Row> missingInLeft = rightDF.join(leftDF, colList, "left_anti");
leftDF.union(missingInLeft).show(); // Left + Missing in left
Update:
Pyspark code:
leftDF.union(rightDF.join(leftDF, ["id1", "id2", "date"], how='left_anti')).show()
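For reference, a self-contained sketch of the same approach using the sample data from the question (the SparkSession setup and the column names are assumptions for illustration):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

cols = ["ID1", "ID2", "DATE", "VAL1", "VAL2"]
leftDF = spark.createDataFrame([
    (19, 22, "05-03-2012", 311, 622),
    (20, 30, "05-03-2012", 40, 60),
    (20, 30, "06-03-2012", 70, 120),
    (20, 30, "07-03-2012", 480, 3),
    (20, 30, "08-03-2012", 49, 98),
], cols)
rightDF = spark.createDataFrame([
    (19, 22, "07-03-2012", 311, 622),
    (20, 30, "06-03-2012", 22, 2),
], cols)

# rows of rightDF whose (ID1, ID2, DATE) key does not appear in leftDF
missing_in_left = rightDF.join(leftDF, ["ID1", "ID2", "DATE"], how="left_anti")

# keep leftDF's version of duplicated keys, append the genuinely new rows
final_df = leftDF.union(missing_in_left)
final_df.orderBy("ID1", "ID2", "DATE").show()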

How to drop identical columns in Pandas dataframe if first x rows of values are identical?

I'm working with a large dataset (921600 rows, 23 columns) with the occasional duplicate column (under different column names, however). I would like to remove the columns with identical values. However, df.T.drop_duplicates().T and similar solutions simply take too long, as they presumably check all 921600 rows. Is it possible to remove columns if just the first x rows have identical values?
E.g.: identify that 'channel2' and 'channel2-b' are duplicates by comparing the first x (say 10) rows instead of inspecting every row.
channel1 channel2 channel3 channel2-b
0 47 46 27 46
1 84 28 28 28
2 72 79 68 79
... ... ... ... ...
999997 4729 1957 2986 1957
999998 9918 1513 2957 1513
999999 1001 5883 7577 5883
Use DataFrame.duplicated on the transposed first N rows from DataFrame.head, then select the non-duplicated columns with DataFrame.loc:
N = 2
df = df.loc[:, ~df.head(N).T.duplicated()]
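As a quick sanity check, a small sketch on a frame shaped like the example above (the values are illustrative):
import pandas as pd

df = pd.DataFrame({
    "channel1":   [47, 84, 72],
    "channel2":   [46, 28, 79],
    "channel3":   [27, 28, 68],
    "channel2-b": [46, 28, 79],   # duplicate of channel2
})

N = 2   # compare only the first N rows
df = df.loc[:, ~df.head(N).T.duplicated()]
print(df.columns.tolist())   # ['channel1', 'channel2', 'channel3']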

Average certain columns based on values in other columns

I would like to average certain column values depending on whether a condition is met in another column. Specifically, if column 1 in the below dataframe is < 1700, I want to include the corresponding value in that row from column 51 in my average calculation. And if column 2 < 1700, I want to also include the value in that row from column 52 in my average calculation.
So, for row 0, the new calculated column for that row would be 64 (the average of 65 and 63). For row 1, the average would be just 80 (the column 51 value), since neither column 2 nor column 3 was less than 1700, so they are not included in the calculation.
This is a simplified example as my actual dataframe has about 10 columns for conditions with 10 corresponding columns of values to average.
As a potential complexity, the column headers are numbers rather than traditional text labels and do not refer to the order of that column in the dataframe since I've excluded certain columns when I imported the csv file. In other words, column 51 isn't the 51st column in the dataframe.
When I run the below code I'm getting the following error:
ValueError: ("No axis named 1 for object type ",
'occurred at index 0')
Is there a more efficient way to code this and avoid this error? Thanks for your help!
import pandas as pd
import numpy as np

test_df = pd.DataFrame({1:[1600,1600,1600,1700,1800], 2:[1500,2000,1400,1500,2000],
                        3:[2000,2000,2000,2000,2000], 51:[65,80,75,80,75],
                        52:[63,82,85,85,75], 53:[83,80,75,76,78]})
test_df
      1     2     3  51  52  53
0  1600  1500  2000  65  63  83
1  1600  2000  2000  80  82  80
2  1600  1400  2000  75  85  75
3  1700  1500  2000  80  85  76
4  1800  2000  2000  75  75  78

def calc_mean_based_on_conditions(row):
    list_of_columns_to_average = []
    for i in range(1, 4):
        if row[i] < 1700:
            list_of_columns_to_average.append(i + 50)
    if not list_of_columns_to_average:
        return np.nan
    else:
        return row[list_of_columns_to_average].mean(axis=1)

test_df['MeanValue'] = test_df.apply(calc_mean_based_on_conditions, axis=1)
Something very relevant (supporting int as column names)- https://github.com/theislab/anndata/issues/31
Due to this bug/issue, I converted the column names to type string:
test_df = pd.DataFrame({'1':[1600,1600,1600,1700,1800], '2':[1500,2000,1400,1500,2000],
                        '3':[2000,2000,2000,2000,2000], '51':[65,80,75,80,75],
                        '52':[63,82,85,85,75], '53':[83,80,75,76,78]})
Created a new dataframe, new_df, to meet our requirements:
new_df = test_df[['1', '2', '3']].where(test_df[['1','2','3']]<1700).notnull()
new_df now looks like this
1 2 3
0 True True False
1 True False False
2 True True False
3 False True False
4 False False False
Then simply rename the columns and filter using where:
new_df = new_df.rename(columns={"1": "51", "2":"52", "3":"53"})
test_df['mean_value'] = test_df[['51', '52', '53']].where(new_df).mean(axis=1)
This should give you the desired output -
1 2 3 51 52 53 mean_value
0 1600 1500 2000 65 63 83 64.0
1 1600 2000 2000 80 82 80 80.0
2 1600 1400 2000 75 85 75 80.0
3 1700 1500 2000 80 85 76 85.0
4 1800 2000 2000 75 75 78 NaN
I deleted my other answer because it was going down the wrong path. What you want to do is generate a mask of your conditional columns, then use that mask to apply a function to other columns. In this case, 1 corresponds to 51, 2 to 52, etc.
import pandas as pd
import numpy as np

test_df = pd.DataFrame({1:[1600,1600,1600,1700,1800], 2:[1500,2000,1400,1500,2000],
                        3:[2000,2000,2000,2000,2000], 51:[65,80,75,80,75],
                        52:[63,82,85,85,75], 53:[83,80,75,76,78]})
test_df
      1     2     3  51  52  53
0  1600  1500  2000  65  63  83
1  1600  2000  2000  80  82  80
2  1600  1400  2000  75  85  75
3  1700  1500  2000  80  85  76
4  1800  2000  2000  75  75  78
# create a dictionary to map each condition column to its value column
l1 = list(range(1, 4))
l2 = list(range(51, 54))
d = {k: v for k, v in zip(l1, l2)}
d
{1: 51, 2: 52, 3: 53}

temp = test_df[l1] < 1700            # subset the condition columns, generate the mask
for _, row in temp.iterrows():       # iterate through the masked rows
    list_of_columns_for_mean = []    # value columns to average for this row
    for k, v in d.items():           # evaluate the conditional for each mapped column
        if row[k]:
            list_of_columns_for_mean.append(v)
    # the rest should be pretty easy to figure out
This is not an elegant solution, but it is a solution. Unfortunately, I've run out of time to dedicate to it, but hopefully this gets you pointed in a better direction.
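For completeness, one possible way to finish the loop above (a sketch that assumes test_df, l1 and d as defined in the answer):
import numpy as np

means = []
mask = test_df[l1] < 1700
for idx, row in mask.iterrows():
    cols_for_mean = [v for k, v in d.items() if row[k]]   # value columns whose condition passed
    if cols_for_mean:
        means.append(test_df.loc[idx, cols_for_mean].mean())
    else:
        means.append(np.nan)                              # no condition met in this row
test_df['MeanValue'] = means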
There is probably a better, vectorized way to do this, but you could do it without the function
import numpy as np
import pandas as pd
from collections import defaultdict

test_df = pd.DataFrame({1:[1600,1600,1600,1700,1800], 2:[1500,2000,1400,1500,2000],
                        3:[2000,2000,2000,2000,2000], 51:[65,80,75,80,75],
                        52:[63,82,85,85,75], 53:[83,80,75,76,78]})

# List of columns that you're applying the condition to
condition_cols = list(range(1, 4))

# Get row and column indices where this condition is true
condition = np.where(test_df[condition_cols].lt(1700))

# make a dictionary mapping each row to its "true" column positions
cond_map = defaultdict(list)
for r, c in zip(*condition):
    cond_map[r].append(c)

# Get the means of the true columns
means = []
for row in range(len(test_df)):
    if row in cond_map:
        temp = []
        for col in cond_map[row]:
            # col is a 0-based position into [1, 2, 3], so add 51 to reach columns 51-53
            temp.append(test_df.loc[row, col + 51])
        means.append(temp)
    else:
        # If the row has no true columns (i.e. row 4)
        means.append(np.nan)

test_df['Means'] = [np.mean(l) for l in means]
The issue is indexing true rows and columns in a vectorized way.
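For what it's worth, a sketch of a fully vectorized variant of the same idea, assuming the integer-named test_df from the question: build a boolean mask from the condition columns, relabel it to the value columns, and let where plus mean(axis=1) do the rest.
import pandas as pd

test_df = pd.DataFrame({1: [1600, 1600, 1600, 1700, 1800],
                        2: [1500, 2000, 1400, 1500, 2000],
                        3: [2000, 2000, 2000, 2000, 2000],
                        51: [65, 80, 75, 80, 75],
                        52: [63, 82, 85, 85, 75],
                        53: [83, 80, 75, 76, 78]})

# boolean mask of the condition columns, relabelled to the value columns
mask = test_df[[1, 2, 3]].lt(1700)
mask.columns = [51, 52, 53]

# keep only the values whose condition column passed, then average per row
test_df['MeanValue'] = test_df[[51, 52, 53]].where(mask).mean(axis=1)
print(test_df['MeanValue'].tolist())   # [64.0, 80.0, 80.0, 85.0, nan]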

Pandas read_csv adds unnecessary " " to each row

I have a csv file
(I am showing the first three rows here)
HEIGHT,WEIGHT,AGE,GENDER,SMOKES,ALCOHOL,EXERCISE,TRT,PULSE1,PULSE2,YEAR
173,57,18,2,2,1,2,2,86,88,93
179,58,19,2,2,1,2,1,82,150,93
I am using pandas read_csv to read the file and put them into columns.
Here is my code:
import pandas as pd
import os
path='~/Desktop/pulse.csv'
path=os.path.expanduser(path)
my_data=pd.read_csv(path, index_col=False, header=None, quoting = 3, delimiter=',')
print my_data
The problem is the first and last columns have " before and after the values.
Additionally I can't get rid of the indexes.
I might be making some silly mistake, but I thank you for your help in advance.
Final solution: use replace with a conversion to int, and to remove the " from the column names use strip:
df = pd.read_csv('pulse.csv', quoting=3)
df = df.replace('"','', regex=True).astype(int)
df.columns = df.columns.str.strip('"')
print (df.head())
HEIGHT WEIGHT AGE GENDER SMOKES ALCOHOL EXERCISE TRT PULSE1 \
0 173 57 18 2 2 1 2 2 86
1 179 58 19 2 2 1 2 1 82
2 167 62 18 2 2 1 1 1 96
3 195 84 18 1 2 1 1 2 71
4 173 64 18 2 2 1 3 2 90
PULSE2 YEAR
0 88 93
1 150 93
2 176 93
3 73 93
4 88 93
index_col=False forces pandas not to use the first column as the index, but a DataFrame always needs some index, so the default one (0, 1, 2, ...) is added. This parameter can therefore be omitted here.
header=None should be removed, because it forces pandas not to read the first row (the header of the csv) as the column names of the DataFrame. The first row of data then becomes the header row, and the numeric values are converted to strings.
delimiter=',' should be removed too, because it is the same as sep=',', which is the default parameter.
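To see why these settings matter, here is a tiny reproduction under the assumption that every line of the file is wrapped in quotes (which would explain the stray " on the first and last columns); the in-memory CSV is just a stand-in for pulse.csv:
import io
import pandas as pd

raw = '"HEIGHT,WEIGHT,AGE"\n"173,57,18"\n"179,58,19"\n'

df = pd.read_csv(io.StringIO(raw), quoting=3)      # QUOTE_NONE: split on commas as-is
df = df.replace('"', '', regex=True).astype(int)   # drop stray quotes, restore ints
df.columns = df.columns.str.strip('"')             # clean the column names
print(df)                                          # columns HEIGHT, WEIGHT, AGE; all integers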
#jezrael is right - a pandas dataframe will always add indices. It's necessary.
Try something like df[0] = df[0].str.strip('"'), replacing zero with the last column.
Before you do so, convert your csv to a dataframe: pd.DataFrame.from_csv(path)

Replacing pandas Series column values with their own indices

I have a chronologically sorted datetime Series (note the index values on the left-hand side):
9 1971-04-10
84 1971-05-18
2 1971-07-08
53 1971-07-11
28 1971-09-12
474 1972-01-01
153 1972-01-13
13 1972-01-26
129 1972-05-06
98 1972-05-13
111 1972-06-10
225 1972-06-15
For my purpose, only the sorted indices matter, so I would like to replace the datetime values with their indices in the original pandas Series (perhaps through reindexing) to return a new Series like this:
0 9
1 84
2 2
3 53
4 28
5 474
6 153
7 13
8 129
9 98
10 111
11 225
where the indices on the left-hand side form the new index column, and the values on the right are the original index labels of the datetime values.
What is the easiest way to do this?
Thank you.
If you are okay with constructing a new object:
series = pd.Series(old_series.index, index=whateveryouwant)
Where specifying the new index is optional.
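For example, a small sketch with a few of the dates from the question (the index values follow the example; the rest is just for illustration):
import pandas as pd

s = pd.Series(pd.to_datetime(["1971-04-10", "1971-05-18", "1971-07-08"]),
              index=[9, 84, 2])

result = pd.Series(s.index)   # values = old index, new default RangeIndex 0, 1, 2, ...
print(result)
# 0     9
# 1    84
# 2     2
# dtype: int64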
You can point your index to a list as follows:
df.index = list(range(len(df)))
where df is your dataframe
