Manipulate a string to drop columns in pandas - Python

I'm trying to manipulate a list of strings so that I can use it to drop some columns from a dataframe.
The list comes from a dataframe in which I wrote a condition to return the columns whose values all sum to zero:
Selecting the columns with sum = 0
condicao_CO8 = (ex8_centro_oeste.sum(axis = 0) == 0)
condicao_CO8 = condicao_CO8[condicao_CO8 == True]
condicao_CO8.to_csv(r'D:\Programas\CO_8.csv')
Importing the dataframe and turning it into a list:
CO8 = pd.read_csv(r'D:\Programas\CO_8.csv',
delimiter=','
)
CO8.rename(columns={'Unnamed: 0': 'Nulos'}, inplace = True)
CO8.rename(columns={'0': 'Info'}, inplace = True)
CO8.drop(columns = ['Info'], inplace = True)
CO8.columns
Some items from the list:
0 ABI - LETRAS
1 ADMINISTRAÇÃO PÚBLICA
2 AGRONOMIA
3 ALIMENTOS
4 ARQUIVOLOGIA
5 AUTOMAÇÃO INDUSTRIAL
6 BIOMEDICINA
7 BIOTECNOLOGIA
8 CIÊNCIAS - BIOLOGIA
9 CIÊNCIAS - QUÍMICA
10 CIÊNCIAS AGRÁRIAS
11 CIÊNCIAS BIOLÓGICAS
12 CIÊNCIAS BIOLÓGICAS E CONSERVAÇÃO
13 CIÊNCIAS DA COMPUTAÇÃO
14 CIÊNCIAS ECONÔMICAS
My goal is to transform the list so that I can use it to drop those columns.
Transforming this list into this:
"ABI - LETRAS", "ADMINISTRAÇÃO PÚBLICA", "AGRONOMIA", "ALIMENTOS", "ARQUIVOLOGIA", "AUTOMAÇÃO INDUSTRIAL"...
For this I wrote the following code (unsuccessfully):
list_CO8 = ('\",\" '.join(CO8['Nulos'].apply(str.upper).tolist()))
Please, can anyone help me?
I'm trying to get:
list = "ABI - LETRAS", "ADMINISTRAÇÃO PÚBLICA", "AGRONOMIA", "ALIMENTOS", "ARQUIVOLOGIA", "AUTOMAÇÃO INDUSTRIAL"...
To make:
ex8_centro_oeste.drop(columns=[list])
Dataset link: Link for Drive (8kb)
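For reference, a minimal sketch of the quoted string the question is after (assuming the CO8 dataframe from the code above); note that the outer quotes must be added separately:
names = CO8['Nulos'].astype(str).str.upper().tolist()
list_CO8 = '"' + '", "'.join(names) + '"'
That said, as the answer below shows, drop() accepts a plain list or Index of labels, so no string formatting is actually needed.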

I read your question properly only now. I deleted my previous answer. You want to drop all the columns that sum to zero, if I am not mistaken.
import pandas as pd
import numpy as np

data = pd.read_csv("./centro-oeste.csv")
data.columns[np.sum(data) == 0]  # data is the dataframe
# It outputs a list that you can use.
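Putting it together, a minimal sketch of the full drop (assuming ex8_centro_oeste is the dataframe from the question):
# Select the labels of the columns whose values sum to zero
zero_cols = ex8_centro_oeste.columns[ex8_centro_oeste.sum(axis=0) == 0]
# drop() accepts the Index directly; no string manipulation is needed
ex8_centro_oeste = ex8_centro_oeste.drop(columns=zero_cols)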

Related

Filtering function for pandas - Viewing NaN values within a column

Function I have created:
#Create a function that identifies blank values
def GPID_blank(df, variable):
    df = df.loc[df['GPID'] == variable]
    return df
Test:
variable = ''
test = GPID_blank(df, variable)
test
Goal: Create a function that can filter any dataframe's 'GPID' column to show all of the rows where GPID has missing data.
I have tried running variable = 'NaN' and still no luck. However, I know the function works, since if I use a real value such as "OH82CD85", the function filters my dataset accordingly.
So why doesn't it filter the blank cells when variable = 'NaN'? I know that in my dataset there are 5 rows with GPID missing data.
Example df:
df = pd.DataFrame({'Client': ['A','B','C'], 'GPID':['BRUNS2','OH82CD85','']})
Client GPID
0 A BRUNS2
1 B OH82CD85
2 C
Sample of GPID column:
0 OH82CD85
1 BW07TI20
2 OW36HW81
3 PE56TA73
4 CT46SX81
5 OD79AU80
6 GF46DB60
7 OL07ST01
8 VP38SM57
9 AH90AE61
10 PG86KO78
11 NaN
12 NaN
13 SO21GR72
14 DY85IN90
15 KW80CV02
16 CM15QP83
17 VC38FP82
18 DA36RX05
19 DD74HD38
You can't use == with NaN. NaN != NaN.
Instead, you can modify your function a little to check if the parameter is NaN using pd.isna() (or np.isnan()):
def GPID_blank(df, variable):
    if pd.isna(variable):
        return df.loc[df['GPID'].isna()]
    else:
        return df.loc[df['GPID'] == variable]
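A quick check of both branches (a sketch using the example dataframe from the question, with np.nan in place of the empty string, since '' is a str rather than a missing value):
import numpy as np
import pandas as pd

df = pd.DataFrame({'Client': ['A', 'B', 'C'],
                   'GPID': ['BRUNS2', 'OH82CD85', np.nan]})
print(GPID_blank(df, np.nan))      # returns the row for client C
print(GPID_blank(df, 'OH82CD85'))  # returns the row for client B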
You can't really search for NaN values with an equality expression. Also, in your example dataframe, '' is not NaN but a str, and can be searched with an equality expression.
Instead, you need to check when you want to filter for NaN, and filter differently:
def GPID_blank(df, variable):
    if pd.isna(variable):
        df = df.loc[df['GPID'].isna()]
    else:
        df = df.loc[df['GPID'] == variable]
    return df
It's not working because with variable = 'NaN' you're looking for a string whose content is 'NaN', not for missing values.
You can try:
import pandas as pd
def GPID_blank(df):
    # filtered dataframe with NaN values in GPID column
    blanks = df[df['GPID'].isnull()].copy()
    return blanks
filtered_df = GPID_blank(df)

Run functions over many dataframes, add results to another dataframe, and dynamically name the resulting column with the name of the original df

I have many different tables that all have different column names, and each refers to an outcome, like glucose, insulin, leptin, etc. (keep in mind that the tables are all gigantic and messy, with tons of other columns in them as well).
I am trying to generate a report that starts empty but then adds columns based on functions applied to each of the glucose, insulin, and leptin tables.
I have included a very simple example (ignore that the function makes little sense). The code below works, but instead of copying and pasting final_report["outcome"] = ... over and over again, I would like to run the find_result function over each of glucose, insulin, and leptin and add the "glucose_result", "insulin_result" and "leptin_result" columns to final_report in one or a few lines.
Thanks in advance.
import pandas as pd
ids = [1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,4,4,4,4,4,4]
timepoint = [1,2,3,4,5,6,1,2,3,4,5,6,1,2,4,1,2,3,4,5,6]
outcome = [2,3,4,5,6,7,3,4,1,2,3,4,5,4,5,8,4,5,6,2,3]
glucose = pd.DataFrame({'id':ids,
                        'timepoint':timepoint,
                        'outcome':outcome})
insulin = pd.DataFrame({'id':ids,
                        'timepoint':timepoint,
                        'outcome':outcome})
leptin = pd.DataFrame({'id':ids,
                       'timepoint':timepoint,
                       'outcome':outcome})
ids = [1,2,3,4]
start = [1,1,1,1]
end = [6,6,6,6]
final_report = pd.DataFrame({'id':ids,
                             'start':start,
                             'end':end})
def find_result(subject, start, end, df):
    df = df.loc[(df["id"] == subject) & (df["timepoint"] >= start) & (df["timepoint"] <= end)].sort_values(by = "timepoint")
    return df["timepoint"].nunique()
final_report['glucose_result'] = final_report.apply(lambda x: find_result(x['id'], x['start'], x['end'], glucose), axis=1)
final_report['insulin_result'] = final_report.apply(lambda x: find_result(x['id'], x['start'], x['end'], insulin), axis=1)
final_report['leptin_result'] = final_report.apply(lambda x: find_result(x['id'], x['start'], x['end'], leptin), axis=1)
If you have to use this code structure, you can create a simple dictionary with your dataframes and their names and loop through them, creating new columns with programmatically assigned names:
input_dfs = {"glucose": glucose, "insulin": insulin, "leptin": leptin}
for name, df in input_dfs.items():
    final_report[f"{name}_result"] = final_report.apply(
        lambda x: find_result(x['id'], x['start'], x['end'], df),
        axis=1
    )
Output:
id start end glucose_result insulin_result leptin_result
0 1 1 6 6 6 6
1 2 1 6 6 6 6
2 3 1 6 3 3 3
3 4 1 6 6 6 6
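For what it's worth, the same loop can be collapsed into a single assign call (an equivalent sketch, using the input_dfs dictionary from the answer; apply runs eagerly inside the comprehension, so each lambda sees the right df):
final_report = final_report.assign(
    **{f"{name}_result": final_report.apply(
           lambda x, df=df: find_result(x['id'], x['start'], x['end'], df),
           axis=1)
       for name, df in input_dfs.items()})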

Pandas DataFrame - duplicated() does not identify duplicate values

EDIT: I have stripped down the file to the bits that are problematic
raw_data = {"link":
['https://www.otodom.pl/oferta/mieszkanie-w-spokojnej-okolicy-gdansk-lostowice-ID43FLJ.html#cda8700ef5',
'https://www.otodom.pl/oferta/mieszkanie-w-spokojnej-okolicy-gdansk-lostowice-ID43FLH.html#cda8700ef5',
'https://www.otodom.pl/oferta/mieszkanie-w-spokojnej-okolicy-gdansk-lostowice-ID43FLj.html#cda8700ef5',
'https://www.otodom.pl/oferta/mieszkanie-w-spokojnej-okolicy-gdansk-lostowice-ID43FLh.html#cda8700ef5',
'https://www.otodom.pl/oferta/zielony-widok-mieszkanie-3m04-ID43EWU.html#9dca9667c3',
'https://www.otodom.pl/oferta/zielony-widok-mieszkanie-3m04-ID43EWu.html#9dca9667c3',
'https://www.otodom.pl/oferta/nowoczesne-osiedle-gotowe-do-konca-roku-bazantow-ID43vQM.html#af24036d28',
'https://www.otodom.pl/oferta/nowoczesne-osiedle-gotowe-do-konca-roku-bazantow-ID43vQJ.html#af24036d28',
'https://www.otodom.pl/oferta/nowoczesne-osiedle-gotowe-do-konca-roku-bazantow-ID43vQm.html#af24036d28',
'https://www.otodom.pl/oferta/nowoczesne-osiedle-gotowe-do-konca-roku-bazantow-ID43vQj.html#af24036d28',
'https://www.otodom.pl/oferta/mieszkanie-56-m-warszawa-ID43sWY.html#2d0084b7ea',
'https://www.otodom.pl/oferta/mieszkanie-56-m-warszawa-ID43sWy.html#2d0084b7ea',
'https://www.otodom.pl/oferta/idealny-2pok-apartament-0-pcc-widok-na-park-ID43q4X.html#64f19d3152',
'https://www.otodom.pl/oferta/idealny-2pok-apartament-0-pcc-widok-na-park-ID43q4x.html#64f19d3152']}
df = pd.DataFrame(raw_data, columns = ["link"])
#duplicate check #1
a = print(df.iloc[12][0])
b = print(df.iloc[13][0])
if a == b:
    print("equal")
#duplicate check #2
df.duplicated()
For the first test I get the following output, implying there is a duplicate:
https://www.otodom.pl/oferta/idealny-2pok-apartament-0-pcc-widok-na-park-ID43q4X.html#64f19d3152
https://www.otodom.pl/oferta/idealny-2pok-apartament-0-pcc-widok-na-park-ID43q4x.html#64f19d3152
equal
For the second test it seems there are no duplicates:
0 False
1 False
2 False
3 False
4 False
5 False
6 False
7 False
8 False
9 False
10 False
11 False
12 False
13 False
dtype: bool
Original post:
Trying to identify duplicate values from the "Link" column of attached file:
data file
import pandas as pd
data = pd.read_csv(r"...\consolidated.csv", sep=",")
df = pd.DataFrame(data)
del df['Unnamed: 0']
duplicate_rows = df[df.duplicated(["Link"], keep="first")]
pd.DataFrame(duplicate_rows)
#a = print(df.iloc[42657][15])
#b = print(df.iloc[42676][15])
#if a == b:
# print("equal")
Used the code above, but the answer I keep getting is that there are no duplicates. Checked it through Excel and there should be seven duplicate instances. I even selected specific cells to do a quick check (the part marked with #s), and the values were identified as equal. Yet duplicated() does not capture them.
I have been scratching my head for a good hour, and still no idea what I'm missing - help appreciated!
I had the same problem, and converting the columns of the dataframe to str helped.
e.g.:
df['link'] = df['link'].astype(str)
duplicate_rows = df[df.duplicated(["link"], keep="first")]
First, you don't need df = pd.DataFrame(data), as data = pd.read_csv(r"...\consolidated.csv", sep=",") already returns a DataFrame.
As for the deletion of duplicates, check the drop_duplicates method in the documentation.
Hope this helps.
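A minimal sketch of that suggestion (assuming df already holds the consolidated data, with the Link column cast to str as in the first answer):
df['Link'] = df['Link'].astype(str)
duplicate_rows = df[df.duplicated(['Link'], keep='first')]  # inspect the repeats
df = df.drop_duplicates(subset=['Link'], keep='first')      # or drop them outright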

Vectorized operation on pandas dataframe

I currently have the following code, which goes through each row of a dataframe and assigns the prior row's value for a certain cell to the current row of a different cell.
Basically, what I'm doing is finding out what 'yesterday's' value for a certain metric is compared to today's. As you would expect, this is quite slow (especially since I am working with dataframes that have hundreds of thousands of lines).
for index, row in symbol_df.iterrows():
    if index != 0:
        symbol_df.loc[index, 'yesterday_sma_20'] = symbol_df.loc[index-1]['sma_20']
        symbol_df.loc[index, 'yesterday_roc_20'] = symbol_df.loc[index-1]['roc_20']
        symbol_df.loc[index, 'yesterday_roc_100'] = symbol_df.loc[index-1]['roc_100']
        symbol_df.loc[index, 'yesterday_atr_10'] = symbol_df.loc[index-1]['atr_10']
        symbol_df.loc[index, 'yesterday_vsma_20'] = symbol_df.loc[index-1]['vsma_20']
Is there a way to turn this into a vectorized operation? Or really just any way to speed it up instead of having to iterate through each row individually?
I might be overlooking something, but I think using .shift() should do it.
import pandas as pd
df = pd.read_csv('test.csv')
print(df)
# Date SMA_20 ROC_20
# 0 7/22/2015 0.754889 0.807870
# 1 7/23/2015 0.376448 0.791365
# 2 7/22/2015 0.527232 0.407420
# 3 7/24/2015 0.616281 0.027188
# 4 7/22/2015 0.126556 0.274681
# 5 7/25/2015 0.570008 0.864057
# 6 7/22/2015 0.632057 0.746988
# 7 7/26/2015 0.373405 0.883944
# 8 7/22/2015 0.775591 0.453368
# 9 7/27/2015 0.678638 0.313374
df['y_SMA_20'] = df['SMA_20'].shift()
df['y_ROC_20'] = df['ROC_20'].shift()
print(df)
# Date SMA_20 ROC_20 y_SMA_20 y_ROC_20
# 0 7/22/2015 0.754889 0.807870 NaN NaN
# 1 7/23/2015 0.376448 0.791365 0.754889 0.807870
# 2 7/22/2015 0.527232 0.407420 0.376448 0.791365
# 3 7/24/2015 0.616281 0.027188 0.527232 0.407420
# 4 7/22/2015 0.126556 0.274681 0.616281 0.027188
# 5 7/25/2015 0.570008 0.864057 0.126556 0.274681
# 6 7/22/2015 0.632057 0.746988 0.570008 0.864057
# 7 7/26/2015 0.373405 0.883944 0.632057 0.746988
# 8 7/22/2015 0.775591 0.453368 0.373405 0.883944
# 9 7/27/2015 0.678638 0.313374 0.775591 0.453368
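Applied to the question's own columns, the same trick replaces the whole loop (a sketch assuming symbol_df from the question):
# Each 'yesterday_*' column is simply the original column shifted down one row
for col in ['sma_20', 'roc_20', 'roc_100', 'atr_10', 'vsma_20']:
    symbol_df['yesterday_' + col] = symbol_df[col].shift()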

Pandas compare each row to reference row - certain columns only

I have the following pandas DataFrame**** in Python.
Temp_Fact Oscillops_read A B C D E F G H I J
0 A Today 0.710213 0.222015 0.814710 0.597732 0.634099 0.338913 0.452534 0.698082 0.706486 0.433162
1 B Today 0.653489 0.452543 0.618755 0.555629 0.490342 0.280299 0.026055 0.138876 0.053148 0.899734
2 A Aactl 0.129211 0.579690 0.641324 0.615772 0.927384 0.199651 0.652395 0.249467 0.262301 0.049795
3 A DFE 0.743794 0.355085 0.637794 0.633634 0.810033 0.509244 0.470418 0.972145 0.647222 0.610636
4 C Real_Mt_Olv 0.724282 0.332965 0.063078 0.004550 0.585398 0.869376 0.232148 0.630162 0.102206 0.232981
5 E Q_Mont 0.221685 0.224834 0.110734 0.397999 0.814153 0.552924 0.981098 0.536750 0.251941 0.383994
6 D DFE 0.655386 0.561297 0.305310 0.140998 0.433054 0.118187 0.479206 0.556546 0.556017 0.025070
7 F Bryo 0.257884 0.228650 0.413149 0.285651 0.814095 0.275627 0.775620 0.392448 0.827725 0.935581
8 C Aactl 0.017388 0.133848 0.939049 0.159416 0.923788 0.375638 0.331078 0.939089 0.098718 0.785569
9 C Today 0.197419 0.595253 0.574718 0.373899 0.363200 0.289378 0.698455 0.252657 0.357485 0.020484
10 C Pars 0.037771 0.683799 0.184114 0.545062 0.857000 0.295918 0.733196 0.613165 0.180642 0.254839
11 B Pars 0.637346 0.090000 0.848710 0.596883 0.027026 0.792180 0.843743 0.461608 0.552165 0.215250
12 B Pars 0.768422 0.017828 0.090141 0.108061 0.456734 0.803175 0.454479 0.501713 0.687016 0.625260
13 E Tomorrow 0.860112 0.532859 0.091641 0.768896 0.635966 0.007211 0.656367 0.053136 0.482367 0.680557
14 D DFE 0.801734 0.365921 0.243407 0.826373 0.904416 0.062448 0.801726 0.049983 0.433135 0.351150
15 F Q_Mont 0.360710 0.330745 0.598830 0.582379 0.828019 0.467044 0.287276 0.470980 0.355386 0.404299
16 D Last_Week 0.867126 0.600093 0.813257 0.005423 0.617543 0.657219 0.635255 0.314910 0.016516 0.689257
17 E Last_Week 0.551499 0.724981 0.821087 0.175279 0.301397 0.304105 0.379553 0.971244 0.558719 0.154240
18 F Bryo 0.511370 0.208831 0.260223 0.089106 0.121442 0.120513 0.099722 0.750769 0.860541 0.838855
19 E Bryo 0.323441 0.663328 0.951847 0.782042 0.909736 0.512978 0.999549 0.225423 0.789240 0.155898
20 C Tomorrow 0.267086 0.357918 0.562190 0.700404 0.961047 0.513091 0.779268 0.030190 0.460805 0.315814
21 B Tomorrow 0.951356 0.570077 0.867533 0.365708 0.791373 0.232377 0.478656 0.003857 0.805882 0.989754
22 F Today 0.963750 0.118826 0.264858 0.571066 0.761669 0.967419 0.565773 0.468971 0.466120 0.174815
23 B Last_Week 0.291186 0.126748 0.154725 0.527029 0.021485 0.224272 0.259218 0.052286 0.205569 0.617701
24 F Aactl 0.269308 0.655920 0.595518 0.404817 0.290342 0.447246 0.627082 0.306856 0.868357 0.979879
I also have a Series of values for each column:
df_base = df[df['Oscillops_read'] == 'Last_Week']
df_base_val = df_base.mean(axis=0)
As you can see, this is a Pandas Series and it is the average, for each column, over rows where Oscillops_read == 'Last_Week'. Here is the Series:
[0.56993702256121603, 0.48394061768804786, 0.59635616273775061, 0.23591030688019868, 0.31347492150330231, 0.39519847430740507, 0.42467546792253791, 0.4461465888887961, 0.26026797943899194, 0.48706569569369912]
I also have 2 lists:
1.
range_name_list = ['Base','Curnt','Prediction','Graph','Swg','Barometer_Output','Test_Cntr']
This list gives values that must be added to the dataframe df under certain conditions (described below).
2.
col_1 = list('DFA')
col_2 = list('ACEF')
col_3 = list('CEF')
col_4 = list('ABDF')
col_5 = list('DEF')
col_6 = list('AC')
col_7 = list('ABCDE')
These are lists of column names. These columns from df must be compared to the average Series above. So for example, for the 6th list col_6, columns A and C from each row of the dataframe df must be compared to columns A and C of the Series.
Problem:
As I mentioned above, I need to compare specific columns from the dataframe df to the base Series df_base_val. The columns to be compared are listed in col_1, col_2, col_3, ..., col_7. Here is what I need to do:
if a row is greater than the base Series df_base_val in all of the dataframe columns listed in col_i (e.g., for col_6, columns A and C), then for that row, in a new column Range, enter the i-th value from the list range_name_list.
Example:
eg. use col_6 - this is the 6th list and it has column names A and C.
Step 1: For row 1 of df, columns A and C are greater than
df_base_val[A] and df_base_val[C] respectively.
Step 2: Thus, for row 1, in a new column Range, enter the 6th element from the list range_name_list - the 6th element is Barometer_Output.
Example Output:
After doing this, the 1st row becomes:
0 A Today 0.710213 0.222015 0.814710 0.597732 0.634099 0.338913 0.452534 0.698082 0.706486 0.433162 'Barometer_Output'
Now, if this row were NOT greater than the Series in columns A and C, and not greater than the Series in the columns from col_1, col_2, etc., then the Range column must be assigned the value 'Not_in_Range'. In that case, this row would become:
0 A Today 0.710213 0.222015 0.814710 0.597732 0.634099 0.338913 0.452534 0.698082 0.706486 0.433162 'Not_in_Range'
Simplification and Question:
In this example:
I only compared the 1st row to the base series. I need to compare
all rows of df individually to the base series and add an appropriate value.
I only used the 6th list of columns - this was col_6. Similarly, I need to go through each list of column names - col_1, col_2, ...., col_7.
If the row being compared is not greater than any of the lists col_1 to col_7, in the specified columns, then the column Range must be assigned the value 'Not_in_Range'.
Is there a way to do this? Maybe using loops?
**** To create the above dataframe, select it above and copy it. Then use the following code:
import pandas as pd
df = pd.read_clipboard()
print(df)
EDIT:
If multiple conditions are met, I would need them all to be listed, i.e. if the row belongs to 'Swg' and 'Curnt', then I would need to list both of these in the Range column, or to create separate Range columns (or just Python lists) for each matching result: column Range1 would list 'Swg' and column Range2 would list 'Curnt', etc.
For starters I would create a dictionary with your condition sets where the keys can be used as indices for your range_name_list list:
conditions = {0: list('DFA'),
1: list('ACEF'),
2: list('CEF'),
3: list('ABDF'),
4: list('DEF'),
5: list('AC'),
6: list('ABCDE')}
The following code will then accomplish what I understand to be your task:
# Create your Range column to be filled in later.
df['Range'] = '|'
for index, row in df.iterrows():
    for ix, cols in conditions.items():
        # Create a list of the outcomes of checking whether the
        # value for each condition column is greater than the
        # df_base_val average.
        truths = [row[column] > df_base_val[column] for column in cols]
        # See if all checks evaluated to True
        if sum(truths) == len(truths):
            # If so, append the appropriate 'range_name' to the
            # 'Range' column's value for the current row
            df.loc[index, 'Range'] = df.loc[index, 'Range'] + range_name_list[ix] + "|"
# Fill in all rows where no conditions were met with 'Not_in_Range'
df.loc[df['Range'] == '|', 'Range'] = 'Not_in_Range'
try this code:
from io import BytesIO
import pandas as pd

# txt is assumed to hold the raw bytes of the table pasted in the question
df = pd.read_csv(BytesIO(txt), delim_whitespace=True)
df_base = df[df['Oscillops_read'] == 'Last_Week']
df_base_val = df_base.mean(axis=0)
columns = ['DFA', 'ACEF', 'CEF', 'ABDF', 'DEF', 'AC', 'ABCDE']
range_name_list = ['Base','Curnt','Prediction','Graph','Swg','Barometer_Output','Test_Cntr']
ranges = pd.Series(["NOT_IN_RANGE" for _ in range(df.shape[0])], index=df.index)
for name, cols in zip(range_name_list, columns):
    cols = list(cols)
    idx = df.index[(df[cols] > df_base_val[cols]).all(axis=1)]
    ranges[idx] = name
print(ranges)
But I don't know what you want if there are multiple range matches for one row.
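Following the edit in the question (list every matching range name per row), a sketch building on the variables defined in this answer:
# One boolean column per range name: True where the row exceeds the
# base values in all of that range's columns
matches = pd.DataFrame(
    {name: (df[list(cols)] > df_base_val[list(cols)]).all(axis=1)
     for name, cols in zip(range_name_list, columns)})
# Collect the names of every matching range, or fall back to a default
df['Range'] = matches.apply(
    lambda row: [n for n in row.index if row[n]] or ['Not_in_Range'],
    axis=1)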
