How to convert if/else to np.where in pandas - python
My code is below.
I apply pd.to_numeric to the columns that are supposed to be int or float but come through as object. Can we make this more idiomatic pandas, e.g. by applying np.where?
if df.dtypes.all() == 'object':
    df = df.apply(pd.to_numeric, errors='coerce').fillna(df)
else:
    df = df
A simple one-liner is assign with select_dtypes, which will reassign the existing columns:
df.assign(**df.select_dtypes('O').apply(pd.to_numeric,errors='coerce').fillna(df))
np.where:

df[:] = np.where(df.dtypes == 'object',
                 df.apply(pd.to_numeric, errors='coerce').fillna(df), df)
Example (check the Price column):
d = {'CusID': {0: 1, 1: 2, 2: 3},
'Name': {0: 'Paul', 1: 'Mark', 2: 'Bill'},
'Shop': {0: 'Pascal', 1: 'Casio', 2: 'Nike'},
'Price': {0: '24000', 1: 'a', 2: '900'}}
df = pd.DataFrame(d)
print(df)
CusID Name Shop Price
0 1 Paul Pascal 24000
1 2 Mark Casio a
2 3 Bill Nike 900
df.to_dict()
{'CusID': {0: 1, 1: 2, 2: 3},
'Name': {0: 'Paul', 1: 'Mark', 2: 'Bill'},
'Shop': {0: 'Pascal', 1: 'Casio', 2: 'Nike'},
'Price': {0: '24000', 1: 'a', 2: '900'}}
(df.assign(**df.select_dtypes('O').apply(pd.to_numeric,errors='coerce')
.fillna(df)).to_dict())
{'CusID': {0: 1, 1: 2, 2: 3},
'Name': {0: 'Paul', 1: 'Mark', 2: 'Bill'},
'Shop': {0: 'Pascal', 1: 'Casio', 2: 'Nike'},
'Price': {0: 24000.0, 1: 'a', 2: 900.0}}
The equivalent of your if/else is df.mask:

df_out = df.mask(df.dtypes == 'O',
                 df.apply(pd.to_numeric, errors='coerce').fillna(df))
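All the variants above perform the same per-column coercion. A minimal runnable sketch of that logic, written as a plain loop rather than the vectorised forms so each step is explicit (sample data taken from the example above):

```python
import pandas as pd

# Sample frame from the example above; Price is object dtype
df = pd.DataFrame({'CusID': [1, 2, 3],
                   'Name': ['Paul', 'Mark', 'Bill'],
                   'Shop': ['Pascal', 'Casio', 'Nike'],
                   'Price': ['24000', 'a', '900']})

# For each object column: coerce to numeric, keep the original value
# wherever coercion produced NaN
for col in df.select_dtypes('O').columns:
    df[col] = pd.to_numeric(df[col], errors='coerce').fillna(df[col])

print(df['Price'].tolist())  # [24000.0, 'a', 900.0]
```

Purely string columns such as Name and Shop coerce to all-NaN and are therefore restored unchanged by fillna, which is exactly why the trick is safe to apply across every object column.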
Related
Custom function to replace missing values in dataframe with median located in pivot table
I am attempting to write a function to replace missing values in the 'total_income' column with the median 'total_income' provided by a pivot table, using the row's 'education' and 'income_type' to index the pivot table. I want to fill with these medians so the values are as representative as they can be. This is the first 5 rows of the dataframe as a dictionary:

{'index': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4},
 'children': {0: 1, 1: 1, 2: 0, 3: 3, 4: 0},
 'days_employed': {0: 8437.673027760233, 1: 4024.803753850451, 2: 5623.422610230956, 3: 4124.747206540018, 4: 340266.07204682194},
 'dob_years': {0: 42, 1: 36, 2: 33, 3: 32, 4: 53},
 'education': {0: "bachelor's degree", 1: 'secondary education', 2: 'secondary education', 3: 'secondary education', 4: 'secondary education'},
 'education_id': {0: 0, 1: 1, 2: 1, 3: 1, 4: 1},
 'family_status': {0: 'married', 1: 'married', 2: 'married', 3: 'married', 4: 'civil partnership'},
 'family_status_id': {0: 0, 1: 0, 2: 0, 3: 0, 4: 1},
 'gender': {0: 'F', 1: 'F', 2: 'M', 3: 'M', 4: 'F'},
 'income_type': {0: 'employee', 1: 'employee', 2: 'employee', 3: 'employee', 4: 'retiree'},
 'debt': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
 'total_income': {0: 40620.102, 1: 17932.802, 2: 23341.752, 3: 42820.568, 4: 25378.572},
 'purpose': {0: 'purchase of the house', 1: 'car purchase', 2: 'purchase of the house', 3: 'supplementary education', 4: 'to have a wedding'},
 'age_group': {0: 'adult', 1: 'adult', 2: 'adult', 3: 'adult', 4: 'older adult'}}

Here is what I am testing:

def fill_income(row):
    total_income = row['total_income']
    age_group = row['age_group']
    income_type = row['income_type']
    education = row['education']
    table = df.pivot_table(index=['age_group', 'income_type'],
                           columns='education', values='total_income',
                           aggfunc='median')
    if total_income == 'NaN':
        if age_group == 'adult':
            return table.loc[education, income_type]

My desired output is the pivot table value (the median total_income) for the dataframe row's given education and income_type.
When I test it, it returns None. Thanks in advance for your time helping me with this problem!
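A likely fix, as a sketch: assuming the missing incomes are genuine NaN floats (the illustrative data below is mine, not the asker's full frame), the comparison total_income == 'NaN' never matches, so the function falls through and returns None. Compare with pd.isna instead, and index the pivot with the (age_group, income_type) pair on the rows and education on the columns, matching how the pivot was built:

```python
import numpy as np
import pandas as pd

# Tiny illustrative frame with one missing total_income
df = pd.DataFrame({
    'age_group': ['adult', 'adult', 'adult'],
    'income_type': ['employee', 'employee', 'employee'],
    'education': ['secondary education'] * 3,
    'total_income': [20000.0, 30000.0, np.nan]})

# Build the pivot once, outside the row function, for speed
table = df.pivot_table(index=['age_group', 'income_type'],
                       columns='education', values='total_income',
                       aggfunc='median')

def fill_income(row):
    # pd.isna, not == 'NaN': a missing float never equals the string 'NaN'
    if pd.isna(row['total_income']):
        # rows are (age_group, income_type) pairs; columns are education
        return table.loc[(row['age_group'], row['income_type']),
                         row['education']]
    return row['total_income']

df['total_income'] = df.apply(fill_income, axis=1)
print(df['total_income'].tolist())  # [20000.0, 30000.0, 25000.0]
```

Building the pivot table once outside the function also avoids recomputing it for every row during apply.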
I can't get pandas to union my dataframes properly
I try to concat or append (neither is working) two 9-column dataframes together. But instead of just stacking them vertically as normal, pandas keeps trying to add 9 more empty columns as well. Do you know how to stop this? The output looks like this:

0,1,2,3,4,5,6,7,8,9,10,11,12,13,0,1,10,11,12,13,2,3,4,5,6,7,8,9
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency ,,,,,,,,,,,,,,
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.00,48,48,0,Ken,(903) 280-7878,UrTruckBroker ,,,,,,,,,,,,,,
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc. ,,,,,,,,,,,,,,
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv ,,,,,,,,,,,,,,
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency ,,,,,,,,,,,,,,
...
,,,,,,,,,,,,,,03/02/2021,Knapp,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Jackson,NE,Full,Flatbed / Step Deck,0.0,48.0,48.0
,,,,,,,,,,,,,,03/02/2021,Knapp,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Sterling,IL,Full,Flatbed / Step Deck,0.0,48.0,48.0
,,,,,,,,,,,,,,03/02/2021,Milwaukee,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Great Falls,MT,Full,Flatbed / Step Deck,0.0,45.0,48.0
,,,,,,,,,,,,,,03/02/2021,Algoma,0.0,Dispatch,(763) 432-3680,Fuze Logistics Services USA ,WI,Pamplico,SC,Full,Flatbed / Step Deck,0.0,48.0,48.0

The code is a web request to get data, which I save to a dataframe; that is then concat-ed with another dataframe that comes from a CSV, and I save all of this back to that CSV:

this_csv = 'freights_trulos.csv'
try:
    old_df = pd.read_csv(this_csv)
except BaseException as e:
    print(e)
    old_df = pd.DataFrame()
state, equip = 'DE', 'Flat'
url = "https://backend-a.trulos.com/load-table/grab_loads.php?state=%s&equipment=%s" % (state, equip)
payload = {}
headers = {
    ...
}
response = requests.request("GET", url, headers=headers, data=payload)
# print(response.text)
parsed = json.loads(response.content)
data = [r[0:13] + [r[-4].split('<br/>')[-2].split('>')[-1]] for r in parsed]
df = pd.DataFrame(data=data)
if not old_df.empty:
    # concatenate old and new and remove duplicates
    # df.reset_index(drop=True, inplace=True)
    # old_df.reset_index(drop=True, inplace=True)
    # df = pd.concat([old_df, df], ignore_index=True)  # <-- concat has the same issue as append
    df = df.append(old_df, ignore_index=True)
# remove duplicates on cols
df.drop_duplicates()
df.to_csv(this_csv, index=False)

EDIT: the appended dataframes' dtypes:

df.dtypes
Out[2]:
0     object
1     object
2     object
3     object
4     object
5     object
6     object
7     object
8     object
9     object
10    object
11    object
12    object
13    object
dtype: object

old_df.dtypes
Out[3]:
0      object
1      object
2      object
3      object
4      object
5      object
6      object
7     float64
8       int64
9       int64
10      int64
11     object
12     object
13     object
dtype: object

old_df to csv:

0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.0,48,48,0,Ken,(903) 280-7878,UrTruckBroker
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc.
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.0,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.0,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency

new_df to csv:

0,1,2,3,4,5,6,7,8,9,10,11,12,13
10/23/2020,New Castle,DE,Gary,IN,Full,Flatbed,0.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency
10/22/2020,Wilmington,DE,METHUEN,MA,Full,Flatbed / Step Deck,0.00,48,48,0,Ken,(903) 280-7878,UrTruckBroker
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,47,1,0,Dispatch,(912) 748-3801,DSV Road Inc.
10/23/2020,WILMINGTON,DE,METHUEN,MA,Full,Flatbed w/Tarps,0.00,48,1,0,Dispatch,(541) 826-4786,Sureway Transportation Co / Anderson Trucking Serv
10/30/2020,New Castle,DE,Gary,IN,Full,Flatbed,945.00,46,48,0,Dispatch,(800) 488-1860,Meadow Lark Agency
I guess the problem could be how you read the data: if I copy your sample data to Excel, split by comma, and then import it to pandas, all is fine. But if I split on comma AND whitespace, I get 9 additional columns. So you could try debugging by replacing all whitespace before creating your dataframe. I also used your sample data and it worked just fine for me when I initialize it like this:

import pandas as pd
df_new = pd.DataFrame({'0': {0: '10/23/2020', 1: '10/22/2020', 2: '10/23/2020', 3: '10/23/2020', 4: '10/30/2020'},
                       '1': {0: 'New_Castle', 1: 'Wilmington', 2: 'WILMINGTON', 3: 'WILMINGTON', 4: 'New_Castle'},
                       '2': {0: 'DE', 1: 'DE', 2: 'DE', 3: 'DE', 4: 'DE'},
                       '3': {0: 'Gary', 1: 'METHUEN', 2: 'METHUEN', 3: 'METHUEN', 4: 'Gary'},
                       '4': {0: 'IN', 1: 'MA', 2: 'MA', 3: 'MA', 4: 'IN'},
                       '5': {0: 'Full', 1: 'Full', 2: 'Full', 3: 'Full', 4: 'Full'},
                       '6': {0: 'Flatbed', 1: 'Flatbed_/_Step_Deck', 2: 'Flatbed_w/Tarps', 3: 'Flatbed_w/Tarps', 4: 'Flatbed'},
                       '7': {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 945.0},
                       '8': {0: 46, 1: 48, 2: 47, 3: 48, 4: 46},
                       '9': {0: 48, 1: 48, 2: 1, 3: 1, 4: 48},
                       '10': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0},
                       '11': {0: 'Dispatch', 1: 'Ken', 2: 'Dispatch', 3: 'Dispatch', 4: 'Dispatch'},
                       '12': {0: '(800)_488-1860', 1: '(903)_280-7878', 2: '(912)_748-3801', 3: '(541)_826-4786', 4: '(800)_488-1860'},
                       '13': {0: 'Meadow_Lark_Agency_', 1: 'UrTruckBroker_', 2: 'DSV_Road_Inc._', 3: 'Sureway_Transportation_Co_/_Anderson_Trucking_Serv_', 4: 'Meadow_Lark_Agency_'}})

df_new.append(df_old, ignore_index=True)
# OR
pd.concat([df_new, df_old])
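Another possible cause, offered as an assumption worth checking rather than a confirmed diagnosis: pd.read_csv returns string column labels '0'..'13', while a fresh pd.DataFrame(data=data) gets integer labels 0..13, and concat/append then treats them as 28 distinct columns. A small sketch reproducing the symptom and normalising the labels:

```python
import io
import pandas as pd

# Fresh frame: integer column labels 0, 1
new_df = pd.DataFrame(data=[['a', 'b']])
# Round-tripped through CSV: string column labels '0', '1'
old_df = pd.read_csv(io.StringIO('0,1\nc,d'))

# The label mismatch doubles the columns instead of stacking the rows
print(pd.concat([new_df, old_df]).shape)  # (2, 4)

# Normalising the labels before concatenating restores normal stacking
old_df.columns = old_df.columns.astype(int)
print(pd.concat([new_df, old_df], ignore_index=True).shape)  # (2, 2)
```

If this is the cause, converting old_df.columns (or new_df.columns, whichever direction you prefer) before the concat should remove the extra empty columns.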
How to convert this pandas dataframe from a tall to a wide representation, dropping a column
I have the following dataframe:

df = pd.DataFrame({'Variable': {0: 'Abs', 1: 'rho0', 2: 'cp', 3: 'K0'},
                   'Value': {0: 0.585, 1: 8220.000, 2: 435.000, 3: 11.400},
                   'Description': {0: 'foo', 1: 'foo', 2: 'foo', 3: 'foo'}})

I would like to reshape it like this:

df2 = pd.DataFrame({'Abs': {0: 0.585}, 'rho0': {0: 8220.000}, 'cp': {0: 435.000}, 'K0': {0: 11.400}})

How can I do it?

df3 = df.pivot_table(columns='Variable', values='Value')
print(df3)

Variable    Abs    K0     cp    rho0
Value     0.585  11.4  435.0  8220.0

gets very close to what I was looking for, but I'd rather do without the first column Variable, if at all possible.
You can try renaming the axis with rename_axis():

df3 = df.pivot_table(values='Value', columns='Variable').rename_axis(None, axis=1)

Additionally, if you want to reset the index:

df3 = df.pivot_table(columns='Variable').rename_axis(None, axis=1).reset_index(drop=True)

df3.to_dict()
# Output
{'Abs': {0: 0.585}, 'K0': {0: 11.4}, 'cp': {0: 435.0}, 'rho0': {0: 8220.0}}
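An alternative sketch (my own variant, not from the answer above) that skips pivot_table entirely: set the Variable column as the index, take the Value column, and transpose. Unlike pivot_table, this also preserves the original row order of the variables:

```python
import pandas as pd

df = pd.DataFrame({'Variable': ['Abs', 'rho0', 'cp', 'K0'],
                   'Value': [0.585, 8220.0, 435.0, 11.4],
                   'Description': ['foo'] * 4})

# One row, one column per Variable; rename_axis drops the leftover
# 'Variable' columns name, reset_index gives a clean 0 index
df3 = (df.set_index('Variable')['Value']
         .to_frame().T
         .rename_axis(None, axis=1)
         .reset_index(drop=True))
print(list(df3.columns))  # ['Abs', 'rho0', 'cp', 'K0']
```

Note that pivot_table sorts the columns alphabetically (Abs, K0, cp, rho0), whereas this keeps them in their original order.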
Select the rows of a dataframe if the row's name is present in the column names of another dataframe in pandas
If I have df1:

df1 = pd.DataFrame({'Col_Name': {0: 'A', 1: 'b', 2: 'c'},
                    'X': {0: 12, 1: 23, 2: 223},
                    'Z': {0: 42, 1: 33, 2: 28}})

and df2:

df2 = pd.DataFrame({'Col': {0: 'Y', 1: 'X', 2: 'Z'},
                    'Low1': {0: 0, 1: 0, 2: 0},
                    'High1': {0: 10, 1: 10, 2: 630},
                    'Low2': {0: 10, 1: 10, 2: 630},
                    'High2': {0: 50, 1: 50, 2: 3000},
                    'Low3': {0: 50, 1: 50, 2: 3000},
                    'High3': {0: 100, 1: 100, 2: 8500},
                    'Low4': {0: 100, 1: 100, 2: 8500},
                    'High4': {0: 'np.inf', 1: 'np.inf', 2: 'np.inf'}})

how do I select the rows of df2 whose Col value is present among the column names of df1? Expected output:

df3 = pd.DataFrame({'Col': {0: 'X', 1: 'Z'},
                    'Low1': {0: 0, 1: 0},
                    'High1': {0: 10, 1: 630},
                    'Low2': {0: 10, 1: 630},
                    'High2': {0: 50, 1: 3000},
                    'Low3': {0: 50, 1: 3000},
                    'High3': {0: 100, 1: 8500},
                    'Low4': {0: 100, 1: 8500},
                    'High4': {0: 'np.inf', 1: 'np.inf'}})

How to do it?
You can pass a boolean list to select the rows of df2 that you want. This list can be created by looking at each value in the Col column and asking if the value is in the columns of df1 df3 = df2[[col in df1.columns for col in df2['Col']]]
You can drop the non-relevant column and use the remaining columns:

df3 = df2[df2['Col'].isin(list(df1.drop('Col_Name', axis=1).columns))]
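A runnable sketch of the isin approach on trimmed sample frames (fewer columns than the question, for brevity). Note that dropping 'Col_Name' only matters if the label 'Col_Name' could itself appear as a value in Col; otherwise comparing against df1.columns directly is enough:

```python
import pandas as pd

# Trimmed versions of the question's frames
df1 = pd.DataFrame({'Col_Name': ['A', 'b', 'c'],
                    'X': [12, 23, 223],
                    'Z': [42, 33, 28]})
df2 = pd.DataFrame({'Col': ['Y', 'X', 'Z'],
                    'Low1': [0, 0, 0],
                    'High1': [10, 10, 630]})

# Keep only rows whose Col value is a column name of df1
df3 = df2[df2['Col'].isin(df1.columns)].reset_index(drop=True)
print(df3['Col'].tolist())  # ['X', 'Z']
```

reset_index(drop=True) is optional; it just renumbers the surviving rows from 0 as in the expected output.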
Convert dataframe to dictionary without taking column names as keys in python
I have a dataframe, given below. I want to convert it into a dictionary, but I don't want the column names as keys.

data = {'0': [0.039169993, 0.023344912],
        '1': [0.17865846, 0.01093025],
        '2': [0.039170124, 0.023344917],
        '3': [0.17865846, 0.01093025],
        '4': [0.039170124, 0.023344917]}
df = pd.DataFrame(data)

            0.0        1.0          2.0        3.0          4.0
0   0.039169993  0.17865846  0.039170124  0.17865846  0.039170124
1   0.023344912  0.01093025  0.023344917  0.01093025  0.023344917

**Desired Result**:

{{0: 0.039169993, 1: 0.023344912},
 {0: 0.17865846, 1: 0.01093025},
 {0: 0.039170124, 1: 0.023344917},
 {0: 0.17865846, 1: 0.01093025},
 {0: 0.039170124, 1: 0.023344917}}

My attempt:

df.to_dict()
{'0': {0: 0.039169993, 1: 0.023344912},
 '1': {0: 0.17865846, 1: 0.01093025},
 '2': {0: 0.039170124, 1: 0.023344917},
 '3': {0: 0.17865846, 1: 0.01093025},
 '4': {0: 0.039170124, 1: 0.023344917}}

I don't want the column names as keys. Is it possible to do?
You can use transpose (or its shorthand .T) together with .to_dict(orient='records') to obtain the desired output:

df.T.to_dict(orient='records')
The desired result has the format of a set of dictionaries, but you cannot have a set of dictionaries, because dictionaries are not hashable; you could, however, have a list.

import pandas as pd

data = {'0': [0.039169993, 0.023344912],
        '1': [0.17865846, 0.01093025],
        '2': [0.039170124, 0.023344917],
        '3': [0.17865846, 0.01093025],
        '4': [0.039170124, 0.023344917]}
df = pd.DataFrame(data)
result = list(df.to_dict().values())
print(result)

Output

[{0: 0.039170124, 1: 0.023344917},
 {0: 0.039169993, 1: 0.023344912},
 {0: 0.17865846, 1: 0.01093025},
 {0: 0.17865846, 1: 0.01093025},
 {0: 0.039170124, 1: 0.023344917}]
You can use this: df.T.to_dict(orient='records') [{0: 0.039169993, 1: 0.023344911999999999}, {0: 0.17865845999999999, 1: 0.010930250000000001}, {0: 0.039170124000000001, 1: 0.023344917}, {0: 0.17865845999999999, 1: 0.010930250000000001}, {0: 0.039170124000000001, 1: 0.023344917}]
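As a footnote to the answers above, a per-column list comprehension (my variant) produces the same list of dictionaries while making the column order explicit, which plain .to_dict() did not guarantee on older Python versions where dicts were unordered:

```python
import pandas as pd

data = {'0': [0.039169993, 0.023344912],
        '1': [0.17865846, 0.01093025],
        '2': [0.039170124, 0.023344917],
        '3': [0.17865846, 0.01093025],
        '4': [0.039170124, 0.023344917]}
df = pd.DataFrame(data)

# One dict per column, keyed by row position, in original column order
result = [df[col].reset_index(drop=True).to_dict() for col in df.columns]
print(result[0])  # {0: 0.039169993, 1: 0.023344912}
```

reset_index(drop=True) guarantees the inner keys are 0, 1, ... even if the frame had a non-default index.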