Remove special characters in pandas dataframe - python

This seems like an inherently simple task, but I am finding it very difficult to remove the '*' from my entire data frame and return the numeric values in each column, including the numbers that did not have '*'. The dataframe includes hundreds more columns and, in short, looks like this:
Time A1 A2
2.0002546296 1499 1592
2.0006712963 1252 1459
2.0902546296 1731 2223
2.0906828704 1691 1904
2.1742245370 2364 3121
2.1764699074 2096 1942
2.7654050926 *7639* *8196*
2.7658564815 *7088* *7542*
2.9048958333 *8736* *8459*
2.9053125000 *7778* *7704*
2.9807175926 *6612* *6593*
3.0585763889 *8520* *9122*
I have not written it to iterate over every column in df yet, but as far as the first column goes I have come up with this:
df['A1'].str.replace('*','').astype(float)
which yields
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 NaN
9 NaN
10 NaN
11 NaN
12 NaN
13 NaN
14 NaN
15 NaN
16 NaN
17 NaN
18 NaN
19 7639.0
20 7088.0
21 8736.0
22 7778.0
23 6612.0
24 8520.0
Is there a very easy way to just remove the '*' in the dataframe in pandas?

Use replace, which applies to the whole dataframe (your .str.replace returned NaN for the rows that hold actual numbers rather than strings):
df
Out[14]:
Time A1 A2
0 2.000255 1499 1592
1 2.176470 2096 1942
2 2.765405 *7639* *8196*
3 2.765856 *7088* *7542*
4 2.904896 *8736* *8459*
5 2.905312 *7778* *7704*
6 2.980718 *6612* *6593*
7 3.058576 *8520* *9122*
df = df.replace(r'\*', '', regex=True).astype(float)
df
Out[16]:
Time A1 A2
0 2.000255 1499 1592
1 2.176470 2096 1942
2 2.765405 7639 8196
3 2.765856 7088 7542
4 2.904896 8736 8459
5 2.905312 7778 7704
6 2.980718 6612 6593
7 3.058576 8520 9122
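If the frame can also contain strings that will not parse as numbers, a variant of the same idea (a sketch, not part of the original answer) is to convert with pd.to_numeric so that unparseable cells become NaN instead of raising:
import pandas as pd
# strip the asterisks as above, then coerce each column; anything
# that still cannot be parsed becomes NaN rather than a ValueError
df = df.replace(r'\*', '', regex=True).apply(pd.to_numeric, errors='coerce')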

I found this to be a simple approach - use replace to retain only the digits (plus the dot and minus sign).
This removes any character matched by the pattern in the to_replace argument, i.e. anything that is not a digit, dot, or minus sign.
So, the solution is:
df['A1'].replace(regex=True, inplace=True, to_replace=r'[^0-9.\-]', value=r'')
df['A1'] = df['A1'].astype(float)

There is another solution which uses map and strip functions.
You can see the below link:
Pandas DataFrame: remove unwanted parts from strings in a column.
df =
Time A1 A2
0 2.0 1258 *1364*
1 2.1 *1254* 2002
2 2.2 1520 3364
3 2.3 *300* *10056*
cols = ['A1', 'A2']
for col in cols:
    df[col] = df[col].map(lambda x: str(x).strip('*')).astype(float)
df =
Time A1 A2
0 2.0 1258 1364
1 2.1 1254 2002
2 2.2 1520 3364
3 2.3 300 10056
This way the parsing is applied only to the desired columns.
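As a side note, the same cleanup can be written with the vectorized .str accessor instead of a Python-level lambda (a sketch, assuming the same A1/A2 columns):
for col in ['A1', 'A2']:
    # view the column as strings, strip leading/trailing '*',
    # then convert to float
    df[col] = df[col].astype(str).str.strip('*').astype(float)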

I found the answer of CuriousCoder brief and useful, but in the originally posted code the value argument was closed with ']' instead of ')'. So it should be:
df['A1'].replace(regex=True, inplace=True, to_replace=r'[^0-9.\-]', value=r'')
df['A1'] = df['A1'].astype(float)

Related

How to shift location of columns based on a condition of cells of other columns in python

I need a bit of help with python. Here is what I want to achieve.
I have a dataset that looks like below:
import pandas as pd
# define data
data = {'A': [55, 'g', 35, 10, 'pj'],
        'B': [454, 27, 895, 3545, 34],
        'C': [4, 786, 7, 3, 896],
        'Phone Number': [123456789, 7, 3456789012, 4567890123, 1],
        'another_col': [None, 234567890, None, None, 215478565]}
df = pd.DataFrame(data)
A B C Phone Number another_col
0 55 454 4 123456789 None
1 g 27 786 7 234567890.0
2 35 895 7 3456789012 None
3 10 3545 3 4567890123 None
4 pj 34 896 1 215478565.0
I have extracted this data from a PDF and unfortunately it adds some random strings, as shown above in the dataframe. I want to check whether any of the cells in any of the columns contain strings or other non-numeric values. If so, I want to delete the string and shift the entire row to the left. The desired output is shown below:
A B C Phone Number another_col
0 55 454 4 1.234568e+08 None
1 27 786 7 2.345679e+08 None
2 35 895 7 3.456789e+09 None
3 10 3545 3 4.567890e+09 None
4 34 896 1 2.154786e+08 None
I would really appreciate your help.
One way is to use to_numeric to coerce each value to a number, then shift each row leftward by dropping its NaNs:
out = (df.apply(pd.to_numeric, errors='coerce')
         .apply(lambda x: pd.Series(x.dropna().tolist(),
                                    index=df.columns.drop('another_col')),
                axis=1))
Output:
A B C Phone Number
0 55.0 454.0 4.0 1.234568e+08
1 27.0 786.0 7.0 2.345679e+08
2 35.0 895.0 7.0 3.456789e+09
3 10.0 3545.0 3.0 4.567890e+09
4 34.0 896.0 1.0 2.154786e+08
You can create a boolean mask, then shift and pd.concat:
m = pd.to_numeric(df['A'], errors='coerce').isna()
pd.concat([df.loc[~m], df.loc[m].shift(-1, axis=1)]).sort_index()
Output:
A B C Phone Number another_col
0 55 454 4 1.234568e+08 NaN
1 27 786 7 2.345679e+08 NaN
2 35 895 7 3.456789e+09 NaN
3 10 3545 3 4.567890e+09 NaN
4 34 896 1 2.154786e+08 NaN
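Note the trade-off between the two answers: the mask/shift version assumes the stray string always sits in column A and that one left shift is enough, while the to_numeric/dropna version handles a bad value in any column, at the cost of dropping another_col from the result.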

Positional indexing with NA values

I need to index the dataframe by positional index, but I have NA values from a previous operation and I want to preserve them. How can I achieve this?
df1
NaN
1
NaN
NaN
NaN
6
df2
0 10
1 15
2 13
3 15
4 16
5 17
6 17
7 18
8 10
df3
0 15
1 17
The output I want
NaN
15
NaN
NaN
NaN
17
df2.iloc(df1)
IndexError: indices are out-of-bounds
The .iloc method in this case leads to an out-of-bounds error, so I think .iloc is not usable here. df3 is another output generated by .loc, but I don't know how to put the NaNs back between its values. An answer that achieves the output using df1 and df3 is also fine.
If df1 and df2 share the same index, take the values from df2 and mask them wherever df1 is missing, using DataFrame.mask with DataFrame.isna:
df1 = df2.mask(df1.isna())
print (df1)
col
0 NaN
1 15.0
2 NaN
3 NaN
4 NaN
5 17.0
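The question also allows building the output from df1 and df3. A sketch of that route, assuming both hold a single column named 'col' as in the answer above: fill the non-NaN positions of df1 with df3's values in order.
out = df1.copy()
# overwrite the non-NaN slots of df1, in order, with df3's values;
# the NaN positions are left untouched
out.loc[out['col'].notna(), 'col'] = df3['col'].to_numpy()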

Pandas how to add value to an existing data-frame by index

I have an example data frame, let's call it df. I want to add more numbers to df, but I don't want to append after the end of the frame (which would start at index 7); I want to start filling at index 3, where the NaNs begin.
year number letter
0 1945 10 a
1 1950 15 b
2 1955 20 c
3 1960 NaN NaN
4 1965 NaN NaN
5 1970 NaN NaN
6 1975 NaN NaN
Let's say we have a column like this:
number2
0 25
1 30
2 35
3 40
My target is to get a df like this:
year number letter
0 1945 10 a
1 1950 15 b
2 1955 20 c
3 1960 25 NaN
4 1965 30 NaN
5 1970 35 NaN
6 1975 40 NaN
I hope I explained it well enough. Thank you for your support!
number2 = [25,30,35,40]
df.loc[df.number.isna(), 'number'] = number2
Result df (note that this requires the length of number2 to match the number of NaN rows):
year number letter
0 1945 10 a
1 1950 15 b
2 1955 20 c
3 1960 25 NaN
4 1965 30 NaN
5 1970 35 NaN
6 1975 40 NaN

I want to change the column value for a specific index

df = pd.read_csv('test.txt',dtype=str)
print(df)
HE WE
0 aa NaN
1 181 76
2 22 13
3 NaN NaN
I want to overwrite values in this data frame at the following indexes:
dff = pd.DataFrame({'HE' : [100,30]},index=[1,2])
print(dff)
HE
1 100
2 30
for i in dff.index:
    df._set_value(i, 'HE', dff._get_value(i, 'HE'))
print(df)
HE WE
0 aa NaN
1 100 76
2 30 13
3 NaN NaN
Is there a way to change it all at once without using 'for'?
Use DataFrame.update, which works in place:
df.update(dff)
print (df)
HE WE
0 aa NaN
1 100 76.0
2 30 13.0
3 NaN NaN
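A related idiom (a sketch, not part of the original answer) is label-based assignment with loc. Note the difference: update skips NaN values in dff, while this assignment writes whatever dff holds, NaNs included:
# overwrite the cells of df addressed by dff's index and columns
df.loc[dff.index, dff.columns] = dff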

How to change consecutive repeating values in pandas dataframe series to nan or 0?

I have a pandas dataframe created from measured numbers. When something goes wrong with the measurement, the last value is repeated. I would like to do two things:
1. Change all repeating values either to nan or 0.
2. Keep the first repeating value and change all other values nan or 0.
I have found solutions using "shift", but they drop the repeating values, and I do not want to drop them. My data frame looks like this:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(15, 3))
df.iloc[4:8, 0] = 40
df.iloc[12:15, 1] = 22
df.iloc[10:12, 2] = 0.23
giving a dataframe like this:
0 1 2
0 1.239916 1.109434 0.305490
1 0.248682 1.472628 0.630074
2 -0.028584 -1.116208 0.074299
3 -0.784692 -0.774261 -1.117499
4 40.000000 0.283084 -1.495734
5 40.000000 -0.074763 -0.840403
6 40.000000 0.709794 -1.000048
7 40.000000 0.920943 0.681230
8 -0.701831 0.547689 -0.128996
9 -0.455691 0.610016 0.420240
10 -0.856768 -1.039719 0.230000
11 1.187208 0.964340 0.230000
12 0.116258 22.000000 1.119744
13 -0.501180 22.000000 0.558941
14 0.551586 22.000000 -0.993749
what I would like to be able to do is write some code that would filter the data and give me a data frame like this:
0 1 2
0 1.239916 1.109434 0.305490
1 0.248682 1.472628 0.630074
2 -0.028584 -1.116208 0.074299
3 -0.784692 -0.774261 -1.117499
4 NaN 0.283084 -1.495734
5 NaN -0.074763 -0.840403
6 NaN 0.709794 -1.000048
7 NaN 0.920943 0.681230
8 -0.701831 0.547689 -0.128996
9 -0.455691 0.610016 0.420240
10 -0.856768 -1.039719 NaN
11 1.187208 0.964340 NaN
12 0.116258 NaN 1.119744
13 -0.501180 NaN 0.558941
14 0.551586 NaN -0.993749
or even better keep the first value and change the rest to NaN. Like this:
0 1 2
0 1.239916 1.109434 0.305490
1 0.248682 1.472628 0.630074
2 -0.028584 -1.116208 0.074299
3 -0.784692 -0.774261 -1.117499
4 40.000000 0.283084 -1.495734
5 NaN -0.074763 -0.840403
6 NaN 0.709794 -1.000048
7 NaN 0.920943 0.681230
8 -0.701831 0.547689 -0.128996
9 -0.455691 0.610016 0.420240
10 -0.856768 -1.039719 0.230000
11 1.187208 0.964340 NaN
12 0.116258 22.000000 1.119744
13 -0.501180 NaN 0.558941
14 0.551586 NaN -0.993749
using shift & mask:
df.shift(1) == df compares the next row to the current for consecutive duplicates.
df.mask(df.shift(1) == df)
# outputs
0 1 2
0 0.365329 0.153527 0.143244
1 0.688364 0.495755 1.065965
2 0.354180 -0.023518 3.338483
3 -0.106851 0.296802 -0.594785
4 40.000000 0.149378 1.507316
5 NaN -1.312952 0.225137
6 NaN -0.242527 -1.731890
7 NaN 0.798908 0.654434
8 2.226980 -1.117809 -1.172430
9 -1.228234 -3.129854 -1.101965
10 0.393293 1.682098 0.230000
11 -0.029907 -0.502333 NaN
12 0.107994 22.000000 0.354902
13 -0.478481 NaN 0.531017
14 -1.517769 NaN 1.552974
If you want to mask every member of a run of consecutive duplicates, including the first, also test whether the next row is the same as the current row:
df.mask((df.shift(1) == df) | (df.shift(-1) == df))
Option 1
Specialized solution using diff. Gets at the final desired output.
df.mask(df.diff().eq(0))
0 1 2
0 1.239916 1.109434 0.305490
1 0.248682 1.472628 0.630074
2 -0.028584 -1.116208 0.074299
3 -0.784692 -0.774261 -1.117499
4 40.000000 0.283084 -1.495734
5 NaN -0.074763 -0.840403
6 NaN 0.709794 -1.000048
7 NaN 0.920943 0.681230
8 -0.701831 0.547689 -0.128996
9 -0.455691 0.610016 0.420240
10 -0.856768 -1.039719 0.230000
11 1.187208 0.964340 NaN
12 0.116258 22.000000 1.119744
13 -0.501180 NaN 0.558941
14 0.551586 NaN -0.993749
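A note on this design choice: df.diff().eq(0) only works for numeric columns, since diff subtracts; the equivalent df.mask(df.eq(df.shift())) expresses the same "equal to the previous value" test and works for any dtype.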
