convert pandas datetime column yyyy-mm-dd to YYYYMMDD - python

I have a dataframe with a datetime column in the format yyyy-mm-dd.
I would like to have it in integer format yyyymmdd. I keep getting an error using this:
x = dates.apply(dt.datetime.strftime('%Y%m%d')).astype(int)
TypeError: descriptor 'strftime' requires a 'datetime.date' object but received a 'str'
This doesn't work because I tried to pass an array. I know that if I pass just one element it will convert, but how do I do it more Pythonically? I did try using a lambda, but that didn't work either.

If your column is a string, you will need to first convert it with pd.to_datetime (the TypeError above happens because dt.datetime.strftime is called as an unbound method, so the format string is treated as the date object):
df['Date'] = pd.to_datetime(df['Date'])
Then use the .dt datetime accessor with strftime:
df = pd.DataFrame({'Date':pd.date_range('2017-01-01', periods = 60, freq='D')})
df.Date.dt.strftime('%Y%m%d').astype(int)
Or use a lambda function:
df.Date.apply(lambda x: x.strftime('%Y%m%d')).astype(int)
Output:
0 20170101
1 20170102
2 20170103
3 20170104
4 20170105
5 20170106
6 20170107
7 20170108
8 20170109
9 20170110
10 20170111
11 20170112
12 20170113
13 20170114
14 20170115
15 20170116
16 20170117
17 20170118
18 20170119
19 20170120
20 20170121
21 20170122
22 20170123
23 20170124
24 20170125
25 20170126
26 20170127
27 20170128
28 20170129
29 20170130
30 20170131
31 20170201
32 20170202
33 20170203
34 20170204
35 20170205
36 20170206
37 20170207
38 20170208
39 20170209
40 20170210
41 20170211
42 20170212
43 20170213
44 20170214
45 20170215
46 20170216
47 20170217
48 20170218
49 20170219
50 20170220
51 20170221
52 20170222
53 20170223
54 20170224
55 20170225
56 20170226
57 20170227
58 20170228
59 20170301
Name: Date, dtype: int32
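As a side note, the string round-trip can be avoided entirely with integer arithmetic on the datetime components. A minimal sketch (not from the original answer; it assumes the same df as above):
df.Date.dt.year * 10000 + df.Date.dt.month * 100 + df.Date.dt.day
This yields the same yyyymmdd integers without formatting and re-parsing strings.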

Related

How do I convert date (YYYY-MM-DD) to Month-YY and groupby on some other column to get minimum and maximum month?

I have created a data frame which has a rolling quarter mapping, using the code below:
import numpy as np
import pandas as pd

abcd = pd.DataFrame()
abcd['Month'] = pd.date_range(start='2020-04-01', end='2022-04-01', freq='MS')
abcd['Time_1'] = np.arange(1, abcd.shape[0] + 1)
abcd['Time_2'] = np.arange(0, abcd.shape[0])
abcd['Time_3'] = np.arange(-1, abcd.shape[0] - 1)
db_nd_ad_unpivot = pd.melt(abcd, id_vars=['Month'],
                           value_vars=['Time_1', 'Time_2', 'Time_3'],
                           var_name='Time_name', value_name='Time')
abcd_map = db_nd_ad_unpivot[(db_nd_ad_unpivot['Time'] > 0) &
                            (db_nd_ad_unpivot['Time'] < abcd.shape[0] + 1)]
abcd_map = abcd_map[['Month', 'Time']]
The output of the code looks like this:
Now, I have created an additional column that gives the name of the month and year in Mon-YY format, using the code:
abcd_map['Month'] = pd.to_datetime(abcd_map.Month)
# abcd_map['Month'] = abcd_map['Month'].astype(str)
abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y"))
Now I want to see, for a specific time, what the minimum and maximum in the Month column are. For example, for time instance 17, a simple groupby gives:
Time Period
17 Aug'21-Sep'21
The desired output is:
Time Time_Period
17 Aug'21-Oct'21
I think this is because the strftime call converts the column to string/object type, so min and max are taken over strings rather than dates.
How about converting to string after finding the min and max:
New_df = (abcd_map.groupby('Time')['Month']
          .agg(['min', 'max'])
          .apply(lambda x: x.dt.strftime("%b'%y"))
          .agg('-'.join, axis=1)
          .reset_index())
Do this:
abcd_map['Time_Period'] = abcd_map['Month'].apply(lambda x: x.strftime("%b'%y"))
df = abcd_map.groupby(['Time']).agg(
    sum_col=('Time', 'sum'),
    first_date=('Time_Period', 'min'),
    last_date=('Time_Period', 'max')
).reset_index()
df['TimePeriod'] = df['first_date'] + '-' + df['last_date']
df = df.drop(['first_date', 'last_date'], axis=1)
df
which returns
Time sum_col TimePeriod
0 1 3 Apr'20-May'20
1 2 6 Jul'20-May'20
2 3 9 Aug'20-Jun'20
3 4 12 Aug'20-Sep'20
4 5 15 Aug'20-Sep'20
5 6 18 Nov'20-Sep'20
6 7 21 Dec'20-Oct'20
7 8 24 Dec'20-Nov'20
8 9 27 Dec'20-Jan'21
9 10 30 Feb'21-Mar'21
10 11 33 Apr'21-Mar'21
11 12 36 Apr'21-May'21
12 13 39 Apr'21-May'21
13 14 42 Jul'21-May'21
14 15 45 Aug'21-Jun'21
15 16 48 Aug'21-Sep'21
16 17 51 Aug'21-Sep'21
17 18 54 Nov'21-Sep'21
18 19 57 Dec'21-Oct'21
19 20 60 Dec'21-Nov'21
20 21 63 Dec'21-Jan'22
21 22 66 Feb'22-Mar'22
22 23 69 Apr'22-Mar'22
23 24 48 Apr'22-Mar'22
24 25 25 Apr'22-Apr'22
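Note that first_date and last_date here are the min and max of the formatted strings, so the comparison is alphabetical rather than chronological (hence rows like Jul'20-May'20 above). If chronological bounds are wanted, one option, as in the previous answer, is to take min and max on the datetime Month column first and format afterwards. A minimal sketch of that fix (assuming abcd_map as defined in the question):
bounds = abcd_map.groupby('Time')['Month'].agg(['min', 'max'])
df['TimePeriod'] = (bounds['min'].dt.strftime("%b'%y")
                    + '-' + bounds['max'].dt.strftime("%b'%y")).values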

Assigning random numbers to column values

I have a large dataset with the columns 'group' and 'postcode'. An example of the df is given below:
Age
65+
16-25
16-25
26-39
40-64
65+
26-39
40-64
16-25
65+
I am trying to assign each row a random value from the matching range with the code below:
from random import randint

df['AGE'] = df['AGE'].replace({'65+': randint(65, 100), '16-25': randint(16, 25),
                               '26-39': randint(26, 39), '40-64': randint(40, 64)})
But what I get is just four random values, one for each of the labels {'65+', '16-25', '26-39', '40-64'}, like so:
Age
73
23
23
34
42
73
34
42
23
73
Can someone please help me figure out what I'm doing wrong and correct my code?
You're generating the random numbers once and just replacing your column values.
If you want a different random number for each row, you need to call randint for each row. Try:
>>> df['AGE'].apply(lambda x: randint(int(x[:2]), 100 if x[-1]=="+" else int(x[-2:])))
0 82
1 23
2 18
3 27
4 45
5 83
6 38
7 64
8 17
9 93
Name: AGE, dtype: int64
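For large frames, a vectorized alternative is to map each label to its bounds once and draw all the numbers in a single call. A minimal sketch (the bounds dict is an assumption matching the labels above):
import numpy as np

# hypothetical mapping from label to (low, high) inclusive bounds
bounds = {'16-25': (16, 25), '26-39': (26, 39), '40-64': (40, 64), '65+': (65, 100)}
lo = df['AGE'].map(lambda x: bounds[x][0]).to_numpy()
hi = df['AGE'].map(lambda x: bounds[x][1]).to_numpy()
df['AGE'] = np.random.randint(lo, hi + 1)  # one draw per row; +1 keeps the upper bound inclusive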

Turn a second-order (2D) array into a pandas dataframe

I have a dataset in the form of a second-order (2D) array of arbitrary length, as shown below:
[['15,39' '17,43']
['23,40' '18,44']
['28,41' '18,45']
['28,42' '27,46']
['34,43' '26,47']
.
.
.
]
I want to turn it into a pandas dataframe with columns and rows, as shown below:
15 39 17 43
23 40 18 44
28 41 18 45
28 42 27 46
34 43 26 47
.
.
.
Does anyone have an idea how to achieve this without saving the data out to files in the process?
My strategy is to first define a function to deal with the commas and quotes. Keeping in mind that your data is already a 2-dimensional numpy array, I define the following function:
import numpy as np
import pandas as pd

def str_to_flt(lst):
    # split each '15,39'-style string on the comma and convert both parts to float
    tmp = np.array([[float(i.split(",")[0]), float(i.split(",")[1])] for i in lst])
    return tmp

df = pd.DataFrame(np.concatenate((str_to_flt(data[:, 0]), str_to_flt(data[:, 1])), axis=1))
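For instance, with a small hypothetical data array shaped like the one in the question:
data = np.array([['15,39', '17,43'],
                 ['23,40', '18,44']])
df = pd.DataFrame(np.concatenate((str_to_flt(data[:, 0]), str_to_flt(data[:, 1])), axis=1))
print(df)  # four numeric columns per row: 15.0 39.0 17.0 43.0 / 23.0 40.0 18.0 44.0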
Your data:
from io import StringIO
s="""[['15,39' '17,43']
['23,40' '18,44']
['28,41' '18,45']
['28,42' '27,46']
['34,43' '26,47']]"""
df = pd.read_csv(StringIO(s), header=None)
You can do the following. read_csv has already split each line on the embedded commas, so the regex dictionary strips the leftover bracket and quote debris, and str.extract then splits the merged middle column back into two:
d = {"\[\['": "", "'\]\]": "", "'\]\]'": "", "'\]": "", "\['": "", "' '": ","}
df = df.replace(d, regex=True)
df[[1.2, 1.5]] = df.pop(1).str.extract(r"(\d+),(\d+)")
df = df.sort_index(axis=1)
output of df:
0.0 1.2 1.5 2.0
0 15 39 17 43
1 23 40 18 44
2 28 41 18 45
3 28 42 27 46
4 34 43 26 47
Of course, you can rename the columns according to your needs using the columns attribute or the rename() method, and typecast the data using the astype() method.

Select/Group rows from a data frame with the nearest values for specific column(s)

I have two columns in a data frame (you can see a sample down below).
Usually in columns A & B I get 10 to 12 rows with similar values.
So, for example: from index 1 to 10, and then from index 11 to 21.
I would like to group these values and get the mean and standard deviation of each group.
I found the following line of code, which gets the index of the nearest value, but I don't know how to apply it repetitively:
Index = df['A'].sub(df['A'][0]).abs().idxmin()
Does anyone have ideas on how to approach this problem?
A B
1 3652.194531 -1859.805238
2 3739.026566 -1881.965576
3 3742.095325 -1878.707674
4 3747.016899 -1878.728626
5 3746.214554 -1881.270329
6 3750.325368 -1882.915532
7 3748.086576 -1882.406672
8 3751.786422 -1886.489485
9 3755.448968 -1885.695822
10 3753.714126 -1883.504098
11 -337.969554 24.070990
12 -343.019575 23.438956
13 -344.788697 22.250254
14 -346.433460 21.912217
15 -343.228579 22.178519
16 -345.722368 23.037441
17 -345.923108 23.317620
18 -345.526633 21.416528
19 -347.555162 21.315934
20 -347.229210 21.565183
21 -344.575181 22.963298
22 23.611677 -8.499528
23 26.320500 -8.744512
24 24.374874 -10.717384
25 25.885272 -8.982414
26 24.448127 -9.002646
27 23.808744 -9.568390
28 24.717935 -8.491659
29 25.811393 -8.773649
30 25.084683 -8.245354
31 25.345618 -7.508419
32 23.286342 -10.695104
33 -3184.426285 -2533.374402
34 -3209.584366 -2553.310934
35 -3210.898611 -2555.938332
36 -3214.234899 -2558.244347
37 -3216.453616 -2561.863807
38 -3219.326197 -2558.739058
39 -3214.893325 -2560.505207
40 -3194.421934 -2550.186647
41 -3219.728445 -2562.472566
42 -3217.630380 -2562.132186
43 234.800448 -75.157523
44 236.661235 -72.617806
45 238.300501 -71.963103
46 239.127539 -72.797922
47 232.305335 -70.634125
48 238.452197 -73.914015
49 239.091210 -71.035163
50 239.855953 -73.961841
51 238.936811 -73.887023
52 238.621490 -73.171441
53 240.771812 -73.847028
54 -16.798565 4.421919
55 -15.952454 3.911043
56 -14.337879 4.236691
57 -17.465204 3.610884
58 -17.270147 4.407737
59 -15.347879 3.256489
60 -18.197750 3.906086
A simpler approach consists of starting a new group wherever the absolute percentage change between consecutive values exceeds a given threshold (let's say 0.5):
df['Group'] = (df.A.pct_change().abs()>0.5).cumsum()
df.groupby('Group').agg(['mean', 'std'])
Output:
A B
mean std mean std
Group
0 3738.590934 30.769420 -1880.148905 7.582856
1 -344.724684 2.666137 22.496995 0.921008
2 24.790470 0.994361 -9.020824 0.977809
3 -3210.159806 11.646589 -2555.676749 8.810481
4 237.902230 2.439297 -72.998817 1.366350
5 -16.481411 1.341379 3.964407 0.430576
Note: I have only used the "A" column, since the "B" column appears to follow the same pattern of consecutive nearest values. You can check if the identified groups are the same between columns with:
grps = (df[['A','B']].pct_change().abs()>1).cumsum()
grps.A.eq(grps.B).all()
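The trick here is that the boolean series is True exactly at the rows where a new group starts, so its cumulative sum yields a monotonically increasing group id. A minimal sketch of the idiom on a toy series:
s = pd.Series([1.0, 1.01, 50.0, 50.5, 2.0])
(s.pct_change().abs() > 0.5).cumsum()
# 0    0
# 1    0
# 2    1
# 3    1
# 4    2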
I would say that if you know the length of each group/index set you want, then you can first subset the column and rows with:
df['A'].iloc[0:11].mean()
Then figure out a way to find the standard deviation.
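Following that idea, if the groups really do come in fixed-size blocks (say 10 rows each, which is an assumption here since the question mentions 10 to 12), the subsetting can be done in one pass with a positional group key:
import numpy as np

# label each run of 10 consecutive rows with the same group id, then aggregate
df.groupby(np.arange(len(df)) // 10)[['A', 'B']].agg(['mean', 'std'])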

How to convert a column of comma-separated strings into lists?

We have a dataframe DF_00:
CODE FACTORS
00 000049668192,000049083092,000049239900,000049304492,000049200300,000049066092
03 000049089310
08 000049239900,000049196700,000049387200
33 000049150097,000049015792
40 000049768051,000049768051,000049768051,000049768051
42 000049768051,000049768051,000049768051,000049768051
60 000049347300
61 000049089310
We need to obtain DF_01:
CODE FACTORS
00 ['000049668192','000049083092','000049239900','000049304492','000049200300','000049066092']
03 ['000049089310']
08 ['000049239900','000049196700','000049387200']
33 ['000049150097','000049015792']
40 ['000049768051','000049768051','000049768051','000049768051']
42 ['000049768051','000049768051','000049768051','000049768051']
60 ['000049347300']
61 ['000049089310']
What do we need to do?
This should do it (note that appending .astype(str) here would turn the lists back into their string representations, defeating the purpose):
df['FACTORS'] = df['FACTORS'].str.split(',')
print(df)
CODE FACTORS
0 0 ['000049668192', '000049083092', '000049239900...
1 3 ['000049089310']
2 8 ['000049239900', '000049196700', '000049387200']
3 33 ['000049150097', '000049015792']
4 40 ['000049768051', '000049768051', '000049768051...
5 42 ['000049768051', '000049768051', '000049768051...
6 60 ['000049347300']
7 61 ['000049089310']
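A quick way to confirm the cells now hold real Python lists rather than their string representation (a small check, assuming the df above):
print(type(df.loc[0, 'FACTORS']))  # <class 'list'>
print(df.loc[0, 'FACTORS'][0])     # '000049668192'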
