Get count or sum of values in a dataframe for each day - python

How do I calculate the count of the 1 values and the count of the 0 values for each date?
Or: how do I divide the count of 1s by the count of 0s for each date?
The formula I am using is sentiment_value = log10(count_of_1s / count_of_0s).
         date  new_sentiment
0  2017-04-28            1.0
1  2017-04-28            1.0
2  2017-04-28            1.0
3  2017-04-27            0.0
4  2017-04-27            1.0
5  2017-04-26            0.0
6  2017-04-26            1.0
7  2017-04-26            1.0
8  2017-04-26            0.0
9  2017-04-26            1.0
My attempt so far (incomplete): result_neg = date_df.appl

You need:
import numpy as np

# count the 0s and 1s per date, with one column per sentiment value
g = data.groupby(['date', 'new_sentiment']).size().unstack(fill_value=0).reset_index()
g['sentiment_value'] = np.log(g[1.0] / g[0.0])
Note this uses the natural log; use np.log10 instead if you want base 10 to match your formula.
Output:
new_sentiment        date  0.0  1.0  sentiment_value
0              2017-04-26    2    3         0.405465
1              2017-04-27    1    1         0.000000
2              2017-04-28    0    3              inf
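Note that dates with no 0s, like 2017-04-28, divide by zero and produce inf. A minimal self-contained sketch of the same approach (sample data copied from the question; using np.log10 to match the formula, and the inf-replacement step is an addition, not part of the original answer):

import numpy as np
import pandas as pd

data = pd.DataFrame({
    'date': ['2017-04-28'] * 3 + ['2017-04-27'] * 2 + ['2017-04-26'] * 5,
    'new_sentiment': [1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0],
})

# count 0s and 1s per date, then apply the log10 formula from the question
g = data.groupby(['date', 'new_sentiment']).size().unstack(fill_value=0).reset_index()
g['sentiment_value'] = np.log10(g[1.0] / g[0.0])

# dates with no 0s divide by zero and yield inf; treat them as missing instead
g['sentiment_value'] = g['sentiment_value'].replace([np.inf, -np.inf], np.nan)
print(g)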


Fill missing dates in a pandas DataFrame

I have a lot of DataFrames with two columns, like this:
              Fecha  unidades
0        2020-01-01       2.0
84048    2020-09-01       4.0
149445   2020-10-01      11.0
532541   2020-11-01       4.0
660659   2020-12-01       2.0
1515682  2021-03-01       9.0
1563644  2021-04-01       2.0
1759823  2021-05-01       1.0
2226586  2021-07-01       1.0
As can be seen, some months are missing. How much is missing depends on the DataFrame: it could be 2 months, 10, just one, or the data could be 100% complete. I need to complete the "Fecha" column with the missing months (from 2020-01-01 to 2021-12-01) and, whenever a date is added to "Fecha", add a 0 to the "unidades" column.
Each element in the Fecha column is of class pandas._libs.tslibs.timestamps.Timestamp.
How could I fill the missing dates for each DataFrame?
You could create a date range and use the "Fecha" column with set_index + reindex to add the missing months. Then fillna + reset_index produces the desired output:
df['Fecha'] = pd.to_datetime(df['Fecha'])
df = (df.set_index('Fecha')
        # 'MS' = month-start frequency: one entry per month in the full range
        .reindex(pd.date_range('2020-01-01', '2021-12-01', freq='MS'))
        .rename_axis(['Fecha'])
        .fillna(0)  # newly added months get unidades = 0
        .reset_index())
Output:
Fecha unidades
0 2020-01-01 2.0
1 2020-02-01 0.0
2 2020-03-01 0.0
3 2020-04-01 0.0
4 2020-05-01 0.0
5 2020-06-01 0.0
6 2020-07-01 0.0
7 2020-08-01 0.0
8 2020-09-01 4.0
9 2020-10-01 11.0
10 2020-11-01 4.0
11 2020-12-01 2.0
12 2021-01-01 0.0
13 2021-02-01 0.0
14 2021-03-01 9.0
15 2021-04-01 2.0
16 2021-05-01 1.0
17 2021-06-01 0.0
18 2021-07-01 1.0
19 2021-08-01 0.0
20 2021-09-01 0.0
21 2021-10-01 0.0
22 2021-11-01 0.0
23 2021-12-01 0.0
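Since there are many DataFrames to process, the same recipe can be wrapped in a small helper and applied to each one. A minimal sketch (the function name fill_missing_months and the hard-coded default range are assumptions for illustration):

import pandas as pd

def fill_missing_months(df, start='2020-01-01', end='2021-12-01'):
    # reindex onto every month start in [start, end]; missing months get unidades = 0
    df = df.copy()
    df['Fecha'] = pd.to_datetime(df['Fecha'])
    return (df.set_index('Fecha')
              .reindex(pd.date_range(start, end, freq='MS'))
              .rename_axis('Fecha')
              .fillna(0)
              .reset_index())

# apply to the whole collection of DataFrames:
# dfs = [fill_missing_months(d) for d in dfs]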

Pandas add missing weeks from range to dataframe

I am computing a DataFrame with weekly amounts and now I need to fill it with missing weeks from a provided date range.
This is how I'm generating the dataframe with the weekly amounts:
from datetime import timedelta

# move each date back 6 days, then group into weeks ending on Sunday and sum
df['date'] = pd.to_datetime(df['date']) - timedelta(days=6)
weekly_data: pd.DataFrame = (df
    .groupby([pd.Grouper(key='date', freq='W-SUN')])[data_type]
    .sum()
    .reset_index()
)
Which outputs:
date sum
0 2020-10-11 78
1 2020-10-18 673
If a date range is given as start='2020-08-30' and end='2020-10-30', then I would expect the following dataframe:
date sum
0 2020-08-30 0.0
1 2020-09-06 0.0
2 2020-09-13 0.0
3 2020-09-20 0.0
4 2020-09-27 0.0
5 2020-10-04 0.0
6 2020-10-11 78
7 2020-10-18 673
8 2020-10-25 0.0
So far, I have managed to just add the missing weeks and set the sum to 0, but it also replaces the existing values:
weekly_data = weekly_data.reindex(pd.date_range('2020-08-30', '2020-10-30', freq='W-SUN')).fillna(0)
Which outputs:
date sum
0 2020-08-30 0.0
1 2020-09-06 0.0
2 2020-09-13 0.0
3 2020-09-20 0.0
4 2020-09-27 0.0
5 2020-10-04 0.0
6 2020-10-11 0.0 # should be 78
7 2020-10-18 0.0 # should be 673
8 2020-10-25 0.0
Remove the reset_index so the result keeps its DatetimeIndex: reindex aligns on the index, and with the default RangeIndex nothing matches the new date range, so every value becomes 0:
weekly_data = (df.groupby([pd.Grouper(key='date', freq='W-SUN')])[data_type]
.sum()
)
Then it is possible to use the fill_value=0 parameter, and finally add reset_index:
r = pd.date_range('2020-08-30', '2020-10-30', freq='W-SUN', name='date')
weekly_data = weekly_data.reindex(r, fill_value=0).reset_index()
print(weekly_data)
date sum
0 2020-08-30 0
1 2020-09-06 0
2 2020-09-13 0
3 2020-09-20 0
4 2020-09-27 0
5 2020-10-04 0
6 2020-10-11 78
7 2020-10-18 673
8 2020-10-25 0
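For reference, a self-contained version of the fixed pipeline on the two-row data from the question (the literal column name 'sum' stands in for data_type here):

import pandas as pd

df = pd.DataFrame({'date': ['2020-10-11', '2020-10-18'], 'sum': [78, 673]})
df['date'] = pd.to_datetime(df['date'])

# keep the DatetimeIndex from the groupby (no reset_index yet)
weekly_data = df.groupby([pd.Grouper(key='date', freq='W-SUN')])['sum'].sum()

# reindex onto the full weekly range; missing weeks are filled with 0
r = pd.date_range('2020-08-30', '2020-10-30', freq='W-SUN', name='date')
weekly_data = weekly_data.reindex(r, fill_value=0).reset_index()
print(weekly_data)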

Subtract value in row based on condition in pandas

I need to subtract dates based on the progression of the fault count.
Below is the table with the two input columns, Date and Fault_Count. The output columns I need are Option1 and Option2; the last two columns show the date-difference calculations. Basically, when Fault_Count changes, I need to count the number of days from the start of the previous fault-count value to the date of the change. For example, Fault_Count changed to 2 on 1/4/2020, so I need the number of days from when Fault_Count started at 0 to when it changed to 2 (i.e. 1/4/2020 - 1/1/2020 = 3).
Date       Fault_Count  Option1  Option2  Option1calc          Option2calc
1/1/2020   0            0        0
1/2/2020   0            0        0
1/3/2020   0            0        0
1/4/2020   2            3        3        1/4/2020-1/1/2020    1/4/2020-1/1/2020
1/5/2020   2            0        0
1/6/2020   2            0        0
1/7/2020   4            3        3        1/7/2020-1/4/2020    1/7/2020-1/4/2020
1/8/2020   4            0        0
1/9/2020   5            2        2        1/9/2020-1/7/2020    1/9/2020-1/7/2020
1/10/2020  5            0        0
1/11/2020  0            2        -2       1/11/2020-1/9/2020   (1/11/2020-1/9/2020)*-1 as the fault resets
1/12/2020  1            1        1        1/12/2020-1/11/2020  1/12/2020-1/11/2020
Below is the code.
import pandas as pd
d = {'Date': ['1/1/2020', '1/2/2020', '1/3/2020', '1/4/2020', '1/5/2020', '1/6/2020', '1/7/2020', '1/8/2020', '1/9/2020', '1/10/2020', '1/11/2020', '1/12/2020'], 'Fault_Count' : [0, 0, 0, 2, 2, 2, 4, 4, 5, 5, 0, 1]}
df = pd.DataFrame(d)
df['Date'] = pd.to_datetime(df['Date'])
df['Fault_count_diff'] = df.Fault_Count.diff().fillna(0)
df['Cumlative_Sum'] = df.Fault_count_diff.cumsum()
I thought I could use a cumulative sum and groupby to form the groups and then take the differences of each group's first value. That's as far as I could get; I also noticed that the cumulative sum was not giving me ordered groups, since Fault_Count sometimes resets.
Date Fault_Count Fault_count_diff Cumlative_Sum
0 2020-01-01 0 0.0 0.0
1 2020-01-02 0 0.0 0.0
2 2020-01-03 0 0.0 0.0
3 2020-01-04 2 2.0 2.0
4 2020-01-05 2 0.0 2.0
5 2020-01-06 2 0.0 2.0
6 2020-01-07 4 2.0 4.0
7 2020-01-08 4 0.0 4.0
8 2020-01-09 5 1.0 5.0
9 2020-01-10 5 0.0 5.0
10 2020-01-11 0 -5.0 0.0
11 2020-01-12 1 1.0 1.0
Desired output:
Date Fault_Count Option1 Option2
0 2020-01-01 0 0.0 0.0
1 2020-01-02 0 0.0 0.0
2 2020-01-03 0 0.0 0.0
3 2020-01-04 2 3.0 3.0
4 2020-01-05 2 0.0 0.0
5 2020-01-06 2 0.0 0.0
6 2020-01-07 4 3.0 3.0
7 2020-01-08 4 0.0 0.0
8 2020-01-09 5 2.0 2.0
9 2020-01-10 5 0.0 0.0
10 2020-01-11 0 2.0 -2.0
11 2020-01-12 1 1.0 1.0
Thanks for the help.
Use:
m1 = df['Fault_Count'].ne(df['Fault_Count'].shift(fill_value=0))
m2 = df['Fault_Count'].eq(0) & df['Fault_Count'].shift(fill_value=0).ne(0)
s = df['Date'].groupby(m1.cumsum()).transform('first')
df['Option1'] = df['Date'].sub(s.shift()).dt.days.where(m1, 0)
df['Option2'] = df['Option1'].where(~m2, df['Option1'].mul(-1))
Details:
Use Series.ne + Series.shift to create the boolean mask m1, which marks the boundaries where Fault_Count changes. Similarly, use Series.eq + Series.shift with Series.ne to create the boolean mask m2, which marks the rows where Fault_Count resets:
m1 m2
0 False False
1 False False
2 False False
3 True False
4 False False
5 False False
6 True False
7 False False
8 True False
9 False False
10 True True # --> Fault count reset
11 True False
Use Series.groupby on the consecutive-run labels obtained from m1.cumsum() and transform the Date column using groupby.first:
print(s)
0 2020-01-01
1 2020-01-01
2 2020-01-01
3 2020-01-04
4 2020-01-04
5 2020-01-04
6 2020-01-07
7 2020-01-07
8 2020-01-09
9 2020-01-09
10 2020-01-11
11 2020-01-12
Name: Date, dtype: datetime64[ns]
Use Series.sub to subtract the shifted s (via Series.shift) from Date, use Series.where to fill 0 where mask m1 is False, and assign this to Option1. Similarly, obtain Option2 from Option1 by negating the values where mask m2 is True (the reset rows):
print(df)
Date Fault_Count Option1 Option2
0 2020-01-01 0 0.0 0.0
1 2020-01-02 0 0.0 0.0
2 2020-01-03 0 0.0 0.0
3 2020-01-04 2 3.0 3.0
4 2020-01-05 2 0.0 0.0
5 2020-01-06 2 0.0 0.0
6 2020-01-07 4 3.0 3.0
7 2020-01-08 4 0.0 0.0
8 2020-01-09 5 2.0 2.0
9 2020-01-10 5 0.0 0.0
10 2020-01-11 0 2.0 -2.0
11 2020-01-12 1 1.0 1.0
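Putting it together, a runnable version of this answer on the question's sample data (only the imports and data setup are added):

import pandas as pd

d = {'Date': ['1/1/2020', '1/2/2020', '1/3/2020', '1/4/2020', '1/5/2020',
              '1/6/2020', '1/7/2020', '1/8/2020', '1/9/2020', '1/10/2020',
              '1/11/2020', '1/12/2020'],
     'Fault_Count': [0, 0, 0, 2, 2, 2, 4, 4, 5, 5, 0, 1]}
df = pd.DataFrame(d)
df['Date'] = pd.to_datetime(df['Date'])

m1 = df['Fault_Count'].ne(df['Fault_Count'].shift(fill_value=0))            # count changed
m2 = df['Fault_Count'].eq(0) & df['Fault_Count'].shift(fill_value=0).ne(0)  # count reset

s = df['Date'].groupby(m1.cumsum()).transform('first')  # start date of each run
df['Option1'] = df['Date'].sub(s.shift()).dt.days.where(m1, 0)
df['Option2'] = df['Option1'].where(~m2, df['Option1'].mul(-1))
print(df)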
Instead of df['Fault_count_diff'] = ... and the next line, do:
df['cycle'] = (df.Fault_Count.diff() < 0).cumsum()
Then, to get the days between each count change:
Option1. If all calendar dates are present in df:
ndays = df.groupby(['cycle', 'Fault_Count']).Date.size()
Option2. If there's the possibility of a date not showing up in df and you still want to get the calendar days between incidents:
ndays = df.groupby(['cycle', 'Fault_Count']).Date.min().diff().dropna()
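On the question's sample data (with Date parsed to datetime) this second option yields timedeltas of 3, 3, 2, 2 and 1 days, matching the nonzero Option1 values above. If plain integers are wanted, one extra step (an addition, not part of the answer) converts them:

ndays_int = ndays.dt.days  # timedelta64 -> integer day counts: 3, 3, 2, 2, 1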

astype() does not change floats

Even though this seems really simple, it drives me nuts. Why is .astype(int) not changing the floats to ints? Thank you
import numpy as np
import pandas as pd

df_new = pd.crosstab(df["date"], df["place"]).reset_index()
places = ['cityA', 'cityB', 'cityC']
df_new[places] = df_new[places].fillna(0).astype(int)
sums = df_new.select_dtypes(np.number).sum().rename('total')
df_new = df_new.append(sums)
print(df_new)
Output:
place date cityA cityB cityC
0 2008-01-01 0.0 0.0 51.0
1 2009-06-01 0.0 618.0 0.0
2 2015-07-01 549.0 0.0 0.0
3 2016-01-01 41.0 0.0 0.0
4 2016-04-01 62.0 0.0 0.0
5 2017-01-01 800.0 0.0 0.0
6 2018-07-01 69.0 0.0 0.0
total NaT 1521.0 618.0 51.0
If there are NAs involved (NaN is a float in pandas), the other values will be upcast to float as well. Here the appended totals row has no date value, so that row is filled with a missing value and upcast to float, which in turn upcasts the integer city columns.
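If the goal is to keep integer counts even when missing values are present, one option (an addition here, not part of the original answer) is pandas' nullable Int64 dtype, which stores integers alongside pd.NA:

import pandas as pd

s = pd.Series([1521, 618, None])
print(s.dtype)            # float64 - the None forces an upcast to float
print(s.astype('Int64'))  # nullable integer dtype keeps ints plus <NA>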

Pandas DataFrame --> GroupBy --> MultiIndex Process

I'm trying to restructure a large DataFrame of the following form as a MultiIndex:
date store_nbr item_nbr units snowfall preciptotal event
0 2012-01-01 1 1 0 0.0 0.0 0.0
1 2012-01-01 1 2 0 0.0 0.0 0.0
2 2012-01-01 1 3 0 0.0 0.0 0.0
3 2012-01-01 1 4 0 0.0 0.0 0.0
4 2012-01-01 1 5 0 0.0 0.0 0.0
I want to group by store_nbr (1-45), within each store_nbr group by item_nbr (1-111) and then for the corresponding index pair (e.g., store_nbr=12, item_nbr=109), display the rows in chronological order, so that ordered rows will look like, for example:
store_nbr=12, item_nbr=109: date=2014-02-06, units=0, snowfall=...
date=2014-02-07, units=0, snowfall=...
date=2014-02-08, units=0, snowfall=...
... ...
store_nbr=12, item_nbr=110: date=2014-02-06, units=0, snowfall=...
date=2014-02-07, units=1, snowfall=...
date=2014-02-08, units=1, snowfall=...
...
It looks like some combination of groupby and set_index might be useful here, but I'm getting stuck after the following line:
grouped = stores.set_index(['store_nbr', 'item_nbr'])
This produces the following MultiIndex:
date units snowfall preciptotal event
store_nbr item_nbr
1 1 2012-01-01 0 0.0 0.0 0.0
2 2012-01-01 0 0.0 0.0 0.0
3 2012-01-01 0 0.0 0.0 0.0
4 2012-01-01 0 0.0 0.0 0.0
5 2012-01-01 0 0.0 0.0 0.0
Does anyone have any suggestions from here? Is there an easy way to do this by manipulating groupby objects?
You can sort your rows with:
df.sort_values(by='date')
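To get the full layout the question describes (grouped by store_nbr, then item_nbr, with rows in chronological order inside each pair), one way is to sort before building the MultiIndex. A sketch, assuming stores is the question's DataFrame:

import pandas as pd

# sort so rows within each (store_nbr, item_nbr) pair are chronological
grouped = (stores.sort_values(['store_nbr', 'item_nbr', 'date'])
                 .set_index(['store_nbr', 'item_nbr']))

print(grouped.loc[(12, 109)])  # rows for one store/item pair, in date order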
