I have a data set that contains hourly data for marketing campaigns. There are several campaigns, and not all of them are active during all 24 hours of the day. My goal is to eliminate, for each campaign, all rows from days that do not have the full 24 hourly rows.
The raw data contains a lot of information like this:
(screenshot: original data set)
I created a dummy column of ones so that I could count individual rows. This is the code I applied to see the results I want to get:
tmp = df.groupby(['id', 'date']).count()
tmp.query('Hour > 23')
I get the following results:
(screenshot: output of the two lines above)
These results illustrate exactly the data that I want to keep in my data frame.
How can I eliminate the data, per campaign and per day, that does not reach 24 rows? The objective is not the count but the real data, i.e. ungrouped, unlike what I show in the second picture.
I appreciate the guidance.
Use transform to broadcast the count over all rows of your dataframe, then use loc as a replacement for query:
# Broadcast each (id, date) group's row count back onto every row,
# then keep only the rows whose day has all 24 hours.
out = df.loc[df.groupby(['id', 'date'])['Hour'].transform('count')
             .loc[lambda x: x > 23].index]
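Equivalently, since transform returns a count aligned with df's own index, you can filter with a boolean mask directly (a sketch over the same assumed columns):

out = df[df.groupby(['id', 'date'])['Hour'].transform('count') > 23]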
Drop the data you don't want before you do the groupby.
You can use .loc or .drop; I am unfamiliar with .query.
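A sketch of that idea with .drop (same assumed df as above): compute each row's (id, date) group size, then drop the rows belonging to incomplete days before any further work:

counts = df.groupby(['id', 'date'])['Hour'].transform('count')
df_complete = df.drop(df.index[counts < 24])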
I have a datatable as follows:
import datatable as dt

DT_EX = dt.Frame({
    'country': ['a', 'a', 'a', 'a'],
    'id': [3, 3, 3, 3],
    'shop': ['dmart', 'dmart', 'dmart', 'dmart'],
    'beef': [23, None, None, None],
    'eggs': [None, 33, None, None],
    'fork': [None, None, 10, None],
    'veg': [None, None, None, 40]})
Each row of its output has exactly one non-missing value. I would like to convert it to a datatable with no NAs in the columns, i.e. the four rows collapsed into a single row holding 23, 33, 10, 40. Could you please explain how to do this operation (removing NAs) in py-datatable? Would dt.isna() be helpful in this case?
One way around it would be to select the first three columns (they have no nulls) and extend them with the sums of the remaining columns:
from datatable import f, first, sum

DT_EX[:, first(f[:3]).extend(sum(f[3:]))]
country id shop beef eggs fork veg
▪▪▪▪ ▪▪▪▪ ▪▪▪▪ ▪▪▪▪▪▪▪▪ ▪▪▪▪▪▪▪▪ ▪▪▪▪▪▪▪▪ ▪▪▪▪▪▪▪▪
0 a 3 dmart 23 33 10 40
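Here first(f[:3]) keeps the first value of each of the three key columns, while sum(f[3:]) collapses each item column over the whole frame; datatable's sum skips missing values, which is what makes the NAs disappear.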
UPDATE: simpler solution from another related question:
from datatable import by

DT_EX[:, sum(f[3:]), by(f[:3])]
So I have one more subgroup of items; here is a new DT.
DT_EX = dt.Frame({
    'country': ['a', 'a', 'a', 'a', 'b', 'b', 'c', 'c'],
    'id': [3, 3, 3, 3, 4, 4, 4, 4],
    'shop': ['dmart', 'dmart', 'dmart', 'dmart', 'amzn', 'amzn', 'amzn', 'amzn'],
    'beef': [23, None, None, None, 93, None, None, None],
    'eggs': [None, 33, None, None, None, 103, None, None],
    'fork': [None, None, 10, None, None, None, 210, None],
    'veg': [None, None, None, 40, None, None, None, 340]})
I have tried to apply the recommended logic to it, as shown in the attached screenshot.
In the second code chunk it summed up each column (beef, eggs, fork, veg), and in the third code chunk I grouped on the first three columns. That gives a correct output, but it adds duplicate columns; another observation is that it fills NA values with 0, which can be seen in the c rows.
Would you have any other ideas/suggestions for it?
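For reference, a sketch against this new frame (assuming by() accepts the f[:3] slice as in the update above; otherwise spell it out as by('country', 'id', 'shop')). Grouping keeps each key column exactly once, so the duplicate columns should disappear, though an all-missing group, like beef for the c rows, still sums to 0 rather than NA:

from datatable import f, by, sum

DT_EX[:, sum(f[3:]), by(f[:3])]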
I have a dataframe with three columns. The first column has 3 unique values. I used the code below to create one dataframe per unique value; however, I am unable to iterate over those dataframes and am not sure how to do so.
df = pd.read_excel("input.xlsx")
unique_groups = list(df.iloc[:, 0].unique())  # let's assume the unique values are 0, 1, 2
mtlist = []
for index, value in enumerate(unique_groups):
    globals()['df%s' % index] = df[df.iloc[:, 0] == value]
    mtlist.append('df%s' % index)
print(mtlist)
O/P
['df0', 'df1', 'df2']
For example, let's say I want to find out the length of the first unique dataframe. If I manually type the name of the DF, I get the correct output:
len(df0)
O/P
35
But I am trying to automate the code, so I want to find the length and iterate over that dataframe just as I would by typing its name.
What I'm looking for: if I try the code below,
len('df%s' % 0)
I want to get the actual length of the dataframe instead of the length of the string.
Could someone please guide me how to do this?
I have also tried to create a dictionary using the code below, but I can't figure out how to iterate over the dictionary when the DF has more than two columns, where the key would be the unique group and the value contains the remaining columns of the same rows.
df = pd.read_excel("input.xlsx")
unique_groups = list(df["Assignment Group"].unique())
length_of_unique_groups = len(unique_groups)
mtlist = []
df_dict = {name: df.loc[df['Assignment Group'] == name] for name in unique_groups}
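(For reference, a sketch of iterating such a dict: each value is a complete sub-frame, so any number of columns comes along for free.)

for name, sub_df in df_dict.items():
    print(name, len(sub_df))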
Can someone please provide a better solution?
UPDATE
SAMPLE DATA
Assignment_group Description Document
Group A Text to be updated on the ticket 1 doc1.pdf
Group B Text to be updated on the ticket 2 doc2.pdf
Group A Text to be updated on the ticket 3 doc3.pdf
Group B Text to be updated on the ticket 4 doc4.pdf
Group A Text to be updated on the ticket 5 doc5.pdf
Group B Text to be updated on the ticket 6 doc6.pdf
Group C Text to be updated on the ticket 7 doc7.pdf
Group C Text to be updated on the ticket 8 doc8.pdf
Let's assume there are 100 rows of data.
I'm trying to automate ServiceNow ticket creation with the above data.
So my end goal is that Group A tickets should go to one group; however, for each description a unique task has to be created. We can club 10 tasks into one request, so if I divide the df into separate dfs based on Assignment_group, it would be easier to iterate over (that's the only idea I could think of).
For example, let's say we have REQUEST001; within that request there will be multiple sub-tasks such as STASK001, STASK002, ... STASK010.
Hope this helps.
Your problem is easily solved by groupby, one of the most useful tools in pandas:
length_of_unique_groups = df.groupby('Assignment Group').size()
You can do all kinds of operations (sum, count, std, etc.) on your remaining columns, like getting the mean price for each group if that were a column.
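groupby also iterates directly, which fits the batching described in the update; a sketch (the 10-row chunking and the print are illustrative, not part of the answer):

for name, group in df.groupby('Assignment Group'):
    # one request per batch of up to 10 sub-tasks
    for start in range(0, len(group), 10):
        batch = group.iloc[start:start + 10]
        print(name, len(batch))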
I think you want to try something like len(eval('df%s' % 0))
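That works, but a plain list avoids eval altogether; a sketch reusing the question's own loop:

frames = [df[df.iloc[:, 0] == value] for value in unique_groups]
print(len(frames[0]))  # actual length of the first unique dataframe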
I have two dataframes of different lengths (df, df1). They share one common label, "collo_number". For every collo_number in the first dataframe I want to search the second one. The problem is that the second dataframe contains multiple rows, for different dates, for every collo_number. So I want to sum these and add the result as a new column in the first dataframe.
I currently use a loop, but it is rather slow and has to perform this operation for all 7 days in a week. Is there a way to get better performance? I tried multiple solutions but keep getting the error that I cannot use the equals sign for two dataframes of different lengths. Help would really be appreciated! Here is an example that works, but with rather bad performance:
df5 = [df1.loc[(df1.index == nasa) & (df1.afleverdag == x1) &
               (df1.ind_init_actie == "N"), "aantal_colli"].sum()
       for nasa in df.collonr]
Your description is a bit vague (hence my comment). First, what you could do is select the rows of the dataframe that you want to search:
dftmp = df1[(df1.afleverdag == x1) & (df1.ind_init_actie == 'N')]
so that you don't do this for every item in the loop.
Second, use .groupby.
newseries = dftmp['aantal_colli'].groupby(dftmp.index).sum()
# .ix is gone from modern pandas; reindex keeps the same align-by-label behaviour
newseries = newseries.reindex(df.collonr.unique())
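To then attach the sums to the first frame as a new column, as the question asks, a sketch (the column name is made up; assumes newseries is indexed by the collo numbers):

df['aantal_colli_sum'] = df['collonr'].map(newseries)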
I am working on a project in which I scraped NBA data from ESPN and created a DataFrame to store it. One of the columns of my DataFrame is Team. Certain players that have been traded within a season have a value such as LAL/LAC under team, rather than just having one team name like LAL. With these rows of data, I would like to make 2 entries instead of one. Both entries would have the same, original data, except for 1 of the entries the team name would be LAL and for the other entry the team name would be LAC. Some team abbreviations are 2 letters while others are 3 letters.
I have already managed to create a separate DataFrame with just these rows of data that have values in the form team1/team2. I figured a good way of getting the data the way I want it would be to first copy this DataFrame with the multiple team entries, and then with one DataFrame, keep everything in the Team column up until the /, and with the other, keep everything in the Team column after the slash. I'm not quite sure what the code would be for this in the context of a DataFrame. I tried the following but it is invalid syntax:
first_team = first_team['TEAM'].str[:first_team[first_team['TEAM'].index("/")]]
where first_team is my DataFrame with just the entries with multiple teams. Perhaps this can give you a better idea of what I'm trying to accomplish!
Thanks in advance!
You're probably better off using split first to separate the teams into columns (also see "Pandas DataFrame, how do I split a column into two"), something like this:
d = pd.DataFrame({'player': ['jordan', 'johnson'], 'team': ['LAL/LAC', 'LAC']})
pd.concat([d, pd.DataFrame(d.team.str.split('/').tolist(),
                           columns=['team1', 'team2'])], axis=1)
player team team1 team2
0 jordan LAL/LAC LAL LAC
1 johnson LAC LAC None
Then if you want separate rows, you can use append.
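A sketch of that separate-rows step (using explode rather than the append mentioned above, since DataFrame.append was removed in pandas 2.0):

import pandas as pd

d = pd.DataFrame({'player': ['jordan', 'johnson'], 'team': ['LAL/LAC', 'LAC']})
# split 'LAL/LAC' into a list, then give each team its own row
out = d.assign(team=d['team'].str.split('/')).explode('team', ignore_index=True)
print(out)
#     player team
# 0   jordan  LAL
# 1   jordan  LAC
# 2  johnson  LAC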