How to fill gaps in data based on previous values [duplicate] - python

This question already has answers here:
How to replace NaNs by preceding or next values in pandas DataFrame?
(10 answers)
Closed 3 years ago.
Table 1 represents the format of my raw data. The dataset was prepared in such a way that the name of variable 1 is only mentioned for the first observation. I am exploring the dataset and would like to report the count of certain features grouped by the first variable. To achieve this I would have to transform my data into the second table (Output).
How can I achieve this with pandas?

The solution can be found in the pandas documentation under Upsampling. The method is called ffill() and is used like so:
df.ffill()
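As a minimal sketch (the column names variable1 and feature are made up for illustration), forward-filling restores the group label on every row so that a grouped count works afterwards:

import pandas as pd
import numpy as np

# hypothetical raw data: the group label is only filled on the first row of each group
df = pd.DataFrame({
    "variable1": ["A", np.nan, np.nan, "B", np.nan],
    "feature":   ["x", "y", "x", "y", "y"],
})

# ffill() propagates the last non-missing value downwards
df["variable1"] = df["variable1"].ffill()

# the grouped count now works as intended
print(df.groupby("variable1")["feature"].count())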


Count values in one column based on the categories of other column [duplicate]

This question already has answers here:
Python: get a frequency count based on two columns (variables) in pandas dataframe some row appears
(3 answers)
Closed last year.
I'm working on the following dataset:
I want to count each value in the LearnCode column for each Age category. I've tried using the groupby method but didn't manage to get it right. Can anyone help with how to do it?
You can do this using a groupby on the two columns:
results = df.groupby(by=['Age', 'LearnCode']).count()
This outputs a count for each ['Age', 'LearnCode'] pair.
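As a small sketch with made-up Age and LearnCode values: size() counts the rows in each pair (count() instead counts non-missing values per column), and unstack() turns the result into an Age-by-LearnCode table:

import pandas as pd

df = pd.DataFrame({
    "Age":       ["18-24", "18-24", "25-34", "25-34", "25-34"],
    "LearnCode": ["Online", "Books", "Online", "Online", "Books"],
})

# one count per ('Age', 'LearnCode') pair
counts = df.groupby(["Age", "LearnCode"]).size()
print(counts)

# optional: reshape into one row per Age and one column per LearnCode value
print(counts.unstack(fill_value=0))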

How to transpose values in one column to columns using Pandas? [duplicate]

This question already has answers here:
How to pivot a dataframe in Pandas? [duplicate]
(2 answers)
Closed 1 year ago.
Hi there, I have a data set that looks like df1 below and I want to make it look like df2 using pandas. I have tried pivot and transpose but can't wrap my head around how to do it. Appreciate any help!
This should do the job:
df.pivot_table(index=["AssetID"], columns='MeterName', values='MeterValue')
index: Identifier
columns: row values that will become columns
values: values to put in those columns
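A small sketch with invented AssetID / MeterName / MeterValue data; note that pivot_table aggregates duplicate (index, columns) pairs with the mean by default, which is harmless when each pair occurs only once:

import pandas as pd

# hypothetical long-format meter readings
df = pd.DataFrame({
    "AssetID":    [1, 1, 2, 2],
    "MeterName":  ["Temp", "Pressure", "Temp", "Pressure"],
    "MeterValue": [21.5, 1.2, 19.8, 1.1],
})

# one row per AssetID, one column per MeterName, cells filled with MeterValue
wide = df.pivot_table(index=["AssetID"], columns="MeterName", values="MeterValue")
print(wide)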
I often have the same trouble:
https://towardsdatascience.com/reshape-pandas-dataframe-with-pivot-table-in-python-tutorial-and-visualization-2248c2012a31
This could help next time.

Python: Create separate dataframe from a dataframe with one category of the categorical variable column [duplicate]

This question already has answers here:
How do I select rows from a DataFrame based on column values?
(16 answers)
Closed 2 years ago.
I have a dataframe like the one below with three types of status: 'complete', 'start' and 'fail'. I want to create another dataframe from it, keeping only the 'fail' status entries with their corresponding level number.
Let's do this:
fail_df = df[df['status']=='fail']
or this with str.contains:
fail_df = df[df['status'].str.contains(r'fail',case=False)]
Both ways give a new dataframe containing only rows whose status is 'fail'. However, the str.contains method is more robust to variations in capitalisation and typos.
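A short sketch on invented level/status data showing how the two filters differ when the capitalisation varies:

import pandas as pd

df = pd.DataFrame({
    "level":  [1, 2, 3, 4],
    "status": ["start", "fail", "complete", "Fail"],
})

# exact match keeps only rows whose status is exactly 'fail'
exact = df[df["status"] == "fail"]

# str.contains with case=False also catches 'Fail', 'FAILED', etc.
fuzzy = df[df["status"].str.contains("fail", case=False)]

print(exact)
print(fuzzy)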

Difference between Series & Data Frame [duplicate]

This question already has answers here:
What is the difference between a pandas Series and a single-column DataFrame?
(6 answers)
Closed 3 years ago.
If we run the value_counts function on a column of a DataFrame, it gives us a Series containing the counts of the unique values.
The type operation gives pandas.core.series.Series as a result. My question is: what is the basic difference between a Series and a DataFrame?
You can think of a Series as a column in a DataFrame, while the DataFrame itself is the whole table, if you think of it in terms of SQL.
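A quick illustration on a toy frame (column names invented): single brackets give a Series, double brackets give a one-column DataFrame, and value_counts() on a column returns a Series whose index is the unique values:

import pandas as pd

df = pd.DataFrame({"city": ["Oslo", "Oslo", "Bergen"], "temp": [3, 5, 7]})

s = df["city"]      # single brackets -> Series (one labelled column of values)
sub = df[["city"]]  # double brackets -> DataFrame with a single column

print(type(s))      # <class 'pandas.core.series.Series'>
print(type(sub))    # <class 'pandas.core.frame.DataFrame'>

# value_counts() returns a Series: unique values as the index, counts as the values
print(df["city"].value_counts())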

Pandas Categorical masking [duplicate]

This question already has answers here:
How do I select rows from a DataFrame based on column values?
(16 answers)
Closed 4 years ago.
I have what I believe is a simple question but I can't find what I'm looking for in the docs.
I have a dataframe with a Categorical column called mycol with categories a and b, and would like to mask a subset of the dataframe as follows:
df_a = df[df.mycol.equal('a')]
Currently I am doing:
df_a = df[df.mycol.cat.codes.values==df.mycol.cat.categories.to_list().index('a')]
which is obviously extremely verbose and inelegant. Since df.mycol has both the codes and the coded labels, it has all the information needed to perform this operation, so I'm wondering about the best way to go about this...
df_a = df[df["mycol"]=='a']
I believe this should work, unless by 'mask' you mean you want to actually zero out the values in the rows that don't have 'a'.
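A small sketch on a toy Categorical column (data invented): the plain comparison filters the rows, while where() keeps the frame's shape and replaces the non-matching rows with NaN, in case that is what was meant by masking:

import pandas as pd

df = pd.DataFrame({
    "mycol": pd.Categorical(["a", "b", "a", "b"]),
    "val":   [1, 2, 3, 4],
})

# boolean comparison works directly on a Categorical column
df_a = df[df["mycol"] == "a"]

# keep the original shape but blank out rows where mycol is not 'a'
masked = df.where(df["mycol"] == "a")

print(df_a)
print(masked)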
