I am working on a data cleaning project in which I have to remove outliers from price_per_sqft. I used groupby on location, and for each group the function keeps only the rows whose price_per_sqft lies within one standard deviation of the group mean, building a data frame without outliers and concatenating it into the output data frame.
But in the output, the location names come back with extra words attached, so how can I get a clean location name instead?
Code:
def remove_pps_outliers(df):
    df_out = pd.DataFrame()
    for key, subdf in df.groupby('location'):
        m = np.mean(subdf.price_per_sqft)
        st = np.std(subdf.price_per_sqft)
        reduced_df = subdf[(subdf.price_per_sqft > (m - st)) & (subdf.price_per_sqft <= (m + st))]
        df_out = pd.concat([df_out, reduced_df], ignore_index=True)
    return df_out

df6 = remove_pps_outliers(df5)
df6.head()
Output:
(screenshot of df6.head(), where the location values start with prefixes such as "1st Phase " and "1st Block ")
How can I get the result without the "1st Phase" or "1st Block" prefixes, like this:
(screenshot of the desired output, with those prefixes removed from the location names)
A rudimentary fix would be to just replace the characters you do not want. Luckily, in this example both '1st Phase ' and '1st Block ' contain 10 characters, so you could use:
df6['location'] = df6['location'].str.slice_replace(0,10,'')
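If the unwanted prefixes are not always exactly 10 characters long, a more general sketch (assuming every prefix you want to drop looks like an ordinal followed by "Phase" or "Block" and a space, which is an assumption about your data) is a regex replace:

# strip a leading "<ordinal> Phase " or "<ordinal> Block " prefix, whatever its length
df6['location'] = df6['location'].str.replace(r'^\d+(?:st|nd|rd|th)\s+(?:Phase|Block)\s+', '', regex=True)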
I'm using numpy's np.select with multiple conditions to assign a category based on a text string, the transaction description.
Part of the code is below:
import numpy as np

conditions = [
    df2['description'].str.contains('AEGON', na=False),
    df2['description'].str.contains('IB/PVV', na=False),
    df2['description'].str.contains('Picnic', na=False),
    df2['description'].str.contains('Jumbo', na=False),
]
values = [
    'Hypotheek',
    'Hypotheek',
    'Boodschappen',
    'Boodschappen',
]
df2['Classificatie'] = np.select(conditions, values, default='unknown')
I have many more conditions; only part of them are shown here.
I want to drive this from a table / dataframe instead of including every separate condition and value in the code. So, for instance, the following dataframe:
import pandas as pd

Conditions = {'Condition': ['AEGON', 'IB/PVV', 'Picnic', 'Jumbo'],
              'Value': ['Hypotheek', 'Hypotheek', 'Boodschappen', 'Boodschappen']}
df_conditions = pd.DataFrame(Conditions, columns=['Condition', 'Value'])
How can I adjust the conditions so that str.contains looks for the text strings listed in df_conditions['Condition'] and applies the corresponding Value column to df2['Classificatie']?
The values already show up as a list in the variable explorer, but I can't find a way to make str.contains look for a value in a list / dataframe.
Desired outcome:
In [3]: iwantthis
Out[3]:
Description Classificatie
0 groceries Jumbo on date boodschappen
1 mortgage payment Aegon. Hypotheek
2 transfer picnic. Boodschappen
The first column is the input data frame; the second column is what I'm looking for.
Please note that my current code already allows me to create this column, but I want a more automated way using the df_conditions table.
I'm not yet really familiar with Python and I can't find anything online.
Try:
import re

# lower-case the conditions and use them as the index so matches can be mapped later
df_conditions["Condition"] = df_conditions["Condition"].str.lower()
df_conditions = df_conditions.set_index("Condition")

# build one case-insensitive regex out of all conditions and extract the first match
tmp = df["Description"].str.extract(
    "(" + "|".join(re.escape(c) for c in df_conditions.index) + ")",
    flags=re.I,
)

# map the matched (lower-cased) text to its Value
df["Classificatie"] = tmp[0].str.lower().map(df_conditions["Value"])
print(df)
Prints:
Description Classificatie
0 groceries Jumbo on date Boodschappen
1 mortgage payment Aegon. Hypotheek
2 transfer picnic. Boodschappen
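For reference, a minimal sketch of the two input frames that reproduces the printed result, built from the sample data in the question (the column names Description / Condition / Value are taken from the question's desired outcome):

import pandas as pd

# sample rows copied from the question's desired outcome
df = pd.DataFrame({"Description": ["groceries Jumbo on date",
                                   "mortgage payment Aegon.",
                                   "transfer picnic."]})
# condition-to-value lookup table from the question
df_conditions = pd.DataFrame({"Condition": ["AEGON", "IB/PVV", "Picnic", "Jumbo"],
                              "Value": ["Hypotheek", "Hypotheek", "Boodschappen", "Boodschappen"]})

Running the answer's snippet on these two frames prints the table shown above.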
I am doing some data mining. I have a database that looks like this (pulling out three lines):
100324822$10032482$1$PS$BENICAR$OLMESARTAN MEDOXOMIL$1$Oral$UNK$$$Y$$$$021286$$$TABLET$
1014687010$10146870$2$SS$BENICAR HCT$HYDROCHLOROTHIAZIDE\OLMESARTAN MEDOXOMIL$1$Oral$1/2 OF 40/25MG TABLET$$$Y$$$$$.5$DF$FILM-COATED TABLET$QD
115700162$11570016$5$C$Olmesartan$OLMESARTAN$1$Unknown$UNK$$$U$U$$$$$$$
My code looks like this:
with open('DRUG20Q4.txt') as fileDrug20Q4:
    drugTupleList20Q4 = [tuple(map(str, i.split('$'))) for i in fileDrug20Q4]

drug20Q4 = []
for entryDrugPrimaryID20Q4 in drugTupleList20Q4:
    drug20Q4.append((entryDrugPrimaryID20Q4[0], entryDrugPrimaryID20Q4[3], entryDrugPrimaryID20Q4[5]))

fileDrug20Q4.close()  # redundant: the with statement already closes the file

drugNameDataFrame20Q4 = pd.DataFrame(drug20Q4, columns=['PrimaryID', 'Role', 'Drug Name'])
drugNameDataFrame20Q4 = pd.DataFrame(drugNameDataFrame20Q4.loc[drugNameDataFrame20Q4['Drug Name'] == 'OLMESARTAN'])
Currently the code pulls out only entries with the exact name "OLMESARTAN". How do I capture all the variations, for instance "OLMESARTAN MEDOXOMIL"? I can't simply list all the varieties, since there is a practically unlimited number of variations, so I need something that captures anything with the term "OLMESARTAN" in it.
Thanks!
You can use str.contains to get what you are looking for.
Here's an example (using some string I found in the documentation):
import pandas as pd
df = pd.DataFrame()
item = 'Return boolean Series or Index based on whether a given pattern or regex is contained within a string of a Series or Index.'
df['test'] = item.split(' ')
df[df['test'].str.contains('de')]
This outputs:
test
4 Index
22 Index.
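Applied to the dataframe from the question, a hedged sketch (reusing the question's own variable and column names, not tested against the real file) would be:

# keep every row whose 'Drug Name' contains the substring OLMESARTAN,
# e.g. 'OLMESARTAN', 'OLMESARTAN MEDOXOMIL', 'Olmesartan' (case-insensitive)
drugNameDataFrame20Q4 = drugNameDataFrame20Q4.loc[
    drugNameDataFrame20Q4['Drug Name'].str.contains('OLMESARTAN', case=False, na=False)
]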
I want to count the number of times each value in the Child column appears in the Parent column, and then display that count in a new column named "child count". See the previews of the df below.
I have this working in VBA (COUNTIFS), but I now need a dynamic, animated visualization with data fed from a directory, so I turned to Python and Pandas and tried the code below after searching and reading answers like: Countif in pandas with multiple conditions | Determine if value is in pandas column | Iterate over rows in Pandas df | many others...
but I still can't get the expected output as illustrated in the image below.
Any help will be very much appreciated. Thanks in advance.
# import libraries
import pandas as pd
import numpy as np
import os

# get datasets
path_dataset = r'D:\Auto'
df_ns = pd.read_csv(os.path.join(path_dataset, 'Scripts', 'data.csv'), index_col=False, encoding='ISO-8859-1', engine='python')

# preview dataframe
df_ns

# tried
df_ns.groupby(['Child', 'Parent', 'Site Name']).size().reset_index(name='child count')

# preview output
df_ns.groupby(['Child', 'Parent', 'Site Name']).size().reset_index(name='child count')
(screenshots: preview of the dataframe, preview of the output, and the expected output)
[Edited] My data:
Child = ['Tkt01', 'Tkt02', 'Tkt03', 'Tkt04', 'Tkt05', 'Tkt06', 'Tkt07', 'Tkt08', 'Tkt09', 'Tkt10']
Parent = [' ', ' ', 'Tkt03', ' ', ' ', 'Tkt03', ' ', 'Tkt03', ' ', ' ', 'Tkt06', ' ', ' ', ' ']
Site_Name = ['Yaounde', 'Douala', 'Bamenda', 'Bafoussam', 'Kumba', 'Garoua', 'Maroua', 'Ngaoundere', 'Buea', 'Ebolowa']
I created a lookalike of your df.
Before
Try this code
df['Count'] = [len(df[df['parent'].str.contains(value)]) for index, value in enumerate(df['child'])]

# breaking it down as line-by-line code
counts = []
for index, value in enumerate(df['child']):
    found = df[df['parent'].str.contains(value)]
    counts.append(len(found))
df['Count'] = counts
After
Hope this works for you.
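If Parent holds exact ticket IDs rather than longer strings that merely contain them, a hedged alternative (a different technique from the contains-based loop above, assuming the question's df_ns with columns Child and Parent) is to count every Parent value once and map those counts back:

# count how often each value occurs in Parent, then look each Child up in those counts;
# children that never appear in Parent get 0
df_ns['child count'] = df_ns['Child'].map(df_ns['Parent'].value_counts()).fillna(0).astype(int)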
Since I don't have access to your data, I cannot test the code I am giving you. I suspect you will have problems with NaN values with this line, but you can give it a try:
df_ns['child_count'] = df_ns['Parent'].groupby(df_ns['Child']).value_counts()
I give a name to the new column and assign values to it directly through the groupby -> value_counts chain.
I am trying to create a function that splits text in a column of a dataframe and puts each half of the split into a different new column. I want to split the text right after a specific phrase (defined as "search_text" in the function "create_var") and then trim that text to a specified number of characters (defined as left_trim_number in the function). My function has worked in some cases but does not work in others.
Here is the basic structure of my dataframe, where "lst" is my list of text items and "cols" are the two columns of the original dataframe:
import pandas as pd
cols = ['page', 'text_i']
df1 = pd.DataFrame(lst, columns=cols)
Here is my function:
def create_var(varname, search_text, left_trim_number):
    df1[['a', varname]] = df1['text_i'].str.split(search_text, expand=True)
    df1[varname] = df1[varname].str[:left_trim_number]

create_var('var1', 'I am looking for the text that follows this ', 3)
In the cases where it doesn't work, I get this error (which I assume is related to pandas):
"ValueError: Columns must be same length as key"
Is there a better way of doing this?
You could try str.partition instead: unlike str.split, it always returns exactly three columns (the text before the match, the match itself, and the text after), so the number of columns never depends on whether the search text was found:
import pandas as pd
df = pd.DataFrame({"text":["hello world", "a", "again hello world"]})
search_text = "hello "
parts = df['text'].str.partition(search_text)
df['a'] = parts[0] + parts[1]
df['var1'] = parts[2]
df['var1'] = df['var1'].str[:3]
print(df)
Output:
text a var1
0 hello world hello wor
1 a a
2 again hello world again hello wor
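To fold this back into the question's helper, here is a sketch under the assumption that df1 and its text_i column exist exactly as in the question (a hypothetical adaptation, not tested against your data):

def create_var(varname, search_text, left_trim_number):
    # partition always yields three columns, even when search_text is absent,
    # so this cannot raise "ValueError: Columns must be same length as key"
    parts = df1['text_i'].str.partition(search_text)
    df1['a'] = parts[0] + parts[1]                    # text up to and including search_text
    df1[varname] = parts[2].str[:left_trim_number]    # trimmed text after search_text

create_var('var1', 'I am looking for the text that follows this ', 3)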
I have the following response from an API request:
<movies>
  <movie>
    <rating>5</rating>
    <name>star wars</name>
  </movie>
  <movie>
    <rating>8</rating>
    <name>jurassic park</name>
  </movie>
</movies>
Is there a way to take this information, obtain the rating and name values, and store them in a pandas Series?
The end result would look like this:
Movie Rating
5 - star Wars
8 - Jurassic park
You'll notice I've taken each of the values found in my response and added them to a single column. For example, I was looking to concatenate the 5, a '-', and "star wars" together.
Is this what you are looking for? I have explained step-by-step in the code. There was one part I did not know how to do, but I researched and figured it out.
import pandas as pd
import numpy as np
df = pd.DataFrame({'Data': ['<movies>', '<movie>', '<rating>5</rating>',
                            '<name>star wars</name>', '</movie>',
                            '<rating>8</rating>', '<name>jurassic park</name>',
                            '</movie>', '</movies>']})
#Filter for the relevant rows of data based upon the logic of the pattern. I have also
#done an optional reset of the index.
df = df.loc[df['Data'].str.contains('>.*<', regex=True)].reset_index(drop=True)
#For the rows we just filtered for, get rid of the irrelevant data with some regex
#string manipulation
df['Data'] = df['Data'].str.findall('>.*<').str[0].replace(['>','<'], '', regex=True)
# Use join with shift and add_suffix. CREDIT to @joelostblom:
# https://stackoverflow.com/questions/47450259/merge-row-with-next-row-in-dataframe-pandas
df = df.add_suffix('1').join(df.shift(-1).add_suffix('2'))
#Filter for numeric rows only
df = df.loc[df['Data1'].str.isnumeric() == True]
#Combine Columns with desired format
df['Movie Rating'] = df['Data1'] + ' - ' + df['Data2']
#Filter for only relevant column and print dataframe
df = df[['Movie Rating']]
print(df)
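As an aside, if the API response is available as a single XML string rather than as line-by-line rows in a dataframe, a shorter hedged sketch using the standard library's xml.etree.ElementTree (assuming exactly the structure shown in the question) can build the same column directly:

import xml.etree.ElementTree as ET
import pandas as pd

# the XML shown in the question, as one string
xml_response = """<movies>
<movie><rating>5</rating><name>star wars</name></movie>
<movie><rating>8</rating><name>jurassic park</name></movie>
</movies>"""

root = ET.fromstring(xml_response)
# pull rating and name out of each <movie> element and join them with ' - '
rows = [f"{m.find('rating').text} - {m.find('name').text}" for m in root.findall('movie')]
df = pd.DataFrame({'Movie Rating': rows})
print(df)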