How to check a file is in the required format in Python

So I have an Excel sheet of data with twenty-something columns. The customer has a requirement to know if any column is missing from the sheet. I'm using pandas to convert the data into dataframes. I used if statements for a few columns, but as that is a rigid solution, they want something better.
Any suggestions? Are there libraries for this?
Thanks.
I want to check that the file has all the required columns, and display a "check file" message if there is an error.

Here I created a dataframe, but you would be using df = pd.read_excel('myfile.xlsx').
My dataframe has only the three following columns:
import pandas as pd

data = {'Name': ['Tom', 'Nick', 'Sarah', 'Jack'],
        'Age': [20, 21, 19, 18],
        'Sex': ['M', 'M', 'F', 'M']}
df = pd.DataFrame(data)
Then I'll make a list of the required columns:
REQUIRED_COLUMNS = [
    'Name',
    'Age',
    'Occupation',
    'Sex',
]

# Make the columns a set so each membership check is O(1)
# instead of scanning a list (avoids quadratic looping).
df_columns = set(df.columns)

for col in REQUIRED_COLUMNS:
    if col not in df_columns:
        print(f"Column '{col}' is missing.")
Et voilĂ 
>>> Column 'Occupation' is missing.
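If you'd rather flag the problem than just print it, a minimal sketch along the same lines (assuming the REQUIRED_COLUMNS list above) collects every missing column with a set difference and raises a single error; for heavier-duty validation, schema libraries such as pandera also exist:

missing = set(REQUIRED_COLUMNS) - set(df.columns)
if missing:
    # Report all missing columns at once so the file can be fixed in one pass.
    raise ValueError(f"Check file: missing columns {sorted(missing)}")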

Related

Add columns and reorder them

First data frame:
Index(['AvailabilityZone', 'CreateTime', 'Encrypted', 'Size',
       'SnapshotId', 'State', 'VolumeId', 'Iops', 'VolumeType',
       'MultiAttachEnabled', 'KmsKeyId', 'instanceId', 'name', 'Attachments'],
      dtype='object')
Second data frame:
Index(['Attachments', 'AvailabilityZone', 'CreateTime', 'Size',
       'SnapshotId', 'VolumeId', 'Iops', 'Tags', 'VolumeType',
       'KmsKeyId', 'instanceId', 'name'],
      dtype='object')
I am calling an API to pull data, but I get the columns in a different order, and some columns are present only sometimes.
For example, the first data frame has 'MultiAttachEnabled' and 'State', but the second data frame doesn't have those columns. I also want to change the column order and remove some columns such as Tags and Encrypted.
In the final CSV file I want to get:
Attachments, AvailabilityZone, CreateTime, KmsKeyId, Size, SnapshotId, State, VolumeId, Iops, VolumeType, MultiAttachEnabled, instanceId, Throughput.
You can try the following, where you add the missing columns and then order the columns by name:
import numpy as np

# Required columns
columns = ['Attachments', 'AvailabilityZone', 'CreateTime', 'KmsKeyId', 'Size',
           'SnapshotId', 'State', 'VolumeId', 'Iops', 'VolumeType',
           'MultiAttachEnabled', 'instanceId', 'Throughput']

# Find which required columns are missing from the dataframe
missing_columns = set(columns).difference(set(df.columns))

# Add each missing column, filled with NaN
for col in missing_columns:
    df[col] = np.nan

# Reorder the columns alphabetically by name
df = df.reindex(sorted(df.columns), axis=1)
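Note that sorted() gives alphabetical order, not the exact order you listed. A shorter sketch, assuming the same columns list, uses reindex to add the missing columns as NaN, drop the unwanted ones, and enforce that exact order in one step (the output filename is hypothetical):

# reindex adds any listed column that is absent (filled with NaN),
# drops unlisted columns such as Tags and Encrypted, and fixes the order.
df = df.reindex(columns=columns)
df.to_csv('volumes.csv', index=False)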

Faster way to iterate over columns in pandas

I have the following task.
I have this data:
import pandas
import numpy as np
data = {'name': ['Todd', 'Chris', 'Jackie', 'Ben', 'Richard', 'Susan', 'Joe', 'Rick'],
        'phone': [912341.0, np.nan, 912343.0, np.nan, 912345.0, 912345.0, 912347.0, np.nan],
        # note the leading space in the ' email' column name; the code below relies on it
        ' email': ['todd@gmail.com', 'chris@gmail.com', np.nan, 'ben@gmail.com', np.nan, np.nan, 'joe@gmail.com', 'rick@gmail.com'],
        'most_visited_airport': ['Heathrow', 'Beijing', 'Heathrow', np.nan, 'Tokyo', 'Beijing', 'Tokyo', 'Heathrow'],
        'most_visited_place': ['Turkey', 'Spain', np.nan, 'Germany', 'Germany', 'Spain', np.nan, 'Spain']}
df = pandas.DataFrame(data)
For every feature column (most_visited_airport, etc.) and each of its values (Heathrow, Beijing, Tokyo), I have to extract the corresponding personal information and output it to a file.
E.g., if we look at most_visited_airport and Heathrow,
I need to output three files containing the names, emails and phones of the people who visited that airport the most.
Currently, I have this code to do the operation for both columns and all the values:
columns_to_iterate = [x for x in df.columns if 'most' in x]
for each in df[columns_to_iterate]:
    values = df[each].dropna().unique()
    for i in values:
        df1 = df.loc[df[each] == i, 'name']
        df2 = df.loc[df[each] == i, ' email']
        df3 = df.loc[df[each] == i, 'phone']
        df1.to_csv(f'{each}_{i}_{df1.name}.csv')
        df2.to_csv(f'{each}_{i}_{df2.name}.csv')
        df3.to_csv(f'{each}_{i}_{df3.name}.csv')
Is it possible to do this in a more elegant and maybe faster way? Currently I have a small dataset, but I'm not sure this code will perform well with big data. My particular concern is the nested loops.
Thank you in advance!
You could replace the call to unique with a groupby, which not only gets the unique values but also splits up the dataframe for you:
for column in df.filter(regex='^most'):
    for key, group in df.groupby(column):
        for attr in ('name', 'phone', ' email'):
            # Write each attribute of the current group to its own file.
            group[attr].dropna().to_csv(f'{column}_{key}_{attr}.csv')
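One detail: the groupby above silently drops rows whose group key is NaN, so the dropna/unique on the grouping column is no longer needed. If you ever want those rows included too, pandas (1.1+) supports dropna=False; a quick sketch:

# Treat rows with a missing airport as their own NaN group (pandas >= 1.1).
for key, group in df.groupby('most_visited_airport', dropna=False):
    print(key, len(group))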
You can do it this way:
cols = df.filter(regex='most').columns.values

def func_current_cols_to_csv(most_col):
    place = df[most_col].dropna().unique().tolist()
    csv_cols = ['name', 'phone', ' email']
    result = [df[df[most_col] == i][j].dropna().to_csv(f'{most_col}_{i}_{j}.csv', index=False)
              for i in place for j in csv_cols]
    return result

[func_current_cols_to_csv(i) for i in cols]
Also, as an option when writing to CSV, you can keep the index, but don't forget to reset it before writing.
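For example, a minimal sketch of that last point (the output filename here is hypothetical):

subset = df.loc[df['most_visited_airport'] == 'Heathrow', 'name'].dropna()
# reset_index(drop=True) discards the original row labels so the written
# index is a clean 0..n-1 range.
subset.reset_index(drop=True).to_csv('heathrow_names.csv')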

Using iterrows() to fill text into a column

I want to use iterrows() to fill in predetermined substrings (Name and Age) in the column 'Unique Code' with the values coming from the other two columns, 'Name' and 'Age'. However, while the loop prints the correct output, the 'Unique Code' values do not update. Why is that?
import pandas as pd

lst = [['tom', 25, 'EVT-PS-Name-Age'], ['krish', 30, 'EVT-PS-Name-Age'],
       ['nick', 26, 'EVT-PS-Name-Age'], ['juli', 22, 'EVT-PS-Name-Age']]
df = pd.DataFrame(lst, columns=['Name', 'Age', 'Unique Code'])

for index, row in df.iterrows():
    row['Unique Code'] = str(row['Unique Code'])
    row['Age'] = str(row['Age'])
    row['Unique Code'] = row['Unique Code'].replace('Name', row['Name'])
    row['Unique Code'] = row['Unique Code'].replace('Age', row['Age'])
    print(row['Unique Code'])

df.head()
This is my intended outcome - thanks!
lst = [['tom', 25, 'EVT-PS-tom-25'], ['krish', 30, 'EVT-PS-krish-30'],
       ['nick', 26, 'EVT-PS-nick-26'], ['juli', 22, 'EVT-PS-juli-22']]
df = pd.DataFrame(lst, columns=['Name', 'Age', 'Unique Code'])
If you want to keep the loop/iterrows in your code, you can assign the value back to the dataframe
with this line at the end of your for loop: df.loc[index, 'Unique Code'] = row['Unique Code']
As for why your version does not work: the row variable yielded by iterrows() is a temporary copy, so modifying it does not affect the dataframe's rows.
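If you don't need the loop at all, a vectorized-style sketch (assuming the same column layout) builds the whole column in one pass:

# Build the new column in one list comprehension instead of mutating row copies.
df['Unique Code'] = [
    code.replace('Name', name).replace('Age', str(age))
    for name, age, code in zip(df['Name'], df['Age'], df['Unique Code'])
]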

Convert pandas dataframe group of values to multiple lists

I have a pandas dataframe where I listed items and categorised them:
col_name    | col_group
------------|----------
id          | Metadata
listing_url | Metadata
scrape_id   | Metadata
name        | Text
summary     | Text
space       | Text
To reproduce:
import pandas
df = pandas.DataFrame([
    ['id', 'metadata'],
    ['listing_url', 'metadata'],
    ['scrape_id', 'metadata'],
    ['name', 'Text'],
    ['summary', 'Text'],
    ['space', 'Text']],
    columns=['col_name', 'col_group'])
Can you suggest how I can convert this dataframe to multiple lists based on "col_group":
Metadata = ['id', 'listing_url', 'scrape_id']
Text = ['name', 'summary', 'space']
This is to allow me to pass these lists of columns to pandas and drop columns.
I googled a lot and got stuck: all the answers are about converting lists to a dataframe, not vice versa. Should I aim to convert into a dictionary, or a list of lists?
I have over 100 rows belonging to 10 categories, so I would like to avoid manual hard-coding.
I've tried this code:
import pandas
df = pandas.DataFrame([
    [1, 'url_a', 'scrap_a', 'name_a', 'summary_a', 'space_a'],
    [2, 'url_b', 'scrap_b', 'name_b', 'summary_b', 'space_b'],
    [3, 'url_c', 'scrap_c', 'name_c', 'summary_c', 'space_ac']],
    columns=['id', 'listing_url', 'scrape_id', 'name', 'summary', 'space'])
print(df)

for row in df.iterrows():
    print(row[1].to_list())
which gives this output:
[1, 'url_a', 'scrap_a', 'name_a', 'summary_a', 'space_a']
[2, 'url_b', 'scrap_b', 'name_b', 'summary_b', 'space_b']
[3, 'url_c', 'scrap_c', 'name_c', 'summary_c', 'space_ac']
You can use
for row in df[['name', 'summary', 'space']].iterrows():
to iterate over specific columns only.
Like this:
In [245]: res = df.groupby('col_group', as_index=False)['col_name'].apply(list)
In [248]: res.tolist()
Out[248]: [['id', 'listing_url', 'scrape_id'], ['name', 'summary', 'space']]
my_vars = df.groupby('col_group').agg(list)['col_name'].to_dict()
Output:
>>> my_vars
{'Text': ['name', 'summary', 'space'], 'metadata': ['id', 'listing_url', 'scrape_id']}
The recommended usage would be just my_vars['Text'] to access the Text list, and so on. If you must have these as distinct names, you can force them upon your target scope, e.g. globals:
globals().update(df.groupby('col_group').agg(list)['col_name'].to_dict())
Result:
>>> Text
['name', 'summary', 'space']
>>> metadata
['id', 'listing_url', 'scrape_id']
However I would advise against that as you might unwittingly overwrite some of your other objects, or they might not be in the proper scope you needed (e.g. locals).
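Tying this back to the stated goal of dropping columns by category, a short sketch (assuming my_vars from above and a wide dataframe df whose columns match the col_name values) could be:

# Keep only the metadata columns by dropping everything categorised as Text.
df_metadata = df.drop(columns=my_vars['Text'])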

Python pandas: appending information from a dictionary to rows while looping through dataframe

I would like to know a better way to append information to a dataframe while in a loop; specifically, to add COLUMNS of information to a dataframe from a dictionary. The code below technically works, but in subsequent analyses I would like to preserve the numpy/pandas data types so that I can efficiently classify missing or odd values as np.nan or null. Any tips would be great.
import numpy as np
import pandas as pd

raw_data = {'first_name': ['John', 'Molly', 'Tina', 'Jake', 'Amy'],
            'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
            'age': [42, 17, 16, 24, '']}
df = pd.DataFrame(raw_data, columns=['first_name', 'last_name', 'age'])

headers = df.columns.values
count = 0
adults = {'John': True, 'Molly': False}

for index, row in df.iterrows():
    count += 1
    if str(row['first_name']) in adults:
        adult = adults[str(row['first_name'])]
    else:
        adult = 'null'
    headers = np.append(headers, 'ADULT')
    vals = np.append(row.values, adult)
    if count == 1:
        print(','.join(headers.tolist()))
    print(str(vals.tolist()).replace('[', '').replace(']', '').replace("'", ""))
Output:
first_name,last_name,age,ADULT
John, Miller, 42, True
Molly, Jacobson, 17, False
Tina, Ali, 16, null
Jake, Milner, 24, null
Amy, Cooze, , null
Instead of a loop, I think you can simply use a lambda with an if/else condition:
df['ADULT'] = df['first_name'].apply(lambda v: adults[v] if v in adults else np.nan)
print(df.to_csv(index=False, na_rep='NA'))
# Output is:
# first_name,last_name,age,ADULT
# John,Miller,42,True
# Molly,Jacobson,17,False
# Tina,Ali,16,NA
# Jake,Milner,24,NA
# Amy,Cooze,,NA
In the above, adults[v] if v in adults else np.nan checks whether v, i.e. the first_name of each row, is in the dictionary; if it is, that value is kept for the new column, otherwise np.nan is used.
You can use to_csv to print in the above format: without a filename it returns a comma-separated string, and na_rep specifies the string to use for missing values.
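As an aside, an equivalent and arguably cleaner sketch uses Series.map, which yields NaN automatically for keys absent from the dictionary:

# map() looks up each first_name in adults; names not in the dict become NaN.
df['ADULT'] = df['first_name'].map(adults)
print(df.to_csv(index=False, na_rep='NA'))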
