Pandas - How to Extract Headers Which Are Contained Within Each Row? - python

I am a beginner with Pandas and I have a large dataset in an archaic format which I would like to wrangle into Pandas format. The data looks like this:
0 1 2 3 4 5 ...
0 ì 8=xx 9=00 35=8 49=YY 56=073 ...
1 8=xx 9=00 35=8 49=YY 56=073 34=10715 ...
2 8=xx 9=00 35=8 49=YY 56=073 34=10716 ...
...
Within each cell, the header and the value are separated by "=", with the header on the left and the value on the right. Hence the data should look like this:
8 9 35 49 56 34 ...
0 xx 00 8 YY 073 107 ...
1 xx 00 8 YY 073 107 ...
2 xx 00 8 YY 073 107 ...
...
Each row has a different number of columns and there may be some repetition per row; for example, 8=xx may occur multiple times in a row. I would like to create a new column (e.g. 8_x, 8_y, ...) each time this happens. I have tried to write a for/iterrows() loop to iterate through each row, but I am not sure how to split each string and set the header in one go.
I've tried to look for a similar issue on the site but no success so far. Any help is much appreciated!
Edit: Adding in the code I used to parse the initial raw data into the format in the first table.
import pandas as pd
df = pd.read_csv('File.dat', sep='\n',nrows = 2, header=None, encoding = "ANSI")
df = df[0].str.split('<SPECIAL CHAR.>', expand=True)

As mentioned above in one of the comments on the original post, the 'right' way to deal with this would be to parse the data before it's in a dataframe. That being said, once the data is in a dataframe you can use the following code:
rows = []

def parse_row(row):
    d = {}
    for item in row[1]:  # row is an (index, Series) tuple from iterrows()
        if type(item) != str or "=" not in item:
            continue  # ignore this item
        col_name, val = item.split("=")
        if col_name in d:
            # header already seen in this row: find a free suffix
            inx = 0
            while f"{col_name}_{inx}" in d:
                inx += 1
            col_name = f"{col_name}_{inx}"
            print(f"new colname is {col_name}")
        d[col_name] = val
    return d

for row in df.iterrows():
    rows.append(parse_row(row))

pd.DataFrame(rows)
I tested it with the following input:
0 1 2 3 4 5
0 ì 8=xx 9=00 35=8 49=YY 56=073
1 8=xx 9=00 35=8 49=YY 56=073 34=10715
2 8=xx 9=00 35=8 49=YY 8=zz 34=10716
This is the output:
8 9 35 49 56 34 8_0
0 xx 00 8 YY 073 NaN NaN
1 xx 00 8 YY 073 10715 NaN
2 xx 00 8 YY NaN 10716 zz
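As noted at the top, parsing before building the dataframe is cleaner. Here is a minimal sketch of that approach; DELIM stands in for the special character from the question, and the file name and encoding are also placeholders:

import pandas as pd

DELIM = '|'  # hypothetical; substitute the real special character

rows = []
with open('File.dat', encoding='utf-8') as f:  # encoding is an assumption
    for line in f:
        d = {}
        for item in line.strip().split(DELIM):
            if '=' not in item:
                continue
            col_name, val = item.split('=', 1)
            if col_name in d:  # suffix repeated headers: 8, 8_0, 8_1, ...
                inx = 0
                while f'{col_name}_{inx}' in d:
                    inx += 1
                col_name = f'{col_name}_{inx}'
            d[col_name] = val
        rows.append(d)

df = pd.DataFrame(rows)

This avoids building the intermediate string-typed dataframe entirely.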

If the original .dat file is in a plain-text format, as one of the comments says, it can easily be transformed into CSV format:
Open the .dat file in your favorite text editor that supports regular expressions.
Copy the first line and remove all occurrences of '=[^,]+' from the copy to create the header with column names.
From the 2nd line onward, remove all occurrences of '[^,]+=' to keep only the cell values.
Save the CSV file and open it in Python with pd.read_csv(...).
This way, every time you load the CSV, chances are Pandas will guess the data type of each column correctly.
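The same transformation can also be scripted rather than done by hand. A minimal sketch, assuming the cells are comma-separated as the regexes above imply (file names and encoding are placeholders):

import re
import pandas as pd

with open('File.dat', encoding='utf-8') as f:  # encoding is an assumption
    lines = f.read().splitlines()

# header: keep only what is left of each '=' on the first line
header = re.sub(r'=[^,]*', '', lines[0])
# values: drop everything up to and including '=' in every cell
body = [re.sub(r'[^,]+=', '', line) for line in lines]

with open('File.csv', 'w', encoding='utf-8') as f:
    f.write('\n'.join([header] + body))

df = pd.read_csv('File.csv')

Note that this simple version assumes no repeated headers per row; repeated keys would need the suffixing logic from the approach above.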


Related

how to combine the first 2 columns in pandas/python with n/a values

I have a question about combining the first 2 columns in pandas/python with n/a values.
Long story short: I need to read an Excel file and make changes to it, but I cannot change anything in Excel itself, so every change has to be done in Python.
I manage to read it in, but when I try to combine the first 2 columns I run into problems: because the first column is merged in Excel, once it is read in only one row has a value and the rest of the rows are all N/A, such as below:
Year   number  2016
Month          Jan
Month          2016-01
Grade  1       100
NaN    2       99
NaN    3       98
NaN    4       96
NaN    5       92
NaN    Total   485
Is there any function that can easily help me to combine the first two columns and make it as below:
Year          2016
Month         Jan
Month         2016-01
Grade 1       100
Grade 2       99
Grade 3       98
Grade 4       96
Grade 5       92
Grade Total   485
Any help will be really appreciated.
I searched and googled the keywords for a long time but did not find any answer that fits my situation here.
import pandas as pd
from io import StringIO

d = '''
Year,number,2016
Month,,Jan
Month,,2016-01
Grade,1, 100
NaN,2, 99
NaN,3, 98
NaN,4, 96
NaN,5, 92
NaN,Total,485
'''
df = pd.read_csv(StringIO(d))
df
df['Year'] = df.Year.fillna(method='ffill')  # forward-fill the merged first column
df = df.fillna('') # skip this step if your data from excel does not have nan in col 2.
df['Year'] = df.Year + ' ' + df.number.astype('str')
df = df.drop('number',axis=1)
df
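On the sample above, the result should come out roughly as:

          Year     2016
0        Month      Jan
1        Month  2016-01
2      Grade 1      100
3      Grade 2       99
4      Grade 3       98
5      Grade 4       96
6      Grade 5       92
7  Grade Total      485

(The Month rows keep a trailing space from concatenating the empty number cell; df['Year'] = df.Year.str.strip() cleans that up if it matters.)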

how to add a new column without opening a csv file

I scraped data and exported it as csv files.
For simplicity, the data look like below
(I intentionally put in arbitrary variables just to illustrate the example):
id var1 var2 var3 ...
A 10 14 355 ...
B 35 56 22 ...
C 95 22 222 ...
D 44 55 222 ...
Since I collected the data daily, I saved my file name as city_20180814_result.csv
For example, if I collected the data in NYC at Aug 14th 2018, the corresponding file name is NYC_20180814_result.csv
Here, I want to add a new column, the date variable, into each csv file.
The desired output is shown below. To be specific, I want to add a date column (YYYYMMDD format) to each csv file, and the values will be the date when the data were collected. For example, if the csv file below was generated on Aug 14th 2018, the updated data will look like this:
id date var1 var2 var3 ...
A 20180814 10 14 355 ...
B 20180814 35 56 22 ...
C 20180814 95 22 222 ...
D 20180814 44 55 222 ...
The conventional way to do this is to open every csv file, manually add a new column, assign the corresponding date to all rows, and repeat for every csv file. But there are too many files to do this by hand. Is there any way to do this efficiently? Since the file names include the date, it would be good to use that if possible. Any help/code (using python again, or an excel macro) would be appreciated.
My solution using python's pandas package:
import os
import re
import pandas as pd
FILE_PATTERN = re.compile(r'(.*)_(\d{8})_result\.csv')

def addDate(file_dir):
    csv_list = [csvfile for csvfile in os.listdir(file_dir)
                if re.fullmatch(FILE_PATTERN, csvfile)]
    for csvname in csv_list:
        date = re.fullmatch(FILE_PATTERN, csvname).group(2)  # the YYYYMMDD part
        df = pd.read_csv(os.path.join(file_dir, csvname))
        df.insert(loc=1, column='date', value=[date] * len(df))  # insert as 2nd column
        df.to_csv(os.path.join(file_dir, csvname), index=False)  # overwrite in place
Sample input: NYC_20180814_result.csv in some_path:
A B C
0 0 1 2
1 3 4 5
2 6 7 8
Same csv after executing addDate(some_path):
A date B C
0 0 20180814 1 2
1 3 20180814 4 5
2 6 20180814 7 8
P.S. You won't see the index column in your csv file; index=False keeps pandas from writing it.

Python processing CSV file really slow

So I am trying to open a CSV file, read its fields, fix some other fields based on them, and then save the data back to csv. My problem is that the CSV file has 2 million rows. What would be the best way to speed this up?
The CSV file consists of
ID; DATE(d/m/y); SPECIAL_ID; DAY; MONTH; YEAR
I am counting how often a row with the same date appears in my records and then updating SPECIAL_ID based on that count.
Based on my previous research I decided to use pandas. I'll be processing even bigger data sets in the future (1-2 GB) - this one is around 119 MB - so it is crucial that I find a good, fast solution.
My code goes as follows:
df = pd.read_csv(filename, delimiter=';')
df_fixed = pd.DataFrame(columns=stolpci)  # when I process a row in df I append it to df_fixed
d = 31
m = 12
y = 100
s = (y, m, d)
list_dates = np.zeros(s)  # 3-dimensional array of per-date counters
for index, row in df.iterrows():
    # PROCESSING LOGIC GOES HERE
    # IT CONSISTS OF A FEW IF STATEMENTS
    list_dates[row.DAY][row.MONTH][row.YEAR] += 1
    row['special_id'] = list_dates[row.DAY][row.MONTH][row.YEAR]
    df_fixed = df_fixed.append(row.to_frame().T)
df_fixed.to_csv(filename_fixed, sep=';', encoding='utf-8')
I print a progress message every thousand rows processed. At first my script needs 3 seconds per 1000 rows, but the longer it runs the slower it gets: at row 43000 it needs 29 seconds, and so on...
Thanks for all future help :)
EDIT:
I am adding additional information about my CSV and the expected output.
ID;SPECIAL_ID;sex;age;zone;key;day;month;year
2;13012016505__-;F;1;1001001;1001001_F_1;13;1;2016
3;25122013505__-;F;4;1001001;1001001_F_4;25;12;2013
4;24022012505__-;F;5;1001001;1001001_F_5;24;2;2012
5;09032012505__-;F;5;1001001;1001001_F_5;9;3;2012
6;21082011505__-;F;6;1001001;1001001_F_6;21;8;2011
7;16082011505__-;F;6;1001001;1001001_F_6;16;8;2011
8;21102011505__-;F;6;1001001;1001001_F_6;16;8;2011
I have to replace the __- placeholder in the SPECIAL_ID field with a proper number.
For example, for the row with
ID = 2 the SPECIAL_ID will be
13012016505001 (__- got replaced by 001); if someone else in the CSV shares the same DAY, MONTH and YEAR, their __- will be replaced by 002, and so on...
So the expected output for the above rows would be
ID;SPECIAL_ID;sex;age;zone;key;day;month;year
2;13012016505001;F;1;1001001;1001001_F_1;13;1;2016
3;25122013505001;F;4;1001001;1001001_F_4;25;12;2013
4;24022012505001;F;5;1001001;1001001_F_5;24;2;2012
5;09032012505001;F;5;1001001;1001001_F_5;9;3;2012
6;21082011505001;F;6;1001001;1001001_F_6;21;8;2011
7;16082011505001;F;6;1001001;1001001_F_6;16;8;2011
8;21102011505002;F;6;1001001;1001001_F_6;16;8;2011
EDIT:
I changed my code to something like this: I fill a list of dicts with data, then convert that list to a dataframe and save it as csv. This takes around 30 minutes to complete.
list_popravljeni = []
df = pd.read_csv(filename, delimiter=';')
df_dates = df.groupby(by=['dan_roj', 'mesec_roj', 'leto_roj']).size().reset_index()
for _, date_row in df_dates.iterrows():
    df_candidates = df.loc[(df['dan_roj'] == date_row['dan_roj']) &
                           (df['mesec_roj'] == date_row['mesec_roj']) &
                           (df['leto_roj'] == date_row['leto_roj'])]
    for index, row in df_candidates.iterrows():
        vrstica = {}
        vrstica['ID'] = row['identifikator']
        vrstica['SPECIAL_ID'] = row['emso'][0:11] + str(index).zfill(2)
        vrstica['day'] = row['day']
        vrstica['MONTH'] = row['MONTH']
        vrstica['YEAR'] = row['YEAR']
        list_popravljeni.append(vrstica)
pd.DataFrame(list_popravljeni, columns=list_popravljeni[0].keys())
I think this gives what you're looking for and avoids looping. It could potentially be more efficient (I wasn't able to find a way to avoid creating the helper counts column), but it should be much faster than your current approach, which slows down as it runs because df.append copies the whole dataframe on every iteration.
df['counts'] = df.groupby(['year', 'month', 'day'])['SPECIAL_ID'].cumcount() + 1
df['counts'] = df['counts'].astype(str)
df['counts'] = df['counts'].str.zfill(3)
df['SPECIAL_ID'] = df['SPECIAL_ID'].str.slice(0, -3).str.cat(df['counts'])
I added a fake record at the end to confirm it does increment properly:
SPECIAL_ID sex age zone key day month year counts
0 13012016505001 F 1 1001001 1001001_F_1 13 1 2016 001
1 25122013505001 F 4 1001001 1001001_F_4 25 12 2013 001
2 24022012505001 F 5 1001001 1001001_F_5 24 2 2012 001
3 09032012505001 F 5 1001001 1001001_F_5 9 3 2012 001
4 21082011505001 F 6 1001001 1001001_F_6 21 8 2011 001
5 16082011505001 F 6 1001001 1001001_F_6 16 8 2011 001
6 21102011505002 F 6 1001001 1001001_F_6 16 8 2011 002
7 21102012505003 F 6 1001001 1001001_F_6 16 8 2011 003
If you want to get rid of counts, you just need:
df.drop('counts', inplace=True, axis=1)
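Putting it together with the I/O from the question, the whole fix fits in a few vectorized lines. A sketch - the file names are placeholders:

import pandas as pd

df = pd.read_csv('input.csv', sep=';')  # 'input.csv' is an assumption
counts = (df.groupby(['year', 'month', 'day']).cumcount() + 1).astype(str).str.zfill(3)
df['SPECIAL_ID'] = df['SPECIAL_ID'].str.slice(0, -3).str.cat(counts)
df.to_csv('output.csv', sep=';', index=False)

Because nothing is done row by row, this should get through the 2-million-row file far faster than the iterrows() versions.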

Pandas read_csv adds unnecessary " " to each row

I have a csv file
(I am showing the first three rows here)
HEIGHT,WEIGHT,AGE,GENDER,SMOKES,ALCOHOL,EXERCISE,TRT,PULSE1,PULSE2,YEAR
173,57,18,2,2,1,2,2,86,88,93
179,58,19,2,2,1,2,1,82,150,93
I am using pandas read_csv to read the file and put them into columns.
Here is my code:
import pandas as pd
import os
path='~/Desktop/pulse.csv'
path=os.path.expanduser(path)
my_data=pd.read_csv(path, index_col=False, header=None, quoting = 3, delimiter=',')
print my_data
The problem is that the first and last columns have " before and after their values.
Additionally, I can't get rid of the index.
I might be making some silly mistake, but I thank you for your help in advance.
Final solution - use replace with conversion to ints, and to remove " from the column names use strip:
df = pd.read_csv('pulse.csv', quoting=3)
df = df.replace('"','', regex=True).astype(int)
df.columns = df.columns.str.strip('"')
print (df.head())
HEIGHT WEIGHT AGE GENDER SMOKES ALCOHOL EXERCISE TRT PULSE1 \
0 173 57 18 2 2 1 2 2 86
1 179 58 19 2 2 1 2 1 82
2 167 62 18 2 2 1 1 1 96
3 195 84 18 1 2 1 1 2 71
4 173 64 18 2 2 1 3 2 90
PULSE2 YEAR
0 88 93
1 150 93
2 176 93
3 73 93
4 88 93
index_col=False forces pandas not to read the first column as the index, but a dataframe always needs some index, so a default one (0, 1, 2, ...) is added. The parameter can therefore be omitted here.
header=None should be removed, because it forces pandas not to read the first row (the CSV header) into the column names. The header row then becomes a data row, and since it contains strings, the numeric values get converted to strings too.
delimiter=',' should be removed too, because it is the same as sep=',', which is the default parameter.
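As an aside, quoting=3 is just the csv module's QUOTE_NONE constant; spelling it out makes the intent clearer:

import csv
import pandas as pd

df = pd.read_csv('pulse.csv', quoting=csv.QUOTE_NONE)  # same as quoting=3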
#jezrael is right - a pandas dataframe will always add an index. It's necessary.
Try something like df[0] = df[0].str.strip('"'), repeating with the last column in place of 0.
Before you do so, read your csv into a dataframe with pd.read_csv(path).

match the dataframe with the list of column names

I have two files; the first one contains the dataframe, without column names:
2008-03-13 15 56 0 25
2008-03-14 10 32 27 45
2008-03-16 40 8 54 35
2008-03-18 40 8 63 30
2008-03-19 45 32 81 25
and another file that contains the list of column names (except for the datetime column) in the following form - the output of file.read():
List(Group, Age, Income, Location)
In my real data there are many more columns and column names. The columns of the dataframe are ordered like the elements of the list, i.e. the first column corresponds to Group, the third one to Income, the last one to Location, etc.
So my goal is to name the columns of my dataframe with the elements contained in this file.
this operation will not work for obvious reasons (the datetime column is not contained in the list, and the list is not formatted as Python code):
with open(file2) as f:
    list_of_columns = f.read()
df = pd.read_csv(file1, sep='\t', names=list_of_columns)
and I can already imagine the preprocessing work of removing the word List and the parentheses from the output of file2 and adding the datetime column at the head of the list, but if you have a more elegant and quicker solution, let me know!
you can do it this way:
import re

fn = r'D:\temp\.data\36972593_header.csv'
with open(fn) as f:
    data = f.read()

# it will also tolerate it if `List(...)` is not in the first line
cols = ['Date'] + re.sub(r'.*List\((.*)\).*', r'\1', data,
                         flags=re.S | re.I | re.M).replace(' ', '').split(',')

fn = r'D:\temp\.data\36972593_data.csv'
# this will also parse the `Date` column as `datetime`
df = pd.read_csv(fn, sep=r'\s+', names=cols, parse_dates=[0])
Result:
In [82]: df
Out[82]:
Date Group Age Income Location
0 2008-03-13 15 56 0 25
1 2008-03-14 10 32 27 45
2 2008-03-16 40 8 54 35
3 2008-03-18 40 8 63 30
4 2008-03-19 45 32 81 25
In [83]: df.dtypes
Out[83]:
Date datetime64[ns]
Group int64
Age int64
Income int64
Location int64
dtype: object
If the list of column names comes as a string in exactly this format, you could do:
with open(file2) as f:
    list_of_columns = f.read()
list_of_columns = ['date'] + list_of_columns.strip()[5:-1].split(',')  # drop 'List(' and ')'
list_of_columns = [l.strip() for l in list_of_columns]  # remove leading/trailing whitespace
df = pd.read_csv(file1, sep='\t', names=list_of_columns)
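With the sample header file above, list_of_columns should come out as ['date', 'Group', 'Age', 'Income', 'Location'] before it is passed to read_csv.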
