Reading feature names from a csv file using pandas - python

I have a csv file that looks like this:
F1 F2 F3
A1 2 4 2
A2 4 1 2
When I read the file using pandas, I see that the first column is unnamed.
import pandas as pd
df = pd.read_csv("data.csv")
features = df.columns
print( features )
Index(['Unnamed: 0', 'F1', 'F2', 'F3'])
In fact, I want to get only F1, F2, and F3. I can fix that with some array manipulation, but I want to know if pandas has some built-in capability to do that. Any thoughts?
UPDATE:
Using index_col = False or None doesn't work either.

That unnamed column appears only because the index column is being read in as a regular column; you can pass the index_col=[0] argument in the read statement to resolve it.
This picks the first column as the index instead of as a feature.
import pandas as pd
df = pd.read_csv("data.csv", index_col=[0])
features = df.columns
print( features )
Index(['F1', 'F2', 'F3'])
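If you would rather keep the default integer index instead of turning the row labels into the index, an alternative (a minimal sketch, assuming the same data.csv layout as above) is to read the file as usual and then drop the auto-generated unnamed column:
import pandas as pd
df = pd.read_csv("data.csv")
# drop any column pandas auto-named "Unnamed: ..." (here, the row-label column)
df = df.loc[:, ~df.columns.str.startswith("Unnamed")]
print(df.columns)  # Index(['F1', 'F2', 'F3'], dtype='object')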


Handle variable as file with pandas dataframe

I would like to create a pandas dataframe out of a list variable.
With pd.DataFrame() I am not able to declare a delimiter, which leads to just one column per list entry.
If I use pd.read_csv() instead, I of course receive the following error:
ValueError: Invalid file path or buffer object type: <class 'list'>
Is there a way to use pd.read_csv() with my list, without first saving the list to a csv file and reading that file back in a second step?
I also tried pd.read_table(), which also needs a file or buffer object.
Example data (separated by tab stops):
Col1 Col2 Col3
12 Info1 34.1
15 Info4 674.1
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
Current workaround:
with open(f'{filepath}tmp.csv', 'w', encoding='UTF8') as f:
    [f.write(line + "\n") for line in consolidated_file]
df = pd.read_csv(f'{filepath}tmp.csv', sep='\t', index_col=1)
import pandas as pd
df = pd.DataFrame([x.split('\t') for x in test])
print(df)
If you want the first row to be used as the header, then:
df.columns = df.iloc[0]
df = df[1:]
It seems simpler to convert it to a nested list, as in the other answer:
import pandas as pd
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
data = [line.split('\t') for line in test]
df = pd.DataFrame(data[1:], columns=data[0])
but you can also convert it back to a single string (or get it directly from a socket/network as a single string) and then use io.BytesIO or io.StringIO to simulate a file in memory.
import pandas as pd
import io
test = ["Col1\tCol2\tCol3", "12\tInfo1\t34.1","15\tInfo4\t674.1"]
single_string = "\n".join(test)
file_like_object = io.StringIO(single_string)
df = pd.read_csv(file_like_object, sep='\t')
or shorter
df = pd.read_csv(io.StringIO("\n".join(test)), sep='\t')
This method is popular when you get the data from the network (socket, web API) as a single string or as bytes.
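For completeness, a minimal sketch of the io.BytesIO variant, assuming the data arrives as bytes rather than str (the byte string below just mirrors the example data):
import pandas as pd
import io
raw_bytes = b"Col1\tCol2\tCol3\n12\tInfo1\t34.1\n15\tInfo4\t674.1"  # e.g. the body of an HTTP response
df = pd.read_csv(io.BytesIO(raw_bytes), sep='\t')
print(df)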

Import all Excel sheets from an Excel file into pandas [duplicate]

I am trying to read an excel file this way :
newFile = pd.ExcelFile("PATH\FileName.xlsx")
ParsedData = pd.io.parsers.ExcelFile.parse(newFile)
This throws an error that says two arguments are expected. I don't know what the second argument is, and what I am trying to achieve here is to convert an Excel file to a DataFrame. Am I doing it the right way, or is there any other way to do this using pandas?
You're close: first you call ExcelFile, but then you call the .parse method and pass it the sheet name.
>>> xl = pd.ExcelFile("dummydata.xlsx")
>>> xl.sheet_names
[u'Sheet1', u'Sheet2', u'Sheet3']
>>> df = xl.parse("Sheet1")
>>> df.head()
Tid dummy1 dummy2 dummy3 dummy4 dummy5 \
0 2006-09-01 00:00:00 0 5.894611 0.605211 3.842871 8.265307
1 2006-09-01 01:00:00 0 5.712107 0.605211 3.416617 8.301360
2 2006-09-01 02:00:00 0 5.105300 0.605211 3.090865 8.335395
3 2006-09-01 03:00:00 0 4.098209 0.605211 3.198452 8.170187
4 2006-09-01 04:00:00 0 3.338196 0.605211 2.970015 7.765058
dummy6 dummy7 dummy8 dummy9
0 0.623354 0 2.579108 2.681728
1 0.554211 0 7.210000 3.028614
2 0.567841 0 6.940000 3.644147
3 0.581470 0 6.630000 4.016155
4 0.595100 0 6.350000 3.974442
What you're doing is calling the method which lives on the class itself, rather than the instance, which is okay (although not very idiomatic), but if you're doing that you would also need to pass the sheet name:
>>> parsed = pd.io.parsers.ExcelFile.parse(xl, "Sheet1")
>>> parsed.columns
Index([u'Tid', u'dummy1', u'dummy2', u'dummy3', u'dummy4', u'dummy5', u'dummy6', u'dummy7', u'dummy8', u'dummy9'], dtype=object)
This is a much simpler and easier way.
import pandas
df = pandas.read_excel(open('your_xls_xlsx_filename', 'rb'), sheet_name='Sheet 1')
# or using the sheet index (starting at 0)
df = pandas.read_excel(open('your_xls_xlsx_filename', 'rb'), sheet_name=2)
Check out the documentation for full details.
Note: the sheetname keyword is deprecated in newer pandas versions (it triggers a FutureWarning); use sheet_name instead, as shown above.
Thought I should add here that if you want to access rows or columns to loop through them, you do this:
import pandas as pd
# open the file
xlsx = pd.ExcelFile("PATH\FileName.xlsx")
# get the first sheet as an object
sheet1 = xlsx.parse(0)
# get the first column as a list you can loop through
# where the 0 is in the code below, change it to the row or column number you want
column = sheet1.icol(0).real
# get the first row as a list you can loop through
row = sheet1.irow(0).real
Edit:
The methods icol(i) and irow(i) are deprecated now. You can use sheet1.iloc[:,i] to get the i-th col and sheet1.iloc[i,:] to get the i-th row.
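For example, a small sketch of the iloc equivalent, assuming sheet1 was parsed as above:
# loop through the values of the first column
for value in sheet1.iloc[:, 0]:
    print(value)
# loop through the values of the first row
for value in sheet1.iloc[0, :]:
    print(value)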
I think this should satisfy your need:
import pandas as pd
# Read the excel sheet to pandas dataframe
df = pd.read_excel("PATH\FileName.xlsx", sheet_name=0) #corrected argument name
Here is an updated method with syntax that is more common in python code. It also prevents you from opening the same file multiple times.
import pandas as pd
sheet1, sheet2 = None, None
with pd.ExcelFile("PATH\FileName.xlsx") as reader:
    sheet1 = pd.read_excel(reader, sheet_name='Sheet1')
    sheet2 = pd.read_excel(reader, sheet_name='Sheet2')
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html
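Since the question is about importing all sheets, it is worth noting that read_excel also accepts sheet_name=None, which returns a dict of DataFrames keyed by sheet name; a minimal sketch (the path is just a placeholder):
import pandas as pd
# sheet_name=None loads every sheet into a dict of {sheet name: DataFrame}
all_sheets = pd.read_excel("PATH/FileName.xlsx", sheet_name=None)
for name, frame in all_sheets.items():
    print(name, frame.shape)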
You just need to feed the path to your file to pd.read_excel
import pandas as pd
file_path = "./my_excel.xlsx"
data_frame = pd.read_excel(file_path)
Check out the documentation to explore parameters like skiprows, which ignores rows when loading the Excel file.
import pandas as pd
data = pd.read_excel(r'YourPath.xlsx')
print(data)
Loading an Excel file without explicitly naming a sheet, but instead giving the position of the sheet (often one will simply load the first sheet), goes like this:
import pandas as pd
myexcel = pd.ExcelFile("C:/filename.xlsx")
myexcel = myexcel.parse(myexcel.sheet_names[0])
Since .sheet_names returns a list of sheet names, it is easy to load one or more sheets by simply calling the list element(s).
All of these work for me:
In [1]: import pandas as pd
In [2]: df = pd.read_excel('FileName.xlsx') # If there is only one sheet in the excel file
In [3]: df = pd.read_excel('FileName.xlsx', sheet_name=0)
In [4]: df = pd.read_excel('FileName.xlsx', sheet_name='Sheet 1')
#load pandas library
import pandas as pd
#set path where the file is
path = "./myfile.xlsx"
#load the file into dataframe df
df = pd.read_excel(path)
#check the first 5 rows
df.head(5)

Having trouble removing headers when using pd.read_csv

I have a .csv that contains column headers and is displayed below. I need to suppress the column labeling when I ingest the file as a data frame.
date,color,id,zip,weight,height,locale
11/25/2013,Blue,122468,1417464,3546600,254,7
When I issue the following command:
df = pd.read_csv('c:/temp1/test_csv.csv', usecols=[4,5], names = ["zip","weight"], header = 0, nrows=10)
I get:
zip weight
0 1417464 3546600
I have tried various manipulations of header=True and header=0. If I don't use header=0, then the columns will all print out on top of the rows like so:
zip weight
height locale
0 1417464 3546600
I have tried skiprows=0 and skiprows=1, but neither removes the headers; the option does skip the specified line, though.
I could really use some additional insight or a solution. Thanks in advance for any assistance you can provide.
Tiberius
Using the example from @jezrael, if you want to skip the header and suppress the column labeling:
import pandas as pd
import numpy as np
import io
temp=u"""date,color,id,zip,weight,height,locale
11/25/2013,Blue,122468,1417464,3546600,254,7"""
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), usecols=[4,5], header=None, skiprows=1)
print(df)
4 5
0 3546600 254
I'm not sure I entirely understand why you want to remove the headers, but you could comment out the header line as follows as long as you don't have any other rows that begin with 'd':
>>> df = pd.read_csv('test.csv', usecols=[3,4], header=None, comment='d') # comments out lines beginning with 'date,color' . . .
>>> df
3 4
0 1417464 3546600
It would be better to comment out the line in the csv file with the crosshatch character (#) and then use the same approach (again, as long as you have not commented out any other lines with a crosshatch):
>>> df = pd.read_csv('test.csv', usecols=[3,4], header=None, comment='#') # comments out lines with #
>>> df
3 4
0 1417464 3546600
I think you are right.
So you can change column names to a and b:
import pandas as pd
import numpy as np
import io
temp=u"""date,color,id,zip,weight,height,locale
11/25/2013,Blue,122468,1417464,3546600,254,7"""
#after testing, replace io.StringIO(temp) with the filename
df = pd.read_csv(io.StringIO(temp), usecols=[4,5], names=["a","b"], header=0, nrows=10)
print(df)
a b
0 3546600 254
Now these columns have new names instead of weight and height.
df = pd.read_csv(io.StringIO(temp), usecols=[4,5], header=0, nrows=10)
print(df)
weight height
0 3546600 254
You can check the read_csv docs:
header : int, list of ints, default ‘infer’
Row number(s) to use as the column names, and the start of the data. Defaults to 0 if no names passed, otherwise None. Explicitly pass header=0 to be able to replace existing names. The header can be a list of integers that specify row locations for a multi-index on the columns E.g. [0,1,3]. Intervening rows that are not specified will be skipped (e.g. 2 in this example are skipped). Note that this parameter ignores commented lines and empty lines if skip_blank_lines=True, so header=0 denotes the first line of data rather than the first line of the file.
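Putting that together, a minimal sketch of the two usual choices, assuming the same test_csv.csv as in the question: either drop the header row entirely (no column labels at all), or keep the header and simply select the columns you want by name.
import pandas as pd
# Option 1: skip the header row and fall back to integer column labels
df = pd.read_csv('c:/temp1/test_csv.csv', usecols=[4, 5], header=None, skiprows=1)
# Option 2: let pandas parse the header and pick the columns by name
df = pd.read_csv('c:/temp1/test_csv.csv', usecols=['weight', 'height'])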

python reading in multi-column tsv file with row numbers

What is the cleanest way of reading in a multi-column tsv file in python with headers, but where the first column has no header and instead contains the row numbers for each row?
This is apparently a common format from files coming from R data frames.
Example:
A B C
1 a1 b1 c1
2 a2 b2 c2
3 a3 b3 c3
Any ideas?
Depends on what you want to do with the data afterwards (and if the file is truly a tsv with a \t delimiter). If you just want it in a set of lists you can use the csv module like so:
import csv
with open("tsv.tsv") as tsvfile:
tsvreader = csv.reader(tsvfile, delimiter="\t")
for line in tsvreader:
print line[1:]
However, I'd also recommend pandas and its DataFrame for anything beyond simple Python operations. It can be used as such:
import pandas as pd
df = pd.read_csv("tsv.tsv", sep="\t")
DataFrames allow for high level manipulation of data sets such as adding columns, finding averages, etc..
df = DataFrame.from_csv("tsv.tsv", sep="\t") is deprecated since version 0.21.0
df = pd.read_csv("tsv.tsv", sep="\t") is the way to go
How about using the following native Python code:
with open('tsvfilename') as f:
    lines = f.read().split('\n')[:-1]
for i, line in enumerate(lines):
    if i == 0:  # header
        column_names = line.split()
        # ...
    else:
        data = line.split()
        # ...
Import the pandas library:
import pandas as pd
data = pd.read_csv('/ABC/DEF/TSV.tsv', sep='\t')
DataFrame.from_csv("tsv.tsv", sep="\t")
is not working anymore.
Use
df.read_csv("tsv.tsv", sep="\t")
pandas.read_csv("file.tsv")
DataFrame.from_csv() doesn't work. DataFrame.read_csv() isn't right.
