I am starting to work with pandas and I have run into a problem.
I am working with different CSVs which have the form:
10,152.46
12,124.67
11,150.1
20/21,126.7
37/38,128.8
39,6.19
40,6.8
35-36,9.7
27-31_32,11.3
To import it, I run the following, which works as expected:
experimental = pd.read_csv(csv_file, usecols=[0,1]).dropna()
0        10  152.46
1        12  124.67
2        11   150.1
3     20/21   126.7
4     37/38   128.8
5        39    6.19
6        40     6.8
7     35-36     9.7
8  27-31_32    11.3
Then, to easily combine it with other dataframes:
experimental = experimental.set_index(experimental.columns[0])
And here is where the problem starts. With some other files that look the same, there is no problem: the default numeric index is gone and the first column (10/12/11...) is set as the index.
This would be the expected result, the same as observed with other CSV files:
10    152.46
12    124.67
11     150.1
However, with others (like this one), I get this type of dataframe:
       152.46
10
12     124.67
11      150.1
...
I have tried reading the file as UTF-8 and adding a header to the CSV, without success.
Presented this way, other files that look the same work fine.
Thanks
You are not setting a correct index: experimental.columns[0] returns the name of the first column. To set the first column as the index, use
experimental.set_index(experimental.iloc[:, 0])
Alternatively, you can use index_col=0 in pd.read_csv to set the first column as the index upon reading:
experimental = pd.read_csv(csv_file, usecols=[0,1], index_col=0, header=None).dropna()
The header=None keyword indicates that your data does not have any column names and that the first row in your csv is the first data row.
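Since the stated goal is to combine this frame with others on the same index, here is a minimal sketch of how that could look once each file is read this way; the file names and the join call are illustrative assumptions, not part of the question:
import pandas as pd

# read each file so that the first column becomes the index straight away
experimental = pd.read_csv("experimental.csv", usecols=[0, 1], index_col=0, header=None).dropna()
reference = pd.read_csv("reference.csv", usecols=[0, 1], index_col=0, header=None).dropna()

# with matching indexes, the two single-column frames align directly
combined = experimental.join(reference, lsuffix="_exp", rsuffix="_ref")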
I have a large Excel file that I need to organize in a certain way (years of climate data). For the sake of understanding my problem, I made this simple Excel file for the question. The data looks similar to this:
(basically 4x4 data with an empty row between them) and I want to transform this data to look like:
(take each row of data, transpose it, and then add the second row to it with the NaN values) using pandas.
The problems I faced when reading the file using file = pd.read_csv("excel data.csv"):
my first row will be detected as a header.
the row that separates the data will be converted to NaN and will be confused with the actual NaN values in my data.
I tried different things, including reading/saving the file with no index (index=False). I also tried functions like file.iloc[0].values and file.shift(1), but I wasn't able to figure it out.
To summarize, I want to be able to read the file using pandas and then save it as one column that includes all the data, with no extra information or headers (sorry, I am new to pandas).
EDIT: This is how it looks in a Jupyter notebook.
For the first problem, header=None worked.
I tried file.stack(dropna=False).reset_index()[0], but the results stayed the same as in the picture.
If you pass header=None to the read_csv function, it will not detect the first row as a header, i.e. file = pd.read_csv("excel_data.csv", header=None)
For the second part, once you have the data into the dataframe, you could try this -
file.stack(dropna=False).reset_index()[0]
Trying to replicate the required results :
import numpy as np
import pandas as pd

df = pd.DataFrame({0:[5.0,54.0,3.0,9.0], 1:[6.0,12.0,6.0,12.0], 2:[9.0,76.0,np.nan,41.0], 3:[8.0,2.0,12.0,100.0]})
df.loc[4] = ['','','','']
    0   1    2    3
0   5   6    9    8
1  54  12   76    2
2   3   6  NaN   12
3   9  12   41  100
4
df = df.replace('',np.nan).dropna(how='all') #to remove blank rows
df.stack(dropna=False).reset_index()[0]
0 5.0
1 6.0
2 9.0
3 8.0
4 54.0
5 12.0
6 76.0
7 2.0
8 3.0
9 6.0
10 NaN
11 12.0
12 9.0
13 12.0
14 41.0
15 100.0
I wonder if pd.read_csv("excel data.csv", skip_blank_lines=True, header=None) will work?
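For what it's worth, a minimal sketch combining the two suggestions (header=None plus flattening); the file name comes from the question, but note that skip_blank_lines only skips lines that are completely empty, so a separator row exported as ",,," would still need the dropna step from the answer above:
import pandas as pd

# header=None keeps the first data row as data instead of promoting it to column names
file = pd.read_csv("excel data.csv", header=None, skip_blank_lines=True)

# drop rows that are entirely empty (covers ",,,"-style separator rows),
# then flatten row by row while keeping the genuine NaN values in the data
column = file.dropna(how='all').stack(dropna=False).reset_index(drop=True)
print(column)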
Let's say I have a text file that looks like this:
Item,Date,Time,Location
1,01/01/2016,13:41,[45.2344:-78.25453]
2,01/03/2016,19:11,[43.3423:-79.23423,41.2342:-81242]
3,01/10/2016,01:27,[51.2344:-86.24432]
What I'd like to be able to do is read that in with pandas.read_csv, but the second row will throw an error. Here is the code I'm currently using:
import pandas as pd
df = pd.read_csv("path/to/file.txt", sep=",", dtype=str)
I've tried to set quotechar to "[", but that obviously just eats up the lines until the next open bracket and adding a closing bracket results in a "string of length 2 found" error. Any insight would be greatly appreciated. Thanks!
Update
There were three primary solutions offered: 1) Give a long range of names to the data frame to allow all the data to be read in and then post-process the data, 2) Find values in square brackets and put quotes around them, or 3) Replace the first n commas with semicolons.
Overall, I don't think option 3 is a viable solution in general (albeit just fine for my data) because a) what if I have quoted values in one column that contain commas, and b) what if my column with square brackets is not the last column? That leaves solutions 1 and 2. I think solution 2 is more readable, but solution 1 was more efficient, running in just 1.38 seconds, compared to solution 2, which ran in 3.02 seconds. The tests were run on a text file containing 18 columns and more than 208,000 rows.
We can use a simple trick - quote the balanced square brackets with double quotes:
import re
import six
import pandas as pd
data = """\
Item,Date,Time,Location,junk
1,01/01/2016,13:41,[45.2344:-78.25453],[aaaa,bbb]
2,01/03/2016,19:11,[43.3423:-79.23423,41.2342:-81242],[0,1,2,3]
3,01/10/2016,01:27,[51.2344:-86.24432],[12,13]
4,01/30/2016,05:55,[51.2344:-86.24432,41.2342:-81242,55.5555:-81242],[45,55,65]"""
print('{0:-^70}'.format('original data'))
print(data)
data = re.sub(r'(\[[^\]]*\])', r'"\1"', data, flags=re.M)
print('{0:-^70}'.format('quoted data'))
print(data)
df = pd.read_csv(six.StringIO(data))
print('{0:-^70}'.format('data frame'))
pd.set_option('display.expand_frame_repr', False)
print(df)
Output:
----------------------------original data-----------------------------
Item,Date,Time,Location,junk
1,01/01/2016,13:41,[45.2344:-78.25453],[aaaa,bbb]
2,01/03/2016,19:11,[43.3423:-79.23423,41.2342:-81242],[0,1,2,3]
3,01/10/2016,01:27,[51.2344:-86.24432],[12,13]
4,01/30/2016,05:55,[51.2344:-86.24432,41.2342:-81242,55.5555:-81242],[45,55,65]
-----------------------------quoted data------------------------------
Item,Date,Time,Location,junk
1,01/01/2016,13:41,"[45.2344:-78.25453]","[aaaa,bbb]"
2,01/03/2016,19:11,"[43.3423:-79.23423,41.2342:-81242]","[0,1,2,3]"
3,01/10/2016,01:27,"[51.2344:-86.24432]","[12,13]"
4,01/30/2016,05:55,"[51.2344:-86.24432,41.2342:-81242,55.5555:-81242]","[45,55,65]"
------------------------------data frame------------------------------
Item Date Time Location junk
0 1 01/01/2016 13:41 [45.2344:-78.25453] [aaaa,bbb]
1 2 01/03/2016 19:11 [43.3423:-79.23423,41.2342:-81242] [0,1,2,3]
2 3 01/10/2016 01:27 [51.2344:-86.24432] [12,13]
3 4 01/30/2016 05:55 [51.2344:-86.24432,41.2342:-81242,55.5555:-81242] [45,55,65]
UPDATE: if you are sure that all square brackets are balanced, we don't have to use regexes:
import io
import pandas as pd
with open('35948417.csv', 'r') as f:
    fo = io.StringIO()
    data = f.readlines()
    fo.writelines(line.replace('[', '"[').replace(']', ']"') for line in data)
    fo.seek(0)

df = pd.read_csv(fo)
print(df)
I can't think of a way to trick the CSV parser into accepting distinct open/close quote characters, but you can get away with a pretty simple preprocessing step:
import pandas as pd
import io
import re
# regular expression to capture contents of balanced brackets
location_regex = re.compile(r'\[([^\[\]]+)\]')
with open('path/to/file.txt', 'r') as fi:
    # replace brackets with quotes, pipe into file-like object
    fo = io.StringIO()
    fo.writelines(unicode(re.sub(location_regex, r'"\1"', line)) for line in fi)
    # rewind file to the beginning
    fo.seek(0)
    # read transformed CSV into data frame
    df = pd.read_csv(fo)

print df
This gives you a result like
Date_Time Item Location
0 2016-01-01 13:41:00 1 [45.2344:-78.25453]
1 2016-01-03 19:11:00 2 [43.3423:-79.23423, 41.2342:-81242]
2 2016-01-10 01:27:00 3 [51.2344:-86.24432]
Edit: If memory is not an issue, then you are better off preprocessing the data in bulk rather than line by line, as is done in Max's answer.
# regular expression to capture contents of balanced brackets
location_regex = re.compile(r'\[([^\[\]]+)\]', flags=re.M)
with open('path/to/file.csv', 'r') as fi:
    data = unicode(re.sub(location_regex, r'"\1"', fi.read()))

df = pd.read_csv(io.StringIO(data))
If you know ahead of time that the only brackets in the document are those surrounding the location coordinates, and that they are guaranteed to be balanced, then you can simplify it even further (Max suggests a line-by-line version of this, but I think the iteration is unnecessary):
with open('/path/to/file.csv', 'r') as fi:
    data = unicode(fi.read().replace('[', '"').replace(']', '"'))

df = pd.read_csv(io.StringIO(data))
Below are the timing results I got with a 200k-row by 3-column dataset. Each time is averaged over 10 trials.
data frame post-processing (jezrael's solution): 2.19s
line by line regex: 1.36s
bulk regex: 0.39s
bulk string replace: 0.14s
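For reference, a rough sketch of how timings like these could be reproduced with the standard timeit module; the file path, the run count, and the choice of the bulk-regex variant are assumptions, not the exact harness used above:
import io
import re
import timeit

import pandas as pd

# regex used by the bulk preprocessing variant above
location_regex = re.compile(r'\[([^\[\]]+)\]')

def bulk_regex():
    # quote the bracketed fields in one pass, then parse normally
    with open('path/to/file.csv', 'r') as fi:
        data = re.sub(location_regex, r'"\1"', fi.read())
    return pd.read_csv(io.StringIO(data))

# total time for 10 runs divided by 10, matching the "averaged over 10 trials" setup
print(timeit.timeit(bulk_regex, number=10) / 10)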
I think you can replace the first 3 occurrences of , in each line of the file with ; and then use the parameter sep=";" in read_csv:
import pandas as pd
import io
with open('file2.csv', 'r') as f:
    lines = f.readlines()
fo = io.StringIO()
fo.writelines(u"" + line.replace(',',';', 3) for line in lines)
fo.seek(0)
df = pd.read_csv(fo, sep=';')
print df
Item Date Time Location
0 1 01/01/2016 13:41 [45.2344:-78.25453]
1 2 01/03/2016 19:11 [43.3423:-79.23423,41.2342:-81242]
2 3 01/10/2016 01:27 [51.2344:-86.24432]
Or you can try this more complicated approach, because the main problem is that the separator , between values in the lists is the same as the separator between the other column values.
So you need post-processing:
import pandas as pd
import io
temp=u"""Item,Date,Time,Location
1,01/01/2016,13:41,[45.2344:-78.25453]
2,01/03/2016,19:11,[43.3423:-79.23423,41.2342:-81242,41.2342:-81242]
3,01/10/2016,01:27,[51.2344:-86.24432]"""
#after testing, replace io.StringIO(temp) with the filename
#estimated max number of columns
df = pd.read_csv(io.StringIO(temp), names=range(10))
print df
0 1 2 3 4 \
0 Item Date Time Location NaN
1 1 01/01/2016 13:41 [45.2344:-78.25453] NaN
2 2 01/03/2016 19:11 [43.3423:-79.23423 41.2342:-81242
3 3 01/10/2016 01:27 [51.2344:-86.24432] NaN
5 6 7 8 9
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
2 41.2342:-81242] NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN
#remove column with all NaN
df = df.dropna(how='all', axis=1)
#use the first row as column names
df.columns = df.iloc[0,:]
#remove first row
df = df[1:]
#remove columns name
df.columns.name = None
#get position of column Location
print df.columns.get_loc('Location')
3
#df1 with Location values
df1 = df.iloc[:, df.columns.get_loc('Location'): ]
print df1
Location NaN NaN
1 [45.2344:-78.25453] NaN NaN
2 [43.3423:-79.23423 41.2342:-81242 41.2342:-81242]
3 [51.2344:-86.24432] NaN NaN
#combine the values into one column
df['Location'] = df1.apply( lambda x : ', '.join([e for e in x if isinstance(e, basestring)]), axis=1)
#subset of desired columns
print df[['Item','Date','Time','Location']]
Item Date Time Location
1 1 01/01/2016 13:41 [45.2344:-78.25453]
2 2 01/03/2016 19:11 [43.3423:-79.23423, 41.2342:-81242, 41.2342:-8...
3 3 01/10/2016 01:27 [51.2344:-86.24432]
Hoping this is an allowable SO question, but I'm looking for advice on how to convert the code below, which processes lines in a file to produce a dataframe, into one that uses generators and yield, because this implementation using a list and append is far too slow.
Here is the solution I came up with, but I was really hoping to avoid the very slow list and append operations. I was hoping for a cool generator-and-yield solution instead, but I'm not comfortable enough yet working with generators.
Sample lines in file:
"USNC3255","27","US","NC","LANDS END","72305006","KNJM","KNCA","KNKT","T72305006","","","NCC031","NCZ095","","545","28594","America/New_York","34.65266","-77.07661","7","RDU","893727","
"USNC3256","27","US","NC","LANDSDOWN","72314058","KEHO","KAKH","KIPJ","T72314058","","","NCC045","NCZ068","sc007","517","28150","America/New_York","35.29374","-81.46537","797","CLT","317845","
Current Solution:
import csv
import pandas as pd
from numpy import nan

def parse_file(filename):
    newline = []
    with open(filename, 'rb') as f:
        reader = csv.reader(f, quoting=csv.QUOTE_NONE)
        for row in reader:
            newline.append([s.strip('"') for s in row[:-1]])
    df = pd.DataFrame(newline)
    df = df.applymap(lambda x: nan if len(x) == 0 else x).astype(object)
    return df

df = parse_file(filename)
Output is just a dataframe with 23 columns and two rows if used against the sample lines above.
The only problem with your file is that each line ends with ,". This confuses the parser. If you can remove the trailing comma and quotation mark, you can use the regular parser.
import pandas as pd
from StringIO import StringIO
with open('example.txt') as myfile:
    data = myfile.read().replace(',"\n', '\n')

pd.read_csv(StringIO(data), header=None)
This is what I get:
0 1 2 3 4 5 6 7 8 9 \
0 USNC3255 27 US NC LANDS END 72305006 KNJM KNCA KNKT T72305006
1 USNC3256 27 US NC LANDSDOWN 72314058 KEHO KAKH KIPJ T72314058
... 13 14 15 16 17 18 19 \
0 ... NCZ095 NaN 545 28594 America/New_York 34.65266 -77.07661
1 ... NCZ068 sc007 517 28150 America/New_York 35.29374 -81.46537
20 21 22
0 7 RDU 893727
1 797 CLT 317845
[2 rows x 23 columns]