I am starting to work with pandas and I have encountered a problem.
I am working with different CSV files which have the form:
10,152.46
12,124.67
11,150.1
20/21,126.7
37/38,128.8
39,6.19
40,6.8
35-36,9.7
27-31_32,11.3
To import it, I run:
experimental = pd.read_csv(csv_file, usecols=[0,1]).dropna()
This works as expected:
0        10  152.46
1        12  124.67
2        11  150.1
3     20/21  126.7
4     37/38  128.8
5        39  6.19
6        40  6.8
7     35-36  9.7
8  27-31_32  11.3
Then, to easily combine it with other dataframes, I run:
experimental = experimental.set_index(experimental.columns[0])
And here is where the problem starts. With some other files that look the same, there is no problem: the default integer index is gone and the first column (10, 12, 11, ...) is set as the index.
This would be the expected result, the same as observed with other CSV files:
10  152.46
12  124.67
11  150.1
However, with others (like this one), I get this type of df:
    152.46
10
12  124.67
11  150.1
...
I have tried reading the file as utf-8 and adding a header to the csv, without success.
Other files that look the same and are presented in the same way work fine.
Thanks
You are not setting a correct index: experimental.columns[0] returns the name of the first column. To set the first column as the index, use
experimental = experimental.set_index(experimental.iloc[:, 0])
Alternatively, you can use index_col=0 in pd.read_csv to set the first column as the index upon reading:
experimental = pd.read_csv(csv_file, usecols=[0,1], index_col=0, header=None).dropna()
The header=None keyword indicates that your data does not have any column names and that the first row in your csv is the first data row.
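With the index in place, combining with another dataframe is then just a join on the shared index. A minimal sketch, where other_csv_file is a hypothetical second file read the same way:
import pandas as pd

experimental = pd.read_csv(csv_file, usecols=[0, 1], index_col=0, header=None).dropna()
other = pd.read_csv(other_csv_file, usecols=[0, 1], index_col=0, header=None).dropna()
# align the two frames on the shared index values (10, 12, 11, 20/21, ...)
combined = experimental.join(other, lsuffix='_exp', rsuffix='_other')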
I have a small problem with reading the data from this source correctly. I tried to write:
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_table(path)
And then I got something strange.
Then I wrote:
df = pd.read_table(path, sep=',', header=None)
and got an error: ParserError: Error tokenizing data. C error: Expected 1 fields in line 4, saw 19
Could you, please, help me to find the solution?
The file is basically a csv file so you can use read_csv. Use it in combination with skiprows=2 to skip the first non-relevant rows of the file.
import pandas as pd
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_csv(path, skiprows=2, index_col=False)
Output of df.head():
  REGION-CENTROID-COL  REGION-CENTROID-ROW  REGION-PIXEL-COUNT  SHORT-LINE-DENSITY-5  SHORT-LINE-DENSITY-2  \
0           BRICKFACE                  140                 125                     9                     0
1           BRICKFACE                  188                 133                     9                     0
2           BRICKFACE                  105                 139                     9                     0
3           BRICKFACE                   34                 137                     9                     0
4           BRICKFACE                   39                 111                     9                     0

   VEDGE-MEAN  VEDGE-SD  HEDGE-MEAN  HEDGE-SD  INTENSITY-MEAN  \
0           0  0.277778    0.062963  0.666667        0.311111
1           0  0.333333    0.266667       0.5       0.0777777
2           0  0.277778    0.107407  0.833333        0.522222
3           0       0.5    0.166667   1.11111        0.474074
4           0  0.722222    0.374074  0.888889        0.429629

   RAWRED-MEAN  RAWBLUE-MEAN  RAWGREEN-MEAN  EXRED-MEAN  EXBLUE-MEAN  \
0      6.18518       7.33333        7.66667     3.55556      3.44444
1      6.66667       8.33333        7.77778     3.88889            5
2      6.11111       7.55556        7.22222     3.55556      4.33333
3      5.85185       7.77778        6.44444     3.33333      5.77778
4      6.03704             7        7.66667     3.44444      2.88889

   EXGREEN-MEAN  VALUE-MEAN  SATURATION-MEAN  HUE-MEAN
0       4.44444    -7.88889          7.77778  0.545635
1       3.33333    -8.33333          8.44444   0.53858
2       3.33333    -7.66667          7.55556  0.532628
3       1.77778    -7.55556          7.77778  0.573633
4       4.88889    -7.77778          7.88889  0.562919
Can you try giving the encoding like this:
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_csv(path, encoding='utf8')
If it does not work, can you try other encodings?
The problem seems to be that the data file contains some meta information that Pandas cannot parse. You need to convert your file to a CSV before it can be read by pandas.
To do this, first download the file to your local machine at some location filepath and remove the lines starting with ;;; as well as the empty lines. Then running pd.read_table(filepath, sep='\t') or pd.read_csv(filepath) should work as expected.
Note that the header argument does not refer to any generic header information that the file may contain. header tells pandas which row of the CSV contains the column names (e.g. header=0 for the first line) or, with header=None, that the file has no column names and the actual data starts on the first line.
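If you prefer not to edit the file by hand, here is a minimal sketch of the same clean-up done in Python (assuming filepath points at the downloaded file): filter out the ;;; comment lines and the empty lines, then hand the remaining text to pandas.
import io
import pandas as pd

with open(filepath) as fh:
    lines = [line for line in fh if line.strip() and not line.startswith(';;;')]
df = pd.read_csv(io.StringIO(''.join(lines)))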
I have a large Excel file that I need to organize in a certain way (years of climate data). For the sake of understanding my problem, I made this simple Excel file for the question. The data looks similar to this:
(basically 4x4 data with an empty row between them) and I want to transform this data to look like:
(take each row of data, transpose it, and then add the second row to it with the NaN values) using pandas.
The problems that I faced when reading the file using file = pd.read_csv("excel data.csv"):
my first row will be detected as a header.
the rows that separate the data will be converted to NaN and will be confused with the actual NaN in my data.
I tried different approaches, including reading/saving the file with no index (index=False). I also tried functions like file.iloc[0].values and file.shift(1), but I wasn't able to figure it out.
To summarize, I want to be able to read the file using pandas and then save it as 1 column that includes all the data, with no extra information or headers (sorry, but I am new to pandas).
EDIT: This is how it looks in jupyter notebook.
For the first problem, header=None worked.
I tried file.stack(dropna=False).reset_index()[0] but the results stayed the same as in the picture.
If you pass header = None in the read_csv function, it will not detect the first row as header, i.e. file = pd.read_csv("excel_data.csv", header=None)
For the second part, once you have the data into the dataframe, you could try this -
file.stack(dropna=False).reset_index()[0]
Trying to replicate the required results:
import numpy as np
import pandas as pd

df = pd.DataFrame({0:[5.0,54.0,3.0,9.0], 1:[6.0,12.0,6.0,12.0], 2:[9.0,76.0,np.nan,41.0], 3:[8.0,2.0,12.0,100.0]})
df.loc[4] = ['','','','']
    0   1    2    3
0   5   6    9    8
1  54  12   76    2
2   3   6  NaN   12
3   9  12   41  100
4
df = df.replace('',np.nan).dropna(how='all') #to remove blank rows
df.stack(dropna=False).reset_index()[0]
0 5.0
1 6.0
2 9.0
3 8.0
4 54.0
5 12.0
6 76.0
7 2.0
8 3.0
9 6.0
10 NaN
11 12.0
12 9.0
13 12.0
14 41.0
15 100.0
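Putting both parts together on the original file, a minimal end-to-end sketch (assuming, as in the question, that the separator rows are the only completely empty rows) would be:
import pandas as pd

file = pd.read_csv('excel data.csv', header=None)          # do not treat the first row as a header
file = file.dropna(how='all')                              # drop the blank separator rows only
result = file.stack(dropna=False).reset_index(drop=True)   # one column, genuine NaNs preserved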
I wonder if pd.read_csv("excel data.csv", skip_blank_lines=True, header=None) will work?
I have a large amount of data (93 files, ~150 MB each). The data is a time series, i.e., information about a given set of coordinates (3.3 million latitude-longitude values) is recorded every day for 93 days and split across 93 files, one per day. Example of two such files:
Day 1:
lon lat A B day1
68.4 8.4 NaN 20 20
68.4 8.5 16 20 18
68.6 8.4 NaN NaN NaN
.
.
Day 2:
lon lat C D day2
68.4 8.4 NaN NaN NaN
68.4 8.5 24 25 24.5
68.6 8.4 NaN NaN NaN
.
.
I am interested in understanding the nature of the missing data in the columns 'day1', 'day2', 'day3', etc. For example, if the missing values in these columns are evenly distributed over the whole set of coordinates, then the data is probably missing at random; but if the missing values are concentrated in a particular set of coordinates, then my data will be biased. Also consider that the way my data is divided into multiple large files, and the fact that it isn't in a very standard form, makes it harder to use some tools.
I am looking for a diagnostic tool or visualization in python that can check/show how the missing data is distributed over the set of coordinates so I can impute/ignore it appropriately.
Thanks.
P.S.: This is the first time I am handling missing data, so it would be great to learn whether there is a workflow that people who do similar work follow.
Assuming that you have read a file and named it df, you can count the number of NaNs using:
df.isnull().sum()
It will return the number of NaNs per column.
You could also use:
df.isnull().sum(axis=1).value_counts()
This, on the other hand, will sum the number of NaNs per row and then count how many rows have no NaNs, 1 NaN, 2 NaNs, and so on.
Regarding working with files of this size: to speed up loading and processing the data, I recommend using Dask and changing the format of your files, preferably to Parquet, so that you can read and write to them in parallel.
You could easily recreate the function above in Dask like this:
from dask import dataframe as dd
dd.read_parquet(file_path).isnull().sum().compute()
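If the 93 daily files are currently CSVs, a one-off conversion could look like the sketch below (the file names are assumptions; adjust them to your naming scheme):
import dask.dataframe as dd

# convert each daily CSV to Parquet once; afterwards the Parquet data can be read and processed in parallel
for i in range(1, 94):
    dd.read_csv(f'day_{i}.csv').to_parquet(f'parquet/day_{i}')

dd.read_parquet('parquet/day_1').isnull().sum().compute()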
Answering the comment question:
Use .loc to slice your dataframe; in the code below I select all rows (:) and two columns (['col1', 'col2']).
df.loc[:, ['col1', 'col2']].isnull().sum(axis=1).value_counts()
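With the column names from the example files in the question (and assuming the daily frames have been merged into df on lon/lat), this could look like:
df.loc[:, ['day1', 'day2']].isnull().sum(axis=1).value_counts()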
I have a Pandas dataframe that contains a column of float64 values:
import numpy as np
import pandas as pd
tempDF = pd.DataFrame({ 'id': [12,12,12,12,45,45,45,51,51,51,51,51,51,76,76,76,91,91,91,91],
                        'measure': [3.2,4.2,6.8,5.6,3.1,4.8,8.8,3.0,1.9,2.1,2.4,3.5,4.2,5.2,4.3,3.6,5.2,7.1,6.5,7.3]})
I want to create a new column containing just the integer part. My first thought was to use .astype(int):
tempDF['int_measure'] = tempDF['measure'].astype(int)
This works fine but, as an extra complication, the column I have contains a missing value:
tempDF.loc[10,'measure'] = np.nan
This missing value causes the .astype(int) method to fail with:
ValueError: Cannot convert NA to integer
I thought I could round down the floats in the column of data. However, the .round(0) function will round to the nearest integer (higher or lower) rather than rounding down. I can't find a function equivalent to ".floor()" that will act on a column of a Pandas dataframe.
Any suggestions?
You could just apply numpy.floor:
import numpy as np
tempDF['int_measure'] = tempDF['measure'].apply(np.floor)
    id  measure  int_measure
0   12      3.2            3
1   12      4.2            4
2   12      6.8            6
...
9   51      2.1            2
10  51      NaN          NaN
11  51      3.5            3
...
19  91      7.3            7
You could also try:
df.apply(lambda s: s // 1)
Using np.floor is faster, however.
The answers here are pretty dated; as of pandas 0.25.2 (and perhaps earlier), converting a column in place like this can raise the warning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
Following that advice, the conversion would be
df.iloc[:,0] = df.iloc[:,0].astype(int)
for one particular column.
I'm attempting to read in a flat-file to a DataFrame using pandas but can't seem to get the format right. My file has a variable number of fields represented per line and looks like this:
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOCinpt|MIME=application/synthesis+ssml|TXID=NUAN-20131203004552049-FCJNJKDCAAANPCKEAAAAAAAA-txt|TXSZ=1167|UCPU=31|SCPU=15
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOCsynd|INPT=1167|DURS=5120|RSTT=stop|UCPU=31|SCPU=15
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOClise|LUSED=0|LMAX=100|OMAX=95|LFEAT=tts|UCPU=0|SCPU=0
I have the field separator at |, I've pulled a list of all unique keys into keylist, and am trying to use the following to read in the data:
keylist = ['TIME',
'CHAN',
# [truncated]
'DURS',
'RSTT']
test_fp = 'c:\\temp\\test_output.txt'
df = pd.read_csv(test_fp, sep='|', names=keylist)
This incorrectly builds the DataFrame as I'm not specifying any way to recognize the key label in the line. I'm a little stuck and am not sure which way to research -- should I be using .read_json() for example?
Not sure if there's a slick way to do this. Sometimes when the data structure is different enough from the norm it's easiest to preprocess it on the Python side. Sure, it's not as fast, but since you could immediately save it in a more standard format it's usually not worth worrying about.
One way:
with open("wfield.txt") as fp:
rows = (dict(entry.split("=",1) for entry in row.strip().split("|")) for row in fp)
df = pd.DataFrame.from_dict(rows)
which produces
>>> df
CHAN DURS EVNT INPT LFEAT LMAX LUSED \
0 FCJNJKDCAAANPCKEAAAAAAAA NaN NVOCinpt NaN NaN NaN NaN
1 FCJNJKDCAAANPCKEAAAAAAAA 5120 NVOCsynd 1167 NaN NaN NaN
2 FCJNJKDCAAANPCKEAAAAAAAA NaN NVOClise NaN tts 100 0
MIME OMAX RSTT SCPU TIME \
0 application/synthesis+ssml NaN NaN 15 20131203004552049
1 NaN NaN stop 15 20131203004552049
2 NaN 95 NaN 0 20131203004552049
TXID TXSZ UCPU
0 NUAN-20131203004552049-FCJNJKDCAAANPCKEAAAAAAA... 1167 31
1 NaN NaN 31
2 NaN NaN 0
[3 rows x 15 columns]
After you've got this, you can reshape as needed. (I'm not sure if you wanted to combine rows with the same TIME & CHAN or not.)
Edit: if you're using an older version of pandas which doesn't support passing a generator to from_dict, you can build it from a list instead:
df = pd.DataFrame(list(rows))
but note that you will have to convert the columns from strings to numerical columns after the fact.
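A minimal sketch of that post-hoc conversion: attempt to parse every column as numbers, and keep the conversion only for columns where no non-missing value fails to parse (so identifiers like CHAN or EVNT stay strings).
import pandas as pd

for col in df.columns:
    converted = pd.to_numeric(df[col], errors='coerce')
    # keep the numeric version only if nothing that was present got turned into NaN
    if converted.notna().equals(df[col].notna()):
        df[col] = converted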