I have a large Excel file (years of climate data) that I need to organize in a certain way. For the sake of explaining my problem, I made this simple Excel file for the question. The data looks similar to this:
(basically 4x4 blocks of data with an empty row between them) and I want to transform the data to look like this:
(take each row of data, transpose it, and then append the next row to it, keeping the NaN values) using pandas.
The problems I faced when reading the file using file = pd.read_csv("excel data.csv"):
my first row is detected as a header.
the blank row that separates the blocks is converted to NaN and gets confused with the actual NaN values in my data.
I tried different approaches, including reading/saving the file with no index (index=False); I also tried file.iloc[0].values and file.shift(1), but I wasn't able to figure it out.
To summarize: I want to read the file using pandas and then save it as one column that includes all the data, with no extra information or headers (sorry, I am new to pandas).
EDIT: This is how it looks in a Jupyter notebook.
For the first problem, header=None worked.
I tried file.stack(dropna=False).reset_index()[0], but the results stayed the same as in the picture.
If you pass header=None to the read_csv function, it will not detect the first row as a header, i.e. file = pd.read_csv("excel_data.csv", header=None)
For the second part, once you have the data in the DataFrame, you could try this -
file.stack(dropna=False).reset_index()[0]
Trying to replicate the required results:
import numpy as np
import pandas as pd

df = pd.DataFrame({0:[5.0,54.0,3.0,9.0], 1:[6.0,12.0,6.0,12.0], 2:[9.0,76.0,np.nan,41.0], 3:[8.0,2.0,12.0,100.0]})
df.loc[4] = ['','','','']
    0   1    2    3
0   5   6    9    8
1  54  12   76    2
2   3   6  NaN   12
3   9  12   41  100
4
df = df.replace('',np.nan).dropna(how='all') #to remove blank rows
df.stack(dropna=False).reset_index()[0]
0 5.0
1 6.0
2 9.0
3 8.0
4 54.0
5 12.0
6 76.0
7 2.0
8 3.0
9 6.0
10 NaN
11 12.0
12 9.0
13 12.0
14 41.0
15 100.0
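Putting the pieces together, a minimal end-to-end sketch (the file name comes from the question; dropna(how='all') assumes none of your real data rows are entirely NaN):
import pandas as pd

# read without treating the first row as a header
file = pd.read_csv("excel data.csv", header=None)
# drop the all-NaN separator rows, then flatten row by row into one column
column = file.dropna(how='all').stack(dropna=False).reset_index(drop=True)
# save as a single column with no index or header
column.to_csv("output.csv", index=False, header=False)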
I wonder if pd.read_csv("excel data.csv", skip_blank_lines=True, header=None) will work?
I have a small problem with reading the data from this source correctly. I tried to write:
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_table(path)
And then I got something strange.
Then I wrote:
df = pd.read_table(path, sep=',', header=None)
and got an error: ParserError: Error tokenizing data. C error: Expected 1 fields in line 4, saw 19
Could you, please, help me to find the solution?
The file is basically a csv file so you can use read_csv. Use it in combination with skiprows=2 to skip the first non-relevant rows of the file.
import pandas as pd
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_csv(path, skiprows=2, index_col=False)
Output df.head():
  REGION-CENTROID-COL  REGION-CENTROID-ROW  REGION-PIXEL-COUNT  SHORT-LINE-DENSITY-5  SHORT-LINE-DENSITY-2  \
0           BRICKFACE                  140                 125                     9                     0
1           BRICKFACE                  188                 133                     9                     0
2           BRICKFACE                  105                 139                     9                     0
3           BRICKFACE                   34                 137                     9                     0
4           BRICKFACE                   39                 111                     9                     0

   VEDGE-MEAN  VEDGE-SD  HEDGE-MEAN  HEDGE-SD  INTENSITY-MEAN  \
0           0  0.277778    0.062963  0.666667        0.311111
1           0  0.333333    0.266667       0.5       0.0777777
2           0  0.277778    0.107407  0.833333        0.522222
3           0       0.5    0.166667   1.11111        0.474074
4           0  0.722222    0.374074  0.888889        0.429629

   RAWRED-MEAN  RAWBLUE-MEAN  RAWGREEN-MEAN  EXRED-MEAN  EXBLUE-MEAN  \
0      6.18518       7.33333        7.66667     3.55556      3.44444
1      6.66667       8.33333        7.77778     3.88889            5
2      6.11111       7.55556        7.22222     3.55556      4.33333
3      5.85185       7.77778        6.44444     3.33333      5.77778
4      6.03704             7        7.66667     3.44444      2.88889

   EXGREEN-MEAN  VALUE-MEAN  SATURATION-MEAN  HUE-MEAN
0       4.44444    -7.88889          7.77778  0.545635
1       3.33333    -8.33333          8.44444   0.53858
2       3.33333    -7.66667          7.55556  0.532628
3       1.77778    -7.55556          7.77778  0.573633
4       4.88889    -7.77778          7.88889  0.562919
Can you try specifying an encoding, like this:
path = 'http://archive.ics.uci.edu/ml/machine-learning-databases/image/segmentation.data'
df = pd.read_csv(path, encoding='utf8')
If that does not work, can you try other encodings?
The problem seems to be that the data file contains some meta information that pandas cannot parse. You need to convert your file to a CSV before it can be read by pandas.
To do this, first download the file to your local machine at some location filepath, and remove the lines starting with ;;; as well as the empty lines. Then running pd.read_table(filepath, sep='\t') or pd.read_csv(filepath) should work as expected.
Note that the header argument does not refer to any generic header information that the file may contain. header lets pandas know whether the first line of your CSV contains the column names (header=0, the default when no names are passed) or whether the actual data starts on the first line (header=None).
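To make that concrete, a tiny illustration (the two-line CSV here is made up):
import io
import pandas as pd

text = "a,b\n1,2\n"
# default: the first line is parsed as the column names
print(pd.read_csv(io.StringIO(text)).columns.tolist())               # ['a', 'b']
# header=None: columns are numbered, and 'a','b' become a data row
print(pd.read_csv(io.StringIO(text), header=None).iloc[0].tolist())  # ['a', 'b']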
I am starting to work with pandas and I have encountered a problem.
I am working with different csvs which have this form:
10,152.46
12,124.67
11,150.1
20/21,126.7
37/38,128.8
39,6.19
40,6.8
35-36,9.7
27-31_32,11.3
To import it, I run:
experimental = pd.read_csv(csv_file, usecols=[0,1]).dropna()  # works as expected
0        10  152.46
1        12  124.67
2        11   150.1
3     20/21   126.7
4     37/38   128.8
5        39    6.19
6        40     6.8
7     35-36     9.7
8  27-31_32    11.3
Then, to easily combine it with other DataFrames:
experimental = experimental.set_index(experimental.columns[0])
And here is where the problem starts. With some other files that look the same there is no problem: the default index is gone and the first column (10/12/11...) is set as the index.
This would be the expected result, the same as observed with the other csv files:
10  152.46
12  124.67
11   150.1
However, with others (like this one), I get this type of df:
    152.46
10
12  124.67
11   150.1
...
I have tried reading as utf-8 and adding a header to the csv, without success.
Other files that look the same work when handled exactly this way.
Thanks
You are not setting a correct index: experimental.columns[0] returns the name of the first column, not its values. To set the first column as the index, use
experimental = experimental.set_index(experimental.iloc[:, 0])
Alternatively, you can pass index_col=0 to pd.read_csv to set the first column as the index upon reading:
experimental = pd.read_csv(csv_file, usecols=[0,1], index_col=0, header=None).dropna()
The header=None keyword indicates that your data does not have any column names, and that the first row in your csv is the first data row.
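For example, under the assumption that the csv has no header row, the index_col route looks like this (io.StringIO stands in for the file):
import io
import pandas as pd

csv_file = io.StringIO("10,152.46\n12,124.67\n20/21,126.7\n")
experimental = pd.read_csv(csv_file, usecols=[0, 1], index_col=0, header=None).dropna()
print(experimental.index.tolist())  # ['10', '12', '20/21'] -- the first column is now the index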
Currently I'm working on live-timing software for a motorsport application. For this I have to crawl a live-timing webpage and copy the data into a big DataFrame, which is the source of several diagrams I want to make. To keep my DataFrame up to date, I have to crawl the webpage very often.
I can download the data and save it as a pandas DataFrame. But my problem is the step from the downloaded DataFrame to the big DataFrame that includes all the data.
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'],
                    'Nr.':['13','700','30','55','24','985'],
                    'Zeit':['1:30,000','1:45,000','1:50,000','1:25,333','1:13,366','1:17,000'],
                    'Laps':['1','1','1','1','1','1']})
df2 = pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'],
                    'Nr.':['13','700','30','55','24','985'],
                    'Zeit':[np.nan,np.nan,np.nan,np.nan,np.nan,np.nan],
                    'Laps':['2','2','2','2','2','2']})
df3 = pd.DataFrame({'Pos':[1,2,3,4,5,6],'CLS':['V5','V5','V5','V4','V4','V4'],
                    'Nr.':['13','700','30','55','24','985'],
                    'Zeit':['1:31,000','1:41,000','1:51,000','1:21,333','1:11,366','1:11,000'],
                    'Laps':['2','2','2','2','2','2']})
df1.set_index(['CLS','Nr.','Laps'], inplace=True)
df2.set_index(['CLS','Nr.','Laps'], inplace=True)
df3.set_index(['CLS','Nr.','Laps'], inplace=True)
df1 shows a DataFrame from previous laps.
df2 shows a DataFrame in the second lap. The lap is not completed, so I have a NaN.
df3 shows a DataFrame after the second lap is completed.
My target is to have just one row for each lap per car per class.
Either I have the problem that I get duplicates from incomplete laps, or all data gets overwritten.
I hope that someone can help me with this problem.
Thank you so far.
MrCrunsh
If I understand your problem correctly, your issue is that you have overlapping data for the second lap: information while the lap is still in progress and information after it's over. If you want to put all the information for a given lap in one row, I'd suggest using multi-index columns or changing the column names to reflect the difference between measurements during and after laps.
df = pd.concat([df1, df3])
df = pd.concat([df, df2], axis=1, keys=['after', 'during'])
The result will look like this:
             after           during
               Pos      Zeit    Pos Zeit
CLS Nr. Laps
V4  24  1        5  1:13,366    NaN  NaN
        2        5  1:11,366    5.0  NaN
    55  1        4  1:25,333    NaN  NaN
        2        4  1:21,333    4.0  NaN
    985 1        6  1:17,000    NaN  NaN
        2        6  1:11,000    6.0  NaN
V5  13  1        1  1:30,000    NaN  NaN
        2        1  1:31,000    1.0  NaN
    30  1        3  1:50,000    NaN  NaN
        2        3  1:51,000    3.0  NaN
    700 1        2  1:45,000    NaN  NaN
        2        2  1:41,000    2.0  NaN
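If the goal is instead a single row per lap, where the completed lap's data overwrites the in-progress NaNs, combine_first is one option (a sketch reusing the frames defined above):
big = pd.concat([df1, df2])   # previous laps plus the lap still in progress
big = df3.combine_first(big)  # completed lap 2 fills in where 'big' has NaN
Values from df3 take precedence wherever both frames share the same (CLS, Nr., Laps) index entry, so each lap ends up as exactly one row.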
I have a pandas DataFrame containing EOD financial data (OHLC) for analysis.
I'm using the https://github.com/cirla/tulipy library to generate technical indicator values that take a time period as an option. For example, ADX with timeperiod=5 shows the ADX for the last 5 days.
Because of this time period, the generated array of indicator values is always shorter than the DataFrame, because the prices of the first 5 days are used to generate the ADX for day 6.
pdi14, mdi14 = ti.di(
    high=highData, low=lowData, close=closeData, period=14)
df['mdi_14'] = mdi14
df['pdi_14'] = pdi14
>> ValueError: Length of values does not match length of index
Unfortunately, unlike TA-Lib for example, this tulip library does not provide NaN values for those first couple of empty days...
Is there an easy way to prepend these NaN to the ndarray?
Or to insert into df at a certain index and have it create NaN for the rows before it automatically?
Thanks in advance, I've been researching for days!
Maybe make the shift yourself in the code?
import numpy as np

period = 14
pdi14, mdi14 = ti.di(
    high=highData, low=lowData, close=closeData, period=period
)
df['mdi_14'] = np.nan
# write into the last len(mdi14) rows; the original chained assignment
# df['mdi_14'][period - 1:] = mdi14 may silently fail to write back
df.loc[df.index[period - 1:], 'mdi_14'] = mdi14
I hope they will fill the first values with NaN in the lib in the future. It's dangerous to leave time-series data like this without any label.
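Alternatively, the padding can be done on the array itself before it touches the DataFrame (a sketch, assuming the indicator output is exactly period - 1 values shorter than the input):
import numpy as np

# prepend the missing leading values as NaN so the lengths line up
padded = np.concatenate([np.full(period - 1, np.nan), mdi14])
df['mdi_14'] = padded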
Full MCVE
import numpy as np
import pandas as pd

df = pd.DataFrame(1, range(10), list('ABC'))  # stand-in for the original frame
a = np.full((len(df) - 6, df.shape[1]), 2)    # stand-in for the short indicator output
b = np.full((6, df.shape[1]), np.nan)         # the missing leading rows
c = np.row_stack([b, a])                      # prepend the NaN block
d = pd.DataFrame(c, df.index, df.columns)     # realign with the original index
d
A B C
0 NaN NaN NaN
1 NaN NaN NaN
2 NaN NaN NaN
3 NaN NaN NaN
4 NaN NaN NaN
5 NaN NaN NaN
6 2.0 2.0 2.0
7 2.0 2.0 2.0
8 2.0 2.0 2.0
9 2.0 2.0 2.0
The C version of the tulip library includes a start function for each indicator (reference: https://tulipindicators.org/usage) that can be used to determine the output length of an indicator given a set of input options. Unfortunately, the Python bindings library, tulipy, does not appear to include this functionality. Instead, you have to resort to dynamically reassigning your index values to align the output with the original DataFrame.
Here is an example that uses the price series from the tulipy docs:
import numpy as np
import pandas as pd
import tulipy as ti

# Create the dataframe with close prices, ordered oldest to newest
# (a list rather than a set literal: pandas rejects sets, and a set
# would not preserve the order shown in the output below)
prices = pd.DataFrame(data=[81.06, 81.59, 82.87, 83, 83.61, 83.15, 82.84, 83.99, 84.55,
                            84.36, 85.53, 86.54, 86.89, 87.77, 87.29], columns=['close'])
# Compute the technical indicator using tulipy and save the result in a DataFrame
bbands = pd.DataFrame(data=np.transpose(ti.bbands(real=prices['close'].to_numpy(), period=5, stddev=2)))
# Dynamically realign the index; note from the tulip library documentation that the
# price/volume data is expected to be ordered "oldest to newest (index 0 is oldest)"
bbands.index += prices.index.max() - bbands.index.max()
# Put the indicator values with the original DataFrame
prices[['BBANDS_5_2_low', 'BBANDS_5_2_mid', 'BBANDS_5_2_up']] = bbands
prices.head(15)
close BBANDS_5_2_low BBANDS_5_2_mid BBANDS_5_2_up
0 81.06 NaN NaN NaN
1 81.59 NaN NaN NaN
2 82.87 NaN NaN NaN
3 83.00 NaN NaN NaN
4 83.61 80.530042 82.426 84.321958
5 83.15 81.494061 82.844 84.193939
6 82.84 82.533343 83.094 83.654657
7 83.99 82.471983 83.318 84.164017
8 84.55 82.417750 83.628 84.838250
9 84.36 82.435203 83.778 85.120797
10 85.53 82.511331 84.254 85.996669
11 86.54 83.142618 84.994 86.845382
12 86.89 83.536488 85.574 87.611512
13 87.77 83.870324 86.218 88.565676
14 87.29 85.288871 86.804 88.319129
I'm attempting to read a flat file into a DataFrame using pandas but can't seem to get the format right. My file has a variable number of fields per line and looks like this:
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOCinpt|MIME=application/synthesis+ssml|TXID=NUAN-20131203004552049-FCJNJKDCAAANPCKEAAAAAAAA-txt|TXSZ=1167|UCPU=31|SCPU=15
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOCsynd|INPT=1167|DURS=5120|RSTT=stop|UCPU=31|SCPU=15
TIME=20131203004552049|CHAN=FCJNJKDCAAANPCKEAAAAAAAA|EVNT=NVOClise|LUSED=0|LMAX=100|OMAX=95|LFEAT=tts|UCPU=0|SCPU=0
The field separator is |. I've pulled a list of all unique keys into keylist and am trying to use the following to read in the data:
keylist = ['TIME',
'CHAN',
# [truncated]
'DURS',
'RSTT']
test_fp = 'c:\\temp\\test_output.txt'
df = pd.read_csv(test_fp, sep='|', names=keylist)
This builds the DataFrame incorrectly, since I'm not specifying any way to recognize the key label in each line. I'm a little stuck and not sure which way to research; should I be using .read_json(), for example?
Not sure if there's a slick way to do this. Sometimes when the data structure is different enough from the norm, it's easiest to preprocess it on the Python side. Sure, it's not as fast, but since you can immediately save it in a more standard format, it's usually not worth worrying about.
One way:
with open("wfield.txt") as fp:
rows = (dict(entry.split("=",1) for entry in row.strip().split("|")) for row in fp)
df = pd.DataFrame.from_dict(rows)
which produces
>>> df
CHAN DURS EVNT INPT LFEAT LMAX LUSED \
0 FCJNJKDCAAANPCKEAAAAAAAA NaN NVOCinpt NaN NaN NaN NaN
1 FCJNJKDCAAANPCKEAAAAAAAA 5120 NVOCsynd 1167 NaN NaN NaN
2 FCJNJKDCAAANPCKEAAAAAAAA NaN NVOClise NaN tts 100 0
MIME OMAX RSTT SCPU TIME \
0 application/synthesis+ssml NaN NaN 15 20131203004552049
1 NaN NaN stop 15 20131203004552049
2 NaN 95 NaN 0 20131203004552049
TXID TXSZ UCPU
0 NUAN-20131203004552049-FCJNJKDCAAANPCKEAAAAAAA... 1167 31
1 NaN NaN 31
2 NaN NaN 0
[3 rows x 15 columns]
After you've got this, you can reshape as needed. (I'm not sure if you wanted to combine rows with the same TIME & CHAN or not.)
Edit: if you're using an older version of pandas that doesn't support building a DataFrame directly from a generator, you can build it from a list instead:
df = pd.DataFrame(list(rows))
but note that you may have to convert the columns from strings to numeric types after the fact.
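One hedged way to do that conversion afterwards is pd.to_numeric with errors='coerce'; the column list below is just the numeric-looking keys from the sample lines:
numeric_keys = ['TXSZ', 'UCPU', 'SCPU', 'INPT', 'DURS', 'LUSED', 'LMAX', 'OMAX']
for col in numeric_keys:
    if col in df.columns:
        df[col] = pd.to_numeric(df[col], errors='coerce')  # non-numbers become NaN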