I have a file that looks like this:
# Time Cm Cd Cl Cl(f) Cl(r) Cm Cd Cl Cl(f) Cl(r)
1.000000000000e+01 -5.743573465913e-01 -5.860160539688e-01 -1.339511756657e+00 -1.244113224920e+00 -9.539853173733e-02
2.000000000000e+01 6.491397073110e-02 1.320098727949e-02 6.147195262817e-01 3.722737338720e-01 2.424457924098e-01
3.000000000000e+01 3.554043329234e-02 4.296597501519e-01 7.901295853361e-01 4.306052259604e-01 3.595243593757e-01
Is there any way I can tell pandas that Time is the first column name?
I read it this way:
dat = pd.read_csv('%sdt.dat'%s, delim_whitespace=True)
This somehow tells pandas that the first column is named #:
dat.columns
Index(['#', 'Time', 'Cm', 'Cd', 'Cl', 'Cl(f)', 'Cl(r)', 'Cm.1', 'Cd.1', 'Cl.1', 'Cl(f).1', 'Cl(r).1'],
dtype='object')
How can I tell pandas' read_csv to ignore the first two characters in the header or otherwise get the column names I want from read_csv?
Here is one potential work-around:
headers = pd.read_csv('%sdt.dat'%s, delim_whitespace=True, nrows=0).columns[1:]
dat = pd.read_csv('%sdt.dat'%s, delim_whitespace=True, header=None, skiprows=1, names=headers)
Alternatively, you could fix the columns with some post-processing:
col_mapper = {old:new for old, new in zip(dat.columns, dat.columns[1:])}
dat = dat.iloc[:, :-1].rename(col_mapper, axis=1)
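Another option is to build the column names from the header line yourself and read the data without a header. A minimal sketch, assuming the file is literally named dt.dat:
with open('dt.dat') as fh:
    raw_names = fh.readline().lstrip('#').split()  # 'Time', 'Cm', 'Cd', ... (some repeated)
dat = pd.read_csv('dt.dat', delim_whitespace=True, header=None, skiprows=1)
dat.columns = raw_names[:dat.shape[1]]  # keep as many names as there are data columns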
Instead of using any whitespace as a separator, you can require at least two whitespace characters, since your data appears to be separated by multiple spaces. This will name the first column '# Time', which you can rename afterwards to remove the '# ' prefix:
df = pd.read_csv('%sdt.dat'%s, sep=r'\s{2,}', engine='python')
print(df)
# Time Cm Cd Cl Cl(f) Cl(r) Cm.1 Cd.1 Cl.1 Cl(f).1 Cl(r).1
0 10.0 -0.574357 -0.586016 -1.339512 -1.244113 -0.095399 NaN NaN NaN NaN NaN
1 20.0 0.064914 0.013201 0.614720 0.372274 0.242446 NaN NaN NaN NaN NaN
2 30.0 0.035540 0.429660 0.790130 0.430605 0.359524 NaN NaN NaN NaN NaN
df.columns = ['Time'] + list(df.columns[1:])
print(df)
Time Cm Cd Cl Cl(f) Cl(r) Cm.1 Cd.1 Cl.1 Cl(f).1 Cl(r).1
0 10.0 -0.574357 -0.586016 -1.339512 -1.244113 -0.095399 NaN NaN NaN NaN NaN
1 20.0 0.064914 0.013201 0.614720 0.372274 0.242446 NaN NaN NaN NaN NaN
2 30.0 0.035540 0.429660 0.790130 0.430605 0.359524 NaN NaN NaN NaN NaN
I have an Excel sheet like this.
If I search using the method below, I get only 1 row.
df4 = df.loc[(df['NAME '] == 'HIR')]
df4
But I want to get all rows associated with this name (and likewise for birthdate and place).
Expected output:
How can I achieve this? How can I bind these rows together?
You need to forward fill the data with ffill():
df = df.replace('', np.nan)  # needed if the blanks are empty strings rather than real null values
df['NAME '] = df['NAME '].ffill()
df4 = df.loc[(df['NAME '] == 'HIR')]
df4
That will then bring up all of the rows when you use loc. You can do this on other columns as well.
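For example, a minimal sketch that forward fills several of these columns at once (the BIRTHDATE and PLACE column names are assumptions based on the screenshot):
import numpy as np
# treat empty strings as missing values, then forward fill the merged-cell columns
df = df.replace('', np.nan)
df[['NAME ', 'BIRTHDATE', 'PLACE']] = df[['NAME ', 'BIRTHDATE', 'PLACE']].ffill()
df4 = df.loc[df['NAME '] == 'HIR']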
First you need to remove the blank rows in your Excel sheet, then fill the values from the previous row:
import pandas as pd
df = pd.read_excel('so.xlsx')
df = df[~df['HOBBY'].isna()]
df[['SNO','NAME']] = df[['SNO','NAME']].ffill()
df
SNO NAME HOBBY COURSE BIRTHDATE PLACE
0 1.0 HIR DANCING BTECH 1990.0 USA
1 1.0 HIR MUSIC MTECH NaN NaN
2 1.0 HIR TRAVELLING AI NaN NaN
4 2.0 BH GAMES BTECH 1992.0 INDIA
5 2.0 BH BOOKS AI NaN NaN
6 2.0 BH SWIMMING NaN NaN NaN
7 2.0 BH MUSIC NaN NaN NaN
8 2.0 BH DANCING NaN NaN NaN
Basically, what I'm working with is a dataframe with all of the parking tickets given out in one year. Every ticket takes up its own row in the unaltered dataframe. What I want to do is group all the tickets by date so that I have 2 columns (date, and the amount of tickets issued on that day). Right now I can achieve that; however, the date is not considered a column by pandas.
import numpy as np
import matplotlib as mp
import pandas as pd
import matplotlib.pyplot as plt
df1 = pd.read_csv('C:/Users/brett/OneDrive/Data Science Fundamentals/Parking_Tags_Data_2012.csv')
unnecessary_cols = ['tag_number_masked', 'infraction_code',
'infraction_description', 'set_fine_amount', 'time_of_infraction',
'location1', 'location2', 'location3', 'location4',
'province']
df1 = df1.drop(unnecessary_cols, 1)
df1 = (df1.groupby('date_of_infraction').agg({'date_of_infraction':'count'}))
df1['frequency'] = (df1.groupby('date_of_infraction').agg({'date_of_infraction':'count'}))
print (df1)
df1 = (df1.iloc[121:274])
The output is:
date_of_infraction date_of_infraction frequency
20120101 1059 NaN
20120102 2711 NaN
20120103 6889 NaN
20120104 8030 NaN
20120105 7991 NaN
20120106 8693 NaN
20120107 7237 NaN
20120108 5061 NaN
20120109 7974 NaN
20120110 8872 NaN
20120111 9110 NaN
20120112 8667 NaN
20120113 7247 NaN
20120114 7211 NaN
20120115 6116 NaN
20120116 9168 NaN
20120117 8973 NaN
20120118 9016 NaN
20120119 7998 NaN
20120120 8214 NaN
20120121 6400 NaN
20120122 6355 NaN
20120123 7777 NaN
20120124 8628 NaN
20120125 8527 NaN
20120126 8239 NaN
20120127 8667 NaN
20120128 7174 NaN
20120129 5378 NaN
20120130 7901 NaN
... ... ...
20121202 5342 NaN
20121203 7336 NaN
20121204 7258 NaN
20121205 8629 NaN
20121206 8893 NaN
20121207 8479 NaN
20121208 7680 NaN
20121209 5357 NaN
20121210 7589 NaN
20121211 8918 NaN
20121212 9149 NaN
20121213 7583 NaN
20121214 8329 NaN
20121215 7072 NaN
20121216 5614 NaN
20121217 8038 NaN
20121218 8194 NaN
20121219 6799 NaN
20121220 7102 NaN
20121221 7616 NaN
20121222 5575 NaN
20121223 4403 NaN
20121224 5492 NaN
20121225 673 NaN
20121226 1488 NaN
20121227 4428 NaN
20121228 5882 NaN
20121229 3858 NaN
20121230 3817 NaN
20121231 4530 NaN
Essentially, I want to move all the columns over by one to the right. Right now pandas only considers the last two columns as actual columns. I hope this made sense.
The count of infractions per date should be achievable with just one call to groupby. Try this:
import numpy as np
import pandas as pd
df1 = pd.read_csv('C:/Users/brett/OneDrive/Data Science Fundamentals/Parking_Tags_Data_2012.csv')
unnecessary_cols = ['tag_number_masked', 'infraction_code',
'infraction_description', 'set_fine_amount', 'time_of_infraction',
'location1', 'location2', 'location3', 'location4',
'province']
df1 = df1.drop(columns=unnecessary_cols)
# reset_index() to move the dates into their own column
counts = df1.groupby('date_of_infraction').count().reset_index()
print(counts)
Note that any dates with zero tickets will not show up as 0; instead, they will simply be absent from counts.
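If you do need the missing dates to show up with a count of 0, one sketch (assuming date_of_infraction really is the integer YYYYMMDD format shown in your output) is to parse the dates and reindex over the full year:
# parse the YYYYMMDD integers into real dates
counts['date_of_infraction'] = pd.to_datetime(counts['date_of_infraction'].astype(str), format='%Y%m%d')
# reindex over every day of 2012 so days with no tickets appear as 0
full_range = pd.date_range('2012-01-01', '2012-12-31', freq='D')
counts = (counts.set_index('date_of_infraction')
                .reindex(full_range, fill_value=0)
                .rename_axis('date_of_infraction')
                .reset_index())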
If this doesn't work, it would be helpful for us to see the first few rows of df1 after you drop the unnecessary columns.
Try using as_index=False.
For example:
import numpy as np
import pandas as pd
data = {"date_of_infraction":["20120101", "20120101", "20120202", "20120202"],
"foo":np.random.random(4)}
df = pd.DataFrame(data)
df
date_of_infraction foo
0 20120101 0.681286
1 20120101 0.826723
2 20120202 0.669367
3 20120202 0.766019
(df.groupby("date_of_infraction", as_index=False) # <-- acts like reset_index()
.foo.count()
.rename(columns={"foo":"frequency"})
)
date_of_infraction frequency
0 20120101 2
1 20120202 2
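If you only need the counts, value_counts is another option that sidesteps the index question entirely; a sketch, assuming the same df as above:
frequency = (df["date_of_infraction"]
             .value_counts()                     # counts, sorted by frequency
             .rename_axis("date_of_infraction")  # name the index so reset_index labels the column
             .reset_index(name="frequency")
             .sort_values("date_of_infraction")) # back to chronological order
print(frequency)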
I have gathered data from the penultimate worksheet in this Excel file, along with all the data in the last worksheet from "Maturity Years" of 5.5 onward. I have code that does this. However, I am now looking to restructure the dataframe into the format described below, and I am struggling to do this.
My code is below.
import urllib2
import pandas as pd
import os
import xlrd
url = 'http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/uknom05_mdaily.xls'
socket = urllib2.urlopen(url)
xd = pd.ExcelFile(socket)
#Had to do this based on actual sheet_names rather than index as there are some extra sheet names in xd.sheet_names
df1 = xd.parse('4. spot curve', header=None)
df1 = df1.loc[:, df1.loc[3, :] >= 5.5] #Assumes the maturity is always on the 4th line of the sheet
df2 = xd.parse('3. spot, short end', header=None)
bigdata = df1.append(df2, ignore_index=True)
Edit: The Dataframe currently looks as follows; unfortunately it is pretty disorganized:
0 1 2 3 4 5 6 \
0 NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN
2 Maturity NaN NaN NaN NaN NaN NaN
3 years: NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN
5 2005-01-03 00:00:00 NaN NaN NaN NaN NaN NaN
6 2005-01-04 00:00:00 NaN NaN NaN NaN NaN NaN
... ... ... .. .. ... ... ...
5410 2015-04-20 00:00:00 NaN NaN NaN NaN 0.367987 0.357069
5411 2015-04-21 00:00:00 NaN NaN NaN NaN 0.362478 0.352581
It has 5440 rows and 61 columns.
However, I want the dataframe to be of the format:
- Date (which is the 2nd column in the current Dataframe)
- Update time (which would just be a column with datetime.datetime.now())
- Currency (which would just be a column with 'GBP')
- Maturity Date
- Yield (the data from the spreadsheet)
I think columns 1, 2, 3, 4, 5 and 6 contain yield curve data. However, I am unsure where the data associated with "Maturity Years" is in the current DataFrame.
I use the pandas.io.excel.read_excel function to read the xls from the URL. Here is one way to clean this UK yield curve dataset.
Note: executing the cubic spline interpolation via the apply function takes quite a while (about 2 minutes on my PC). It interpolates from about 100 points to 300 points, row by row (2628 rows in total).
from pandas.io.excel import read_excel
import pandas as pd
import numpy as np
url = 'http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/uknom05_mdaily.xls'
# check the sheet number, spot: 9/9, short end 7/9
spot_curve = read_excel(url, sheetname=8)
short_end_spot_curve = read_excel(url, sheetname=6)
# preprocessing spot_curve
# ==============================================
# do a few inspection on the table
spot_curve.shape
spot_curve.iloc[:, 0]
spot_curve.iloc[:, -1]
spot_curve.iloc[0, :]
spot_curve.iloc[-1, :]
# do some cleaning, keep NaN for now, as forward fill NaN is not recommended for yield curve
spot_curve.columns = spot_curve.loc['years:']
spot_curve.columns.name = 'years'
valid_index = spot_curve.index[4:]
spot_curve = spot_curve.loc[valid_index]
# remove all maturities within 5 years as those are duplicated in short-end file
col_mask = spot_curve.columns.values > 5
spot_curve = spot_curve.iloc[:, col_mask]
# now spot_curve is ready, check it
spot_curve.head()
spot_curve.tail()
spot_curve.shape
Out[184]: (2715, 40)
# preprocessing short end spot_curve
# ==============================================
short_end_spot_curve.columns = short_end_spot_curve.loc['years:']
short_end_spot_curve.columns.name = 'years'
valid_index = short_end_spot_curve.index[4:]
short_end_spot_curve = short_end_spot_curve.loc[valid_index]
short_end_spot_curve.head()
short_end_spot_curve.tail()
short_end_spot_curve.shape
Out[185]: (2715, 60)
# merge these two, time index are identical
# ==============================================
combined_data = pd.concat([short_end_spot_curve, spot_curve], axis=1, join='outer')
# sort the maturity from short end to long end
combined_data.sort_index(axis=1, inplace=True)
combined_data.head()
combined_data.tail()
combined_data.shape
# deal with NaN: the soundest approach is to fit a no-arbitrage NSS curve
# however, this is not currently supported in python.
# do a cubic spline instead
# ==============================================
# if more than half of the maturity points are NaN, then interpolation is likely to be unstable, so I'll remove all rows with NaNs count greater than 50
def filter_func(group):
    return group.isnull().sum(axis=1) <= 50
combined_data = combined_data.groupby(level=0).filter(filter_func)
# no. of rows down from 2715 to 2628
combined_data.shape
Out[186]: (2628, 100)
from scipy.interpolate import interp1d
# mapping points, monthly frequency, 1 mon to 25 years
maturity = pd.Series((np.arange(12 * 25) + 1) / 12)
# do the interpolation day by day
key = lambda x: x.date
by_day = combined_data.groupby(level=0)
# write out apply function
def interpolate_maturities(group):
    # transpose row vector to column vector and drop all NaNs
    a = group.T.dropna().reset_index()
    f = interp1d(a.iloc[:, 0], a.iloc[:, 1], kind='cubic', bounds_error=False, assume_sorted=True)
    return pd.Series(maturity.apply(f).values, index=maturity.values)
# this may take a while .... apply provides flexibility but speed is not good
cleaned_spot_curve = by_day.apply(interpolate_maturities)
# a quick look on the data
cleaned_spot_curve.iloc[[1,1000, 2000], :].T.plot(title='Cross-Maturity Yield Curve')
cleaned_spot_curve.iloc[:, [23, 59, 119]].plot(title='Time-Series')
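If you then want something close to the format asked for in the question (Date, Update time, Currency, Maturity, Yield), a minimal sketch building on cleaned_spot_curve might look like this (the Update time and Currency values follow the question's description; the exact layout is an assumption):
import datetime
# stack the date x maturity grid into long format: one row per date/maturity pair
long_format = cleaned_spot_curve.stack().reset_index()
long_format.columns = ['Date', 'Maturity', 'Yield']
long_format['Update time'] = datetime.datetime.now()
long_format['Currency'] = 'GBP'
long_format = long_format[['Date', 'Update time', 'Currency', 'Maturity', 'Yield']]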
I've got an issue with Pandas not replacing certain bits of text correctly...
# Create blank column
csvdata["CTemp"] = ""
# Create a copy of the data in "CDPure"
dcol = csvdata.CDPure
# Fill "CTemp" with the data from "CDPure" and replace and/or remove certain parts
csvdata['CTemp'] = dcol.str.replace(" (AMI)", "").replace(" N/A", "Non")
Yet when I print, nothing has been replaced, as can be seen below by running print csvdata[-50:].head(50):
Pole KI DE Score STAT CTemp
4429 NaN NaN NaN 42 NaN Data N/A
4430 NaN NaN NaN 23.43 NaN Data (AMI)
4431 NaN NaN NaN 7.05 NaN Data (AMI)
4432 NaN NaN NaN 9.78 NaN Data
4433 NaN NaN NaN 169.68 NaN Data (AMI)
4434 NaN NaN NaN 26.29 NaN Data N/A
4435 NaN NaN NaN 83.11 NaN Data N/A
NOTE: The CSV is rather big so I have to use pandas.set_option('display.max_columns', 250) to be able to print the above.
Does anyone know how I can make it replace those parts correctly in pandas?
EDIT: I've tried .str.replace("", "") and also just .replace("", "").
Example CSV:
No,CDPure,Blank
1,Data Test,
2,Test N/A,
3,Data N/A,
4,Test Data,
5,Bla,
5,Stack,
6,Over (AMI),
7,Flow (AMI),
8,Test (AMI),
9,Data,
10,Ryflex (AMI),
Example Code:
# Import pandas
import pandas
# Open csv (I have to keep it all as dtype object otherwise I can't do the rest of my script)
csvdata = pandas.read_csv('test.csv', dtype=object)
# Create blank column
csvdata["CTemp"] = ""
# Create a copy of the data in "CDPure"
dcol = csvdata.CDPure
# Fill "CTemp" with the data from "CDPure" and replace and/or remove certain parts
csvdata['CTemp'] = dcol.str.replace(" (AMI)", "").str.replace(" N/A", " Non")
# Print
print csvdata.head(11)
Output:
No CDPure Blank CTemp
0 1 Data Test NaN Data Test
1 2 Test N/A NaN Test Non
2 3 Data N/A NaN Data Non
3 4 Test Data NaN Test Data
4 5 Bla NaN Bla
5 5 Stack NaN Stack
6 6 Over (AMI) NaN Over (AMI)
7 7 Flow (AMI) NaN Flow (AMI)
8 8 Test (AMI) NaN Test (AMI)
9 9 Data NaN Data
10 10 Ryflex (AMI) NaN Ryflex (AMI)
str.replace interprets its argument as a regular expression, so you need to escape the parentheses: dcol.str.replace(r" \(AMI\)", "").str.replace(" N/A", " Non").
This does not appear to be adequately documented; the docs mention that split and replace "take regular expressions, too", but don't make it clear that they always interpret their argument as a regular expression.
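In newer pandas versions str.replace also takes a regex parameter, so another option is to turn regex matching off and keep the literal strings; a sketch, assuming the same csvdata as above:
# treat the patterns as literal strings instead of regular expressions
csvdata['CTemp'] = (csvdata['CDPure']
                    .str.replace(" (AMI)", "", regex=False)
                    .str.replace(" N/A", " Non", regex=False))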
I have 10 .csv files with two columns. For example
file1.csv
Bact1,[1821932:1822487](+)
Bact2,[555760:556294](+)
Bact3,[2901866:2902424](-)
Bact4,[1104980:1105544](+)
file2.csv
Bact1,[1973928:1975194](-)
Bact2,[972152:973499](+)
Bact3,[3001035:3002739](-)
Bact4,[3331158:3332481](+)
Bact5,[712517:713771](+)
Bact5,[1376120:1377386](-)
file3.csv
Bact6,[4045708:4047781](+)
and so on up to file10.csv. Bact1 represents a bacterial species, and the numbers together with the sign represent the position of a gene. Each file represents a different gene, and there are duplicates, as in file2.csv.
I wanted to merge these files so that I have something like this
Bact1 [1821932:1822487](+) [1973928:1975194](-) NaN
Bact2 [555760:556294](+) [972152:973499](+) NaN
Bact3 [2901866:2902424](-) [3001035:3002739](-) NaN
Bact4 [1104980:1105544](+) [3331158:3332481](+) NaN
Bact5 NaN [712517:713771](+) NaN
Bact5 NaN [1376120:1377386](-) NaN
Bact6 NaN NaN [4045708:4047781](+)
I have tried to use the pandas package in Python, but it seems like most of the functions are geared towards merging two dataframes, not more than two, or I am missing something.
I just started programming in Python last week (I normally use R), so I am getting stuck on what is, or at least seems like, a simple thing.
Right now I am using:
df = {}
for x in range(1, 11):
    df[x] = pandas.read_csv("file%s.csv" % (x), header=None, index_col=[0])
    df[x].columns = ['gene%s' % (x)]
dfjoin = df[1].join([df[2], df[3], df[4], df[5], df[6], df[7], df[8], df[9], df[10]])
Result:
0 gene1 gene2 gene3
Starkeya-novella-DSM-506 NaN [728886:730173](+) [731445:732615](+)
Starkeya-novella-DSM-506 NaN [728886:730173](+) [9662:10994](+)
Starkeya-novella-DSM-506 NaN [728886:730173](+) [9662:10994](+)
Starkeya-novella-DSM-506 NaN [728886:730173](+) [9662:10994](+)
Notice that gene2 and gene3 have duplicated results copied across rows.
Assuming you've read these in as DataFrames as follows:
In [11]: df1 = pd.read_csv('file1.csv', sep=',', header=None, index_col=[0], names=['bact', 'file1'])
In [12]: df1
Out[12]:
file1
bact
Bact1 [1821932:1822487](+)
Bact2 [555760:556294](+)
Bact3 [2901866:2902424](-)
Bact4 [1104980:1105544](+)
Then you can simply join them:
In [21]: df1.join([df2, df3])
Out[21]:
file1 file2 file3
bact
Bact1 [1821932:1822487](+) [1973928:1975194](-) NaN
Bact2 [555760:556294](+) [972152:973499](+) NaN
Bact3 [2901866:2902424](-) [3001035:3002739](-) NaN
Bact4 [1104980:1105544](+) [3331158:3332481](+) NaN
Bact5 NaN [712517:713771](+) NaN
Bact5 NaN [1376120:1377386](-) NaN
Bact6 NaN NaN [4045708:4047781](+)
I changed your example data a little; here is the code:
import pandas as pd
import io
data = {
"file1":"""Bact1,[1821932:1822487](+)
Bact2,[555760:556294](+)
Bact3,[2901866:2902424](-)
Bact4,[1104980:1105544](+)
Bact5,[1104981:1105544](+)
Bact5,[1104982:1105544](+)""",
"file2":"""Bact1,[1973928:1975194](-)
Bact2,[972152:973499](+)
Bact3,[3001035:3002739](-)
Bact4,[3331158:3332481](+)
Bact5,[712517:713771](+)
Bact5,[1376120:1377386](-)
Bact5,[1376121:1377386](-)""",
"file3":"""Bact4,[3331150:3332481](+)
Bact6,[4045708:4047781](+)"""}
def read_file(f):
    s = pd.read_csv(f, header=None, index_col=0, squeeze=True)
    return s.groupby(s.index).apply(lambda s: pd.Series(s.values))
series = {key: read_file(io.StringIO(unicode(text)))
          for key, text in data.items()}
print pd.concat(series, axis=1)
output:
file1 file2 file3
0
Bact1 0 [1821932:1822487](+) [1973928:1975194](-) NaN
Bact2 0 [555760:556294](+) [972152:973499](+) NaN
Bact3 0 [2901866:2902424](-) [3001035:3002739](-) NaN
Bact4 0 [1104980:1105544](+) [3331158:3332481](+) [3331150:3332481](+)
Bact5 0 [1104981:1105544](+) [712517:713771](+) NaN
1 [1104982:1105544](+) [1376120:1377386](-) NaN
2 NaN [1376121:1377386](-) NaN
Bact6 0 NaN NaN [4045708:4047781](+)
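A version of the same idea that reads the actual files instead of inline strings might look like this (a sketch, assuming the files are named file1.csv through file10.csv as in the question):
import pandas as pd
frames = {}
for i in range(1, 11):
    s = pd.read_csv('file%d.csv' % i, header=None, index_col=0).iloc[:, 0]
    # number repeated species 0, 1, 2, ... so duplicates line up instead of multiplying
    s.index = pd.MultiIndex.from_arrays([s.index, s.groupby(level=0).cumcount()])
    frames['gene%d' % i] = s
print(pd.concat(frames, axis=1))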