I am working through ThinkStats, but decided to learn Pandas along the way as well. The code below reads data from a file, does some checking, and then appends the data to a list. I end up with several lists containing the data I need. The code below works (except for scrambling up the column order...).
My question is: What is the best way to build a dataframe from these lists? More generally, am I accomplishing my goal in the most efficient manner?
preglength = []
caseid = []
outcome = []
birthorder = []
finalweight = []
with open('2002FemPreg.dat') as f:
    for line in f:
        caseid.append(int(line[0:13].strip()))
        preglength.append(int(line[274:276].strip()))
        outcome.append(int(line[276].strip()))
        try:
            birthorder.append(int(line[277:279]))
        except ValueError:
            birthorder.append(np.nan)
        finalweight.append(float(line[422:440].strip()))
c1 = pd.Series(caseid)
c2 = pd.Series(preglength)
c3 = pd.Series(outcome)
c4 = pd.Series(birthorder)
c5 = pd.Series(finalweight)
data = pd.DataFrame({'caseid': c1,'preglength': c2,'outcome': c3,'birthorder': c4,'weight': c5})
print(data.head())
I would probably use read_fwf:
>>> df = pd.read_fwf("2002FemPreg.dat",
... colspecs=[(0,13), (274, 276), (276, 277), (277, 279), (422, 440)],
... names=["caseid", "preglength", "outcome", "birthorder", "finalweight"])
>>> df.head()
caseid preglength outcome birthorder finalweight
0 1 39 1 1 6448.271112
1 1 39 1 2 6448.271112
2 2 39 1 1 12999.542264
3 2 39 1 2 12999.542264
4 2 39 1 3 12999.542264
[5 rows x 5 columns]
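If you do want to keep building the lists yourself, a minimal sketch using the list names from your code: pandas can build the frame straight from a dict of lists (no intermediate Series needed), and the columns= argument pins the column order, which avoids the scrambled columns you mention.
import pandas as pd

# build the frame directly from the lists; columns= fixes the column order
data = pd.DataFrame(
    {'caseid': caseid, 'preglength': preglength, 'outcome': outcome,
     'birthorder': birthorder, 'weight': finalweight},
    columns=['caseid', 'preglength', 'outcome', 'birthorder', 'weight'])
print(data.head())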
My data frame contains 10,000,000 rows! After group by, ~ 9,000,000 sub-frames remain to loop through.
The code is:
data = pd.read_csv('big.csv')
for id, new_df in data.groupby(level=0):  # look at each mini df and do some analysis
    # some code for each of the small data frames
This is super inefficient, and the code has been running for 10+ hours now.
Is there a way to speed it up?
Full Code:
d = pd.DataFrame()  # new df to populate
print('Start of the loop')
for id, new_df in data.groupby(level=0):
    c = [new_df.iloc[i:] for i in range(len(new_df.index))]
    x = pd.concat(c, keys=new_df.index).reset_index(level=(2,3), drop=True).reset_index()
    x = x.set_index(['level_0','level_1', x.groupby(['level_0','level_1']).cumcount()])
    d = pd.concat([d, x])
To get the data:
data = pd.read_csv('https://raw.githubusercontent.com/skiler07/data/master/so_data.csv', index_col=0).set_index(['id','date'])
Note:
Most ids will only have 1 date, which indicates only 1 visit. For ids with more visits, I would like to structure them in a 3d format, e.g. store all of their visits in the 2nd dimension out of 3. The output shape is (id, visits, features).
Here is one way to speed that up. It adds the desired new rows in code that processes the rows directly, which saves the overhead of constantly constructing small dataframes. Your sample of 100,000 rows runs in a couple of seconds on my machine, while your code takes > 100 seconds on only 10,000 rows of the sample data, so this seems to be a couple of orders of magnitude faster.
Code:
def make_3d(csv_filename):

    def make_3d_lines(a_df):
        a_df['depth'] = 0
        depth = 0
        prev = None
        accum = []
        for row in a_df.values.tolist():
            row[0] = 0
            key = row[1]
            if key == prev:
                depth += 1
                accum.append(row)
            else:
                if depth == 0:
                    yield row
                else:
                    depth = 0
                    to_emit = []
                    for i in range(len(accum)):
                        date = accum[i][2]
                        for j, r in enumerate(accum[i:]):
                            to_emit.append(list(r))
                            to_emit[-1][0] = j
                            to_emit[-1][2] = date
                    for r in to_emit[1:]:
                        yield r
                accum = [row]
            prev = key

    df_data = pd.read_csv(csv_filename)
    df_data.columns = ['depth'] + list(df_data.columns)[1:]
    new_df = pd.DataFrame(
        make_3d_lines(df_data.sort_values('id date'.split())),
        columns=df_data.columns
    ).astype(dtype=df_data.dtypes.to_dict())
    return new_df.set_index('id date'.split())
Test Code:
start_time = time.time()
df = make_3d('big-data.csv')
print(time.time() - start_time)
df = df.drop(columns=['feature%d' % i for i in range(3, 25)])
print(df[df['depth'] != 0].head(10))
Results:
1.7390995025634766
depth feature0 feature1 feature2
id date
207555809644681 20180104 1 0.03125 0.038623 0.008130
247833985674646 20180106 1 0.03125 0.004378 0.004065
252945024181083 20180107 1 0.03125 0.062836 0.065041
20180107 2 0.00000 0.001870 0.008130
20180109 1 0.00000 0.001870 0.008130
329567241731951 20180117 1 0.00000 0.041952 0.004065
20180117 2 0.03125 0.003101 0.004065
20180117 3 0.00000 0.030780 0.004065
20180118 1 0.03125 0.003101 0.004065
20180118 2 0.00000 0.030780 0.004065
I believe your approach for feature engineering could be done better, but I will stick to answering your question.
In Python, iterating over a Dictionary is way faster than iterating over a DataFrame
Here is how I managed to process a huge pandas DataFrame (~100,000,000 rows):
# reset the Dataframe index to get level 0 back as a column in your dataset
df = data.reset_index() # the index will be (id, date)
# split the DataFrame based on id
# and store the splits as Dataframes in a dictionary using id as key
d = dict(tuple(df.groupby('id')))
# iterate over the Dictionary and process the values
for key, value in d.items():
    pass  # each value is a Dataframe
# concat the values and get the original (processed) Dataframe back
df2 = pd.concat(d.values(), ignore_index=True)
Modified @Stephen's code:
def make_3d(dataset):

    def make_3d_lines(a_df):
        a_df['depth'] = 0                       # sets all depth from (1 to n) to 0
        depth = 1                               # initiate from 1, so that the first loop is correct
        prev = None
        accum = []                              # accumulates blocks of data belonging to a given user
        for row in a_df.values.tolist():        # for each row in our dataset
            row[0] = 0                          # NOT SURE
            key = row[1]                        # this is the id of the row
            if key == prev:                     # if this row's id matches the previous row's id, append together
                depth += 1
                accum.append(row)
            else:                               # else if this id is new, the previous block is completed -> process it
                if depth == 0:                  # previous id appeared only once -> get that row from accum
                    yield accum[0]              # also remember that depth = 0
                else:                           # process the block and emit each row
                    depth = 0
                    to_emit = []                # prepare to emit the list
                    for i in range(len(accum)): # for each unique day in the accumulated list
                        date = accum[i][2]      # define date to be the first date it sees
                        for j, r in enumerate(accum[i:]):
                            to_emit.append(list(r))
                            to_emit[-1][0] = j      # define the depth
                            to_emit[-1][2] = date   # define the date
                    for r in to_emit[0:]:
                        yield r
                accum = [row]
            prev = key

    df_data = dataset.reset_index()
    df_data.columns = ['depth'] + list(df_data.columns)[1:]
    new_df = pd.DataFrame(
        make_3d_lines(df_data.sort_values('id date'.split(), ascending=[True, False])),
        columns=df_data.columns
    ).astype(dtype=df_data.dtypes.to_dict())
    return new_df.set_index('id date'.split())
Testing:
t = pd.DataFrame(data={'id':[1,1,1,1,2,2,3,3,4,5], 'date':[20180311,20180310,20180210,20170505,20180312,20180311,20180312,20180311,20170501,20180304], 'feature':[10,20,45,1,14,15,20,20,13,11],'result':[1,1,0,0,0,0,1,0,1,1]})
t = t.reindex(columns=['id','date','feature','result'])
print(t)
id date feature result
0 1 20180311 10 1
1 1 20180310 20 1
2 1 20180210 45 0
3 1 20170505 1 0
4 2 20180312 14 0
5 2 20180311 15 0
6 3 20180312 20 1
7 3 20180311 20 0
8 4 20170501 13 1
9 5 20180304 11 1
Output
depth feature result
id date
1 20180311 0 10 1
20180311 1 20 1
20180311 2 45 0
20180311 3 1 0
20180310 0 20 1
20180310 1 45 0
20180310 2 1 0
20180210 0 45 0
20180210 1 1 0
20170505 0 1 0
2 20180312 0 14 0
20180312 1 15 0
20180311 0 15 0
3 20180312 0 20 1
20180312 1 20 0
20180311 0 20 0
4 20170501 0 13 1
With lengthy column names, DataFrames will display in a very messy form seemingly no matter what options are set.
Info: I'm in Jupyter QtConsole, pandas 0.20.1, with the following relevant options specified at startup:
pd.set_option('display.max_colwidth', 20)
pd.set_option('expand_frame_repr', False)
pd.set_option('display.max_rows', 25)
Question: how can I truncate the DataFrame if necessary rather than wrapping the columns to the next line, while keeping expand_frame_repr=False?
Here's an example. Again, the issue doesn't depend on the number of columns but on the length of the column names.
This will not cause an issue:
df = pd.DataFrame(np.random.randn(1000, 1000),
                  columns=['col' + str(i) for i in range(1000)])
The output is perfectly readable.
The same DataFrame with long column names causes the issue I'm talking about:
df = pd.DataFrame(np.random.randn(1000, 1000),
                  columns=['very_long_col_name_' + str(i) for i in range(1000)])
Is there any way to conform the second output to be like the first that I'm missing? (Through specifying an option, not through using .iloc every time I want to view.)
Use max_columns
from string import ascii_letters
df = pd.DataFrame(np.random.randint(10, size=(5, 52)), columns=list(ascii_letters))
with pd.option_context(
    'display.max_colwidth', 20,
    'expand_frame_repr', False,
    'display.max_rows', 25,
    'display.max_columns', 5,
):
    print(df.add_prefix('really_long_column_name_'))
really_long_column_name_a really_long_column_name_b ... really_long_column_name_Y really_long_column_name_Z
0 8 1 ... 1 9
1 8 5 ... 2 1
2 5 0 ... 9 9
3 6 8 ... 0 9
4 1 2 ... 7 1
[5 rows x 52 columns]
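If you want this truncation all the time rather than inside a context manager, the same option can be set globally; a small sketch:
pd.set_option('display.max_columns', 5)  # anything beyond 5 columns collapses into '...'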
Another idea... Obviously not exactly what you want, but maybe you can twist it to your needs.
d1 = df.add_suffix('_really_long_column_name')
with pd.option_context('display.max_colwidth', 4, 'expand_frame_repr', False):
    mw = pd.get_option('display.max_colwidth')
    print(d1.rename(columns=lambda x: x[:mw-3] + '...' if len(x) > mw else x))
a... b... c... d... e... f... g... h... i... j... ... Q... R... S... T... U... V... W... X... Y... Z...
0 6 5 5 5 8 3 5 0 7 6 ... 9 0 6 9 6 8 4 0 6 7
1 0 5 4 7 2 5 4 3 8 7 ... 8 1 5 3 5 9 4 5 5 3
2 7 2 1 6 5 1 0 1 3 1 ... 6 7 0 9 9 5 2 8 2 2
3 1 8 7 1 4 5 5 8 8 3 ... 3 6 5 7 1 0 8 1 4 0
4 7 5 6 2 4 9 7 9 0 5 ... 6 8 1 6 3 5 4 2 3 2
Looks like it will need an enhancement. The relevant code in the repr function appears to be here:
max_rows = get_option("display.max_rows")
max_cols = get_option("display.max_columns")
show_dimensions = get_option("display.show_dimensions")
if get_option("display.expand_frame_repr"):
width, _ = console.get_console_size()
else:
width = None
self.to_string(buf=buf, max_rows=max_rows, max_cols=max_cols,
line_width=width, show_dimensions=show_dimensions)
So either you pass expand_frame_repr=True and it wraps on the line width, or you pass expand_frame_repr=False and it shouldn't. But it looks like there is a bug in the code (this should be pandas 0.20.3 iirc):
in pd.io.formats.format.DataFrameFormatter:
def _chk_truncate(self):
    """
    Checks whether the frame should be truncated. If so, slices
    the frame up.
    """
    from pandas.core.reshape.concat import concat

    # Column of which first element is used to determine width of a dot col
    self.tr_size_col = -1

    # Cut the data to the information actually printed
    max_cols = self.max_cols
    max_rows = self.max_rows

    if max_cols == 0 or max_rows == 0:  # assume we are in the terminal
                                        # (why else = 0)
        (w, h) = get_terminal_size()
        self.w = w
        self.h = h
        if self.max_rows == 0:
            dot_row = 1
            prompt_row = 1
            if self.show_dimensions:
                show_dimension_rows = 3
            n_add_rows = (self.header + dot_row + show_dimension_rows +
                          prompt_row)
            # rows available to fill with actual data
            max_rows_adj = self.h - n_add_rows
            self.max_rows_adj = max_rows_adj

        # Format only rows and columns that could potentially fit the
        # screen
        if max_cols == 0 and len(self.frame.columns) > w:
            max_cols = w
        if max_rows == 0 and len(self.frame) > h:
            max_rows = h
Looks like it intended to do what you wanted, but was unfinished. It's checking max_cols against the number of columns, not the total width of the columns.
So you could either create a show_df function that would calculate the correct number of columns and show it in an option_context like pi2Squared's answer, or fix it here (and maybe submit a patch if you need it distributed).
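For reference, a rough sketch of such a show_df helper. This is my own guess at the calculation (terminal width from shutil, a couple of characters of padding per column), not anything pandas provides:
from shutil import get_terminal_size
import pandas as pd

def show_df(df, padding=2, reserve=20):
    """Print df truncated to however many columns fit the terminal width."""
    term_width, _ = get_terminal_size()
    budget = term_width - reserve          # leave room for the index and the '...' column
    n_fit, used = 0, 0
    for name in df.columns:
        width = len(str(name)) + padding   # rough printed width of one column
        if used + width > budget:
            break
        used += width
        n_fit += 1
    with pd.option_context('display.max_columns', max(n_fit, 2),
                           'expand_frame_repr', False):
        print(df)
Called as show_df(df), it keeps expand_frame_repr=False and lets max_columns do the truncation.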
As others have pointed out, Pandas itself seems to be bugged or badly designed here, so a workaround is required.
Most of the time this problem occurs with numerical columns, since numbers are relatively short. Pandas will split the column heading onto multiple lines if there are spaces in it, so you can "hack in" the correct behavior by inserting spaces into column headings for numerical columns when you display the dataframe. I have a one-liner to do this:
def colfix(df, L=5): return df.rename(columns=lambda x: ' '.join(x.replace('_', ' ')[i:i+L] for i in range(0,len(x),L)) if df[x].dtype in ['float64','int64'] else x )
To display your dataframe, simply type
colfix(your_df)
Note that the renaming does not permanently change the dataframe; it only adds spaces to the names for the purpose of displaying it that one time.
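For readability, here is the same one-liner written out as a sketch (same behavior: only numeric columns are touched, and underscores become spaces so the header can wrap):
def colfix(df, L=5):
    def wrap(name):
        # leave non-numeric columns alone; their values are usually wide anyway
        if df[name].dtype not in ['float64', 'int64']:
            return name
        flat = name.replace('_', ' ')
        # break the name into chunks of L characters separated by spaces
        return ' '.join(flat[i:i + L] for i in range(0, len(flat), L))
    return df.rename(columns=wrap)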
Results (in a Jupyter Notebook): the screenshots compare the output with and without colfix.
I am new to Pandas, so I wonder whether there is a better way to finish this task.
I have a data frame like the following format:
This is DNA simulation data from molecular dynamics.
And the data set is here: BPdata.csv
There are 1000 frames in total, and my purpose is to get the average of every 10 frames. So, in the end, I want the data to look like this:
Block Base1 Base2 Shear Stretch Stagger .....
1 1 66 XX XX XX
1 2 65 XX XX XX
... ... ... ... ... ...
1 33 34 XX XX XX
2 1 66 XX XX XX
2 2 65 XX XX XX
... ... ... ... ... ...
2 33 34 XX XX XX
3 1 66 XX XX XX
3 2 65 XX XX XX
... ... ... ... ... ...
3 33 34 XX XX XX
4 1 66 XX XX XX
4 2 65 XX XX XX
... ... ... ... ... ...
4 33 34 XX XX XX
where Block 1 represents the mean of frames 1 ~ 10 and Block 2 represents frames 11 ~ 20.
Although I think I could finish this task by carefully assigning the index of each row, I wonder whether there is a more convenient way. I have checked some web pages about the groupby functions in pandas, but there does not seem to be a built-in way to group every 10 rows to get a block average.
Thank you!
=============================== Update ==================================
Sorry for not being clear in the description of my purpose. I have figured out a way to do the task, and a sample output to better illustrate my purpose.
For double-stranded DNA, we know it is a double helix structure with AGCT, so Base1 means one base of the DNA and Base2 means the complementary base on the other strand. The two corresponding bases are linked together by hydrogen bonds.
like:
Base1 : AAAGGGCCCTTT
||||||||||||
Base2 : TTTCCCGGGAAA
So here in BPdata.csv, each combination of Base1 and Base2 means a pair of DNA bases.
In BPdata.csv, this is a 33-base-pair DNA simulated over different time frames, noted as 1, 2, 3, 4 ... 1000.
Then I want to group every 10 time frames together, like 1~10, 11~20, 21~30, ..., and in each group do the average for each base pair.
And here is the code I figured out:
# -*- coding: utf-8 -*-
import pandas as pd
'''
Data Input
'''
# Import CSV data to Python
BPdata = pd.read_csv("BPdata.csv", delim_whitespace = True, skip_blank_lines = False)
BPdata.rename(columns={'#Frame':'Frame'}, inplace=True)
'''
Data Processing
'''
# constant block average parameters
Interval20ns = 10
IntervalInBPdata = 34
# BPdataBlockAverageSummary
LEN_BPdata = len(BPdata)
# For Frame 1
i = 1
indexStarting = 0
indexEnding = 0
indexStarting = indexEnding
indexEnding = Interval20ns * IntervalInBPdata * i - 1
GPtemp = BPdata.loc[indexStarting : indexEnding]
GPtemp['Frame'] = str(i)
BPdata_blockOF1K_mean = GPtemp.groupby(['Frame','Base1','Base2']).mean()
BPdata_blockOF1K_mean.loc[len(BPdata_blockOF1K_mean)] = str(i)
# For Frame 2 and so on
i = i + 1
indexStarting = indexEnding + 1
indexEnding = Interval20ns * IntervalInBPdata * i - 1
while ( indexEnding <= LEN_BPdata - 1):
    GPtemp = BPdata.loc[indexStarting : indexEnding]
    GPtemp['Frame'] = str(i)
    meanTemp = GPtemp.groupby(['Frame','Base1','Base2']).mean()
    meanTemp.loc[len(meanTemp)] = str(i)
    BPdata_blockOF1K_mean = pd.concat([BPdata_blockOF1K_mean,meanTemp])
    i = i + 1
    indexStarting = indexEnding + 1
    indexEnding = Interval20ns * IntervalInBPdata * i - 1
And the result is something like this, which is what I wanted:
And here is the sample output, BPdataresult.csv
But so far I get these warnings:
/home/iphyer/Downloads/dataProcessing.py:62: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
  GPtemp['Frame'] = str(i)

(The same warning is raised for both GPtemp['Frame'] = str(i) assignments in the code.)
And here I wonder:
Is this warning serious?
Because of the groupby, the index of the data frame is now a combination of (Frame, Base1, Base2). How can I separate these back into ordinary columns, as in the original form, and use a Block index instead of #Frame?
Can I improve the code, or is there a more pandas-idiomatic way to do this task?
Best!
Grouping in pandas can be done in a variety of ways. One of those ways is to pass a series, so you could pass a series whose values label blocks of 10 rows. The solution works as follows:
import pandas as pd
import numpy as np
# create a dataframe with 1000 rows
df = pd.DataFrame(np.random.rand(1000, 1))
# create a series for grouping: 0 for the first 10 rows, 1 for the next 10, ...
groups_of_ten = pd.Series(np.repeat(range(int(len(df)/10)), 10))
# group the data
grouped = df.groupby(groups_of_ten)
# aggregate
grouped.agg('mean')
The grouping series looks like this on the inside:
In [21]: groups_of_ten.head(20)
Out[21]:
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
10 1
11 1
12 1
13 1
14 1
15 1
16 1
17 1
18 1
19 1
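Applied to BPdata, the same per-block grouping idea could look like the sketch below. This is my own adaptation, assuming the frame numbers in the Frame column run 1..1000 and that Base1/Base2 identify each base pair; reset_index also turns the group keys back into ordinary columns, which addresses the question about the MultiIndex:
import pandas as pd

BPdata = pd.read_csv('BPdata.csv', delim_whitespace=True, skip_blank_lines=False)
BPdata = BPdata.rename(columns={'#Frame': 'Frame'})

# Block 1 covers frames 1-10, Block 2 frames 11-20, and so on
BPdata['Block'] = (BPdata['Frame'] - 1) // 10 + 1

# average every numeric column within each block for each base pair,
# then turn the group keys back into ordinary columns
block_means = (BPdata.groupby(['Block', 'Base1', 'Base2'])
                     .mean()
                     .reset_index()
                     .drop('Frame', axis=1))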
I am trying to extend my current pattern to accommodate an extra condition of ± a percentage of the last value, rather than a strict match against the previous value.
data = np.array([[2,30],[2,900],[2,30],[2,30],[2,30],[2,1560],[2,30],
[2,300],[2,30],[2,450]])
df = pd.DataFrame(data)
df.columns = ['id','interval']
UPDATE 2 (id fix): Updated Data 2 with more data:
data2 = np.array([[2,30],[2,900],[2,30],[2,29],[2,31],[2,30],[2,29],[2,31],[2,1560],[2,30],[2,300],[2,30],[2,450], [3,40],[3,900],[3,40],[3,39],[3,41], [3,40],[3,39],[3,41] ,[3,1560],[3,40],[3,300],[3,40],[3,450]])
df2 = pd.DataFrame(data2)
df2.columns = ['id','interval']
for i, g in df.groupby([(df.interval != df.interval.shift()).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
results in [30,30,30]
However, I really want to catch near-number conditions, say when a number is within ±10% of the previous number.
So looking at df2, I would like to pick up the series [30, 29, 31].
for i, g in df2.groupby([(df2.interval != <???+- 10% magic ???>).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
UPDATE: Here is the end-of-line processing code, where I store the gathered lists in a dictionary with the id as the key.
leak_intervals = {}
final_leak_intervals = {}
serials = []
for i, g in df.groupby([(df.interval != df.interval.shift()).cumsum()]):
    if len(g.interval.tolist()) >= 3:
        print(g.interval.tolist())
        serial = g.id.values[0]
        if serial not in serials:
            serials.append(serial)
        if serial not in leak_intervals:
            leak_intervals[serial] = g.interval.tolist()
        else:
            leak_intervals[serial] = leak_intervals[serial] + (g.interval.tolist())
UPDATE:
In [116]: df2.groupby(df2.interval.pct_change().abs().gt(0.1).cumsum()) \
.filter(lambda x: len(x) >= 3)
Out[116]:
id interval
2 2 30
3 2 29
4 2 31
5 2 30
6 2 29
7 2 31
15 3 40
16 3 39
17 3 41
18 3 40
19 3 39
20 3 41
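One caveat with the grouping above: the cumsum runs over the whole interval column, so a run could in principle straddle two ids. A sketch of one way to restart runs whenever the id changes and then collect the results per id, as in your UPDATE (my own variation, not tested against your full data):
# a new run starts when the interval jumps by more than 10% or the id changes
jump = df2['interval'].pct_change().abs().gt(0.1)
new_id = df2['id'].ne(df2['id'].shift())
runs = (jump | new_id).cumsum()

matches = df2.groupby(runs).filter(lambda g: len(g) >= 3)

# gather the matching intervals into a dict keyed by id, like leak_intervals
leak_intervals = {sid: g['interval'].tolist() for sid, g in matches.groupby('id')}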
I created a pandas dataframe out of some StackOverflow posts and used lxml.etree to separate the code_blocks and the text_blocks. The code below shows the basic outline:
import lxml.etree
a1 = tokensentRDD.map(lambda (a,b): (a,''.join(map(str,b))))
a2 = a1.map(lambda (a,b): (a, b.replace("&lt;", "<")))
a3 = a2.map(lambda (a,b): (a, b.replace("&gt;", ">")))
def parsefunc(x):
    html = lxml.etree.HTML(x)
    code_block = html.xpath('//code/text()')
    text_block = html.xpath('// /text()')
    a4 = code_block
    a5 = len(code_block)
    a6 = text_block
    a7 = len(text_block)
    a8 = ''.join(map(str, text_block)).split(' ')
    a9 = len(a8)
    a10 = nltk.word_tokenize(''.join(map(str, text_block)))
    numOfI = 0
    numOfQue = 0
    numOfExclam = 0
    for x in a10:
        if x == 'I':
            numOfI += 1
        elif x == '?':
            numOfQue += 1
        elif x == '!':
            numOfExclam += 1
    return (a4, a5, a6, a7, a9, numOfI, numOfQue, numOfExclam)
a11 = a3.take(6)
a12 = map(lambda (a,b): (a, parsefunc(b)), a11)
columns = ['code_block', 'len_code', 'text_block', 'len_text', 'words#text_block', 'numOfI', 'numOfQ', 'numOfExclam']
index = map(lambda x:x[0], a12)
data = map(lambda x:x[1], a12)
df = pd.DataFrame(data = data, columns = columns, index = index)
df.index.name = 'Id'
df
code_block len_code text_block len_text words#text_block numOfI numOfQ numOfExclam
Id
4 [decimal 3 [I want to use a track-bar to change a form's ... 18 72 5 1 0
6 [div, ] 5 [I have an absolutely positioned , div, conta... 22 96 4 4 0
9 [DateTime] 1 [Given a , DateTime, representing a person's ... 4 21 2 2 0
11 [DateTime] 1 [Given a specific , DateTime, value, how do I... 12 24 2 1 0
I need to create a Spark DataFrame in order to apply machine learning algorithms to the output. I tried:
sqlContext.createDataFrame(df).show()
The error I receive is:
TypeError: not supported type: <class 'lxml.etree._ElementStringResult'>
Can someone tell me a proper way to convert a Pandas DataFrame into A Spark DataFrame?
Your problem is not related to Pandas. Both code_block (a4) and text_block (a6) contain lxml specific objects which cannot be encoded using SparkSQL types. Converting these to strings should be just enough.
a4 = [str(x) for x in code_block]
a6 = [str(x) for x in text_block]
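With those conversions applied inside parsefunc before the pandas DataFrame is built, the remaining step should just work; a small sketch, assuming sqlContext is already set up as in your code:
# code_block / text_block now hold plain Python strings (or lists of strings),
# so Spark can infer the schema
sdf = sqlContext.createDataFrame(df)
sdf.printSchema()
sdf.show(5)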