Initial Question:
I'm looping through a couple of thousand pickle files, each containing a Python Pandas DataFrame that varies in the number of rows (between approx. 600 and 1300) but not in the number of columns (636 to be exact). Then I transform them (exactly the same transformations for each) and append them to a csv file using the DataFrame.to_csv() method.
The to_csv code excerpt:
if picklefile == '0000.p':
dftemp.to_csv(finalnormCSVFile)
else:
dftemp.to_csv(finalnormCSVFile, mode='a', header=False)
What bothers me is that it starts off pretty fast but performance decreases exponentially. I kept a processing time log:
start: 2015-03-24 03:26:36.958058
2015-03-24 03:26:36.958058
count = 0
time: 0:00:00
2015-03-24 03:30:53.254755
count = 100
time: 0:04:16.296697
2015-03-24 03:39:16.149883
count = 200
time: 0:08:22.895128
2015-03-24 03:51:12.247342
count = 300
time: 0:11:56.097459
2015-03-24 04:06:45.099034
count = 400
time: 0:15:32.851692
2015-03-24 04:26:09.411652
count = 500
time: 0:19:24.312618
2015-03-24 04:49:14.519529
count = 600
time: 0:23:05.107877
2015-03-24 05:16:30.175175
count = 700
time: 0:27:15.655646
2015-03-24 05:47:04.792289
count = 800
time: 0:30:34.617114
2015-03-24 06:21:35.137891
count = 900
time: 0:34:30.345602
2015-03-24 06:59:53.313468
count = 1000
time: 0:38:18.175577
2015-03-24 07:39:29.805270
count = 1100
time: 0:39:36.491802
2015-03-24 08:20:30.852613
count = 1200
time: 0:41:01.047343
2015-03-24 09:04:14.613948
count = 1300
time: 0:43:43.761335
2015-03-24 09:51:45.502538
count = 1400
time: 0:47:30.888590
2015-03-24 11:09:48.366950
count = 1500
time: 1:18:02.864412
2015-03-24 13:02:33.152289
count = 1600
time: 1:52:44.785339
2015-03-24 15:30:58.534493
count = 1700
time: 2:28:25.382204
2015-03-24 18:09:40.391639
count = 1800
time: 2:38:41.857146
2015-03-24 21:03:19.204587
count = 1900
time: 2:53:38.812948
2015-03-25 00:00:05.855970
count = 2000
time: 2:56:46.651383
2015-03-25 03:53:05.020944
count = 2100
time: 3:52:59.164974
2015-03-25 05:02:16.534149
count = 2200
time: 1:09:11.513205
2015-03-25 06:07:32.446801
count = 2300
time: 1:05:15.912652
2015-03-25 07:13:45.075216
count = 2400
time: 1:06:12.628415
2015-03-25 08:20:17.927286
count = 2500
time: 1:06:32.852070
2015-03-25 09:27:20.676520
count = 2600
time: 1:07:02.749234
2015-03-25 10:35:01.657199
count = 2700
time: 1:07:40.980679
2015-03-25 11:43:20.788178
count = 2800
time: 1:08:19.130979
2015-03-25 12:53:57.734390
count = 2900
time: 1:10:36.946212
2015-03-25 14:07:20.936314
count = 3000
time: 1:13:23.201924
2015-03-25 15:22:47.076786
count = 3100
time: 1:15:26.140472
2015-03-25 19:51:10.776342
count = 3200
time: 4:28:23.699556
2015-03-26 03:06:47.372698
count = 3300
time: 7:15:36.596356
count = 3324
end of cycle: 2015-03-26 03:59:54.161842
end: 2015-03-26 03:59:54.161842
total duration: 2 days, 0:33:17.203784
Update #1:
I did as you suggested, @Alexander, but it certainly has to do with the to_csv() method:
start: 2015-03-26 05:18:25.948410
2015-03-26 05:18:25.948410
count = 0
time: 0:00:00
2015-03-26 05:20:30.425041
count = 100
time: 0:02:04.476631
2015-03-26 05:22:27.680582
count = 200
time: 0:01:57.255541
2015-03-26 05:24:26.012598
count = 300
time: 0:01:58.332016
2015-03-26 05:26:16.542835
count = 400
time: 0:01:50.530237
2015-03-26 05:27:58.063196
count = 500
time: 0:01:41.520361
2015-03-26 05:29:45.769580
count = 600
time: 0:01:47.706384
2015-03-26 05:31:44.537213
count = 700
time: 0:01:58.767633
2015-03-26 05:33:41.591837
count = 800
time: 0:01:57.054624
2015-03-26 05:35:43.963843
count = 900
time: 0:02:02.372006
2015-03-26 05:37:46.171643
count = 1000
time: 0:02:02.207800
2015-03-26 05:38:36.493399
count = 1100
time: 0:00:50.321756
2015-03-26 05:39:42.123395
count = 1200
time: 0:01:05.629996
2015-03-26 05:41:13.122048
count = 1300
time: 0:01:30.998653
2015-03-26 05:42:41.885513
count = 1400
time: 0:01:28.763465
2015-03-26 05:44:20.937519
count = 1500
time: 0:01:39.052006
2015-03-26 05:46:16.012842
count = 1600
time: 0:01:55.075323
2015-03-26 05:48:14.727444
count = 1700
time: 0:01:58.714602
2015-03-26 05:50:15.792909
count = 1800
time: 0:02:01.065465
2015-03-26 05:51:48.228601
count = 1900
time: 0:01:32.435692
2015-03-26 05:52:22.755937
count = 2000
time: 0:00:34.527336
2015-03-26 05:52:58.289474
count = 2100
time: 0:00:35.533537
2015-03-26 05:53:39.406794
count = 2200
time: 0:00:41.117320
2015-03-26 05:54:11.348939
count = 2300
time: 0:00:31.942145
2015-03-26 05:54:43.057281
count = 2400
time: 0:00:31.708342
2015-03-26 05:55:19.483600
count = 2500
time: 0:00:36.426319
2015-03-26 05:55:52.216424
count = 2600
time: 0:00:32.732824
2015-03-26 05:56:27.409991
count = 2700
time: 0:00:35.193567
2015-03-26 05:57:00.810139
count = 2800
time: 0:00:33.400148
2015-03-26 05:58:17.109425
count = 2900
time: 0:01:16.299286
2015-03-26 05:59:31.021719
count = 3000
time: 0:01:13.912294
2015-03-26 06:00:49.200303
count = 3100
time: 0:01:18.178584
2015-03-26 06:02:07.732028
count = 3200
time: 0:01:18.531725
2015-03-26 06:03:28.518541
count = 3300
time: 0:01:20.786513
count = 3324
end of cycle: 2015-03-26 06:03:47.321182
end: 2015-03-26 06:03:47.321182
total duration: 0:45:21.372772
And as requested, the source code:
import pickle
import pandas as pd
import numpy as np
from os import listdir
from os.path import isfile, join
from datetime import datetime
# Defining function to deep copy pandas data frame:
def very_deep_copy(self):
    return pd.DataFrame(self.values.copy(), self.index.copy(), self.columns.copy())
# Adding function to Dataframe module:
pd.DataFrame.very_deep_copy = very_deep_copy
#Define Data Frame Header:
head = [
'ConcatIndex', 'Concatenated String Index', 'FileID', ..., 'Attribute<autosave>', 'Attribute<bgcolor>'
]
exclude = [
'ConcatIndex', 'Concatenated String Index', 'FileID', ... , 'Real URL Array'
]
path = "./dataset_final/"
pickleFiles = [ f for f in listdir(path) if isfile(join(path,f)) ]
finalnormCSVFile = 'finalNormalizedDataFrame2.csv'
count = 0
start_time = datetime.now()
t1 = start_time
print("start: " + str(start_time) + "\n")
for picklefile in pickleFiles:
    if count % 100 == 0:
        t2 = datetime.now()
        print(str(t2))
        print('count = ' + str(count))
        print('time: ' + str(t2 - t1) + '\n')
        t1 = t2

    #DataFrame Manipulation:
    df = pd.read_pickle(path + picklefile)
    df['ConcatIndex'] = 100000*df.FileID + df.ID
    for i in range(0, len(df)):
        df.loc[i, 'Concatenated String Index'] = str(df['ConcatIndex'][i]).zfill(10)
    df.index = df.ConcatIndex

    #DataFrame Normalization:
    dftemp = df.very_deep_copy()
    for string in head:
        if string in exclude:
            if string != 'ConcatIndex':
                dftemp.drop(string, axis=1, inplace=True)
        else:
            if 'Real ' in string:
                max = pd.DataFrame.max(df[string.strip('Real ')])
            elif 'child' in string:
                max = pd.DataFrame.max(df[string.strip('child')+'desc'])
            else:
                max = pd.DataFrame.max(df[string])
            if max != 0:
                dftemp[string] = dftemp[string]/max
    dftemp.drop('ConcatIndex', axis=1, inplace=True)

    #Saving DataFrame in CSV:
    if picklefile == '0000.p':
        dftemp.to_csv(finalnormCSVFile)
    else:
        dftemp.to_csv(finalnormCSVFile, mode='a', header=False)

    count += 1

print('count = ' + str(count))
cycle_end_time = datetime.now()
print("end of cycle: " + str(cycle_end_time) + "\n")

end_time = datetime.now()
print("end: " + str(end_time))
print('total duration: ' + str(end_time - start_time) + '\n')
Update #2:
As suggested, I executed the command %prun %run "./DataSetNormalization.py" for the first couple of hundred pickle files and the result is as follows:
136373640 function calls (136342619 primitive calls) in 1018.769 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
220 667.069 3.032 667.069 3.032 {method 'close' of '_io.TextIOWrapper' objects}
1540 42.046 0.027 46.341 0.030 {pandas.lib.write_csv_rows}
219 34.886 0.159 34.886 0.159 {built-in method collect}
3520 16.782 0.005 16.782 0.005 {pandas.algos.take_2d_axis1_object_object}
78323 9.948 0.000 9.948 0.000 {built-in method empty}
25336892 9.645 0.000 12.635 0.000 {built-in method isinstance}
1433941 9.344 0.000 9.363 0.000 generic.py:1845(__setattr__)
221051/220831 7.387 0.000 119.767 0.001 indexing.py:194(_setitem_with_indexer)
723540 7.312 0.000 7.312 0.000 {method 'reduce' of 'numpy.ufunc' objects}
273414 7.137 0.000 20.642 0.000 internals.py:2656(set)
604245 6.846 0.000 6.850 0.000 {method 'copy' of 'numpy.ndarray' objects}
1760 6.566 0.004 6.566 0.004 {pandas.lib.isnullobj}
276274 5.315 0.000 5.315 0.000 {method 'ravel' of 'numpy.ndarray' objects}
1719244 5.264 0.000 5.266 0.000 {built-in method array}
1102450 5.070 0.000 29.543 0.000 internals.py:1804(make_block)
1045687 5.056 0.000 10.209 0.000 index.py:709(__getitem__)
1 4.718 4.718 1018.727 1018.727 DataSetNormalization.py:6(<module>)
602485 4.575 0.000 15.087 0.000 internals.py:2586(iget)
441662 4.562 0.000 33.386 0.000 internals.py:2129(apply)
272754 4.550 0.000 4.550 0.000 internals.py:1291(set)
220883 4.073 0.000 4.073 0.000 {built-in method charmap_encode}
4781222 3.805 0.000 4.349 0.000 {built-in method getattr}
52143 3.673 0.000 3.673 0.000 {built-in method truediv}
1920486 3.671 0.000 3.672 0.000 {method 'get_loc' of 'pandas.index.IndexEngine' objects}
1096730 3.513 0.000 8.370 0.000 internals.py:3035(__init__)
875899 3.508 0.000 14.458 0.000 series.py:134(__init__)
334357 3.420 0.000 3.439 0.000 {pandas.lib.infer_dtype}
2581268 3.419 0.000 4.774 0.000 {pandas.lib.values_from_object}
1102450 3.036 0.000 6.110 0.000 internals.py:59(__init__)
824856 2.888 0.000 45.749 0.000 generic.py:1047(_get_item_cache)
2424185 2.657 0.000 3.870 0.000 numeric.py:1910(isscalar)
273414 2.505 0.000 9.332 0.000 frame.py:2113(_sanitize_column)
1646198 2.491 0.000 2.880 0.000 index.py:698(__contains__)
879639 2.461 0.000 2.461 0.000 generic.py:87(__init__)
552988 2.385 0.000 4.451 0.000 internals.py:3565(_get_blkno_placements)
824856 2.349 0.000 51.282 0.000 frame.py:1655(__getitem__)
220831 2.224 0.000 21.670 0.000 internals.py:460(setitem)
326437 2.183 0.000 11.352 0.000 common.py:1862(_possibly_infer_to_datetimelike)
602485 2.167 0.000 16.974 0.000 frame.py:1982(_box_item_values)
602485 2.087 0.000 23.202 0.000 internals.py:2558(get)
770739 2.036 0.000 6.471 0.000 internals.py:1238(__init__)
276494 1.966 0.000 1.966 0.000 {pandas.lib.get_blkno_indexers}
10903876/10873076 1.935 0.000 1.972 0.000 {built-in method len}
220831 1.924 0.000 76.647 0.000 indexing.py:372(setter)
220 1.893 0.009 1.995 0.009 {built-in method load}
1920486 1.855 0.000 8.198 0.000 index.py:1173(get_loc)
112860 1.828 0.000 9.607 0.000 common.py:202(_isnull_ndarraylike)
602485 1.707 0.000 8.903 0.000 series.py:238(from_array)
875899 1.688 0.000 2.493 0.000 series.py:263(_set_axis)
3300 1.661 0.001 1.661 0.001 {method 'tolist' of 'numpy.ndarray' objects}
1102670 1.609 0.000 2.024 0.000 internals.py:108(mgr_locs)
4211850 1.593 0.000 1.593 0.000 {built-in method issubclass}
1335546 1.501 0.000 2.253 0.000 generic.py:297(_get_axis_name)
273414 1.411 0.000 37.866 0.000 frame.py:1994(__setitem__)
441662 1.356 0.000 7.884 0.000 indexing.py:982(_convert_to_indexer)
220831 1.349 0.000 131.331 0.001 indexing.py:95(__setitem__)
273414 1.329 0.000 23.170 0.000 generic.py:1138(_set_item)
326437 1.276 0.000 6.203 0.000 fromnumeric.py:2259(prod)
274734 1.271 0.000 2.113 0.000 shape_base.py:60(atleast_2d)
273414 1.242 0.000 34.396 0.000 frame.py:2072(_set_item)
602485 1.183 0.000 1.979 0.000 generic.py:1061(_set_as_cached)
934422 1.175 0.000 1.894 0.000 {method 'view' of 'numpy.ndarray' objects}
1540 1.144 0.001 58.217 0.038 format.py:1409(_save_chunk)
220831 1.144 0.000 9.198 0.000 indexing.py:139(_convert_tuple)
441662 1.137 0.000 3.036 0.000 indexing.py:154(_convert_scalar_indexer)
220831 1.087 0.000 1.281 0.000 arrayprint.py:343(array2string)
1332026 1.056 0.000 3.997 0.000 generic.py:310(_get_axis)
602485 1.046 0.000 9.949 0.000 frame.py:1989(_box_col_values)
220 1.029 0.005 1.644 0.007 internals.py:2429(_interleave)
824856 1.025 0.000 46.777 0.000 frame.py:1680(_getitem_column)
1491578 1.022 0.000 2.990 0.000 common.py:58(_check)
782616 1.010 0.000 3.513 0.000 numeric.py:394(asarray)
290354 0.988 0.000 1.386 0.000 internals.py:1950(shape)
220831 0.958 0.000 15.392 0.000 generic.py:2101(copy)
273414 0.940 0.000 1.796 0.000 indexing.py:1520(_convert_to_index_sliceable)
220831 0.920 0.000 1.558 0.000 common.py:1110(_possibly_downcast_to_dtype)
220611 0.914 0.000 0.914 0.000 {pandas.lib.is_bool_array}
498646 0.906 0.000 0.906 0.000 {method 'clear' of 'dict' objects}
715345 0.848 0.000 13.083 0.000 common.py:132(_isnull_new)
452882 0.824 0.000 1.653 0.000 index.py:256(__array_finalize__)
602485 0.801 0.000 0.801 0.000 internals.py:208(iget)
52583 0.748 0.000 2.038 0.000 common.py:1223(_fill_zeros)
606005 0.736 0.000 6.755 0.000 internals.py:95(make_block_same_class)
708971 0.732 0.000 2.156 0.000 internals.py:3165(values)
1760378 0.724 0.000 0.724 0.000 internals.py:2025(_get_items)
109560 0.720 0.000 6.140 0.000 nanops.py:152(_get_values)
220831 0.718 0.000 11.017 0.000 internals.py:2395(copy)
924669 0.712 0.000 1.298 0.000 common.py:2248(_get_dtype_type)
1515796 0.698 0.000 0.868 0.000 {built-in method hasattr}
220831 0.670 0.000 4.299 0.000 internals.py:435(copy)
875899 0.661 0.000 0.661 0.000 series.py:285(_set_subtyp)
220831 0.648 0.000 0.649 0.000 {method 'get_value' of 'pandas.index.IndexEngine' objects}
452882 0.640 0.000 0.640 0.000 index.py:218(_reset_identity)
715345 0.634 0.000 1.886 0.000 {pandas.lib.isscalar}
1980 0.626 0.000 1.172 0.001 internals.py:3497(_merge_blocks)
220831 0.620 0.000 2.635 0.000 common.py:1933(_is_bool_indexer)
272754 0.608 0.000 0.899 0.000 internals.py:1338(should_store)
220831 0.599 0.000 3.463 0.000 series.py:482(__getitem__)
498645 0.591 0.000 1.497 0.000 generic.py:1122(_clear_item_cache)
1119390 0.584 0.000 1.171 0.000 index.py:3936(_ensure_index)
220831 0.573 0.000 1.883 0.000 index.py:222(view)
814797 0.555 0.000 0.905 0.000 internals.py:3086(_values)
52583 0.543 0.000 15.545 0.000 ops.py:469(wrapper)
220831 0.536 0.000 3.760 0.000 internals.py:371(_try_cast_result)
228971 0.533 0.000 0.622 0.000 generic.py:1829(__getattr__)
769651 0.528 0.000 0.528 0.000 {built-in method min}
224351 0.509 0.000 2.030 0.000 generic.py:1099(_maybe_update_cacher)
...
I will rerun it for confirmation, but it looks like it certainly has something to do with pandas' to_csv() method, because most of the run time is spent on I/O and the csv writer. Why is it having this effect? Any suggestions?
Update #3:
Well, I did a full %prun test and indeed almost 90% of the time is spent in {method 'close' of '_io.TextIOWrapper' objects}. So I guess here's the problem... What do you guys think?
My questions here are:
What causes the decrease in performance here?
Does pandas.DataFrame.to_csv() in append mode load the whole file each time it writes to it?
Is there a way to enhance the process?
In this kind of situation you should profile your code (to see which function calls are taking the most time); that way you can check empirically that it is indeed slow in to_csv rather than elsewhere...
From looking at your code: firstly, there's a lot of copying here and a lot of looping (not enough vectorization)... every time you see looping, look for a way to remove it. Secondly, when you use things like zfill, I wonder if you want a fixed-width output format rather than to_csv?
Some sanity testing: are some files significantly bigger than others (which could lead to you hitting swap)? Are you sure the largest files are only 1200 rows? Have you checked this, e.g. using wc -l?
IMO I think it's unlikely to be garbage collection... (as was suggested in the other answer).
Here are a few improvements to your code, which should improve the runtime.
Since the columns are fixed, I would extract the column calculations and vectorize the real, child and other normalizations. Use apply rather than iterating (for zfill).
columns_to_drop = set(head) & set(exclude)  # maybe also - ['ConcatIndex']
remaining_cols = set(head) - set(exclude)

real_cols = [r for r in remaining_cols if 'Real ' in r]
real_cols_suffix = [r.strip('Real ') for r in real_cols]
remaining_cols = remaining_cols - set(real_cols)

child_cols = [r for r in remaining_cols if 'child' in r]
child_cols_desc = [r.strip('child') + 'desc' for r in child_cols]
remaining_cols = remaining_cols - set(child_cols)

for count, picklefile in enumerate(pickleFiles):
    if count % 100 == 0:
        t2 = datetime.now()
        print(str(t2))
        print('count = ' + str(count))
        print('time: ' + str(t2 - t1) + '\n')
        t1 = t2

    #DataFrame Manipulation:
    df = pd.read_pickle(path + picklefile)
    df['ConcatIndex'] = 100000*df.FileID + df.ID
    # use apply here rather than iterating
    df['Concatenated String Index'] = df['ConcatIndex'].apply(lambda x: str(x).zfill(10))
    df.index = df.ConcatIndex

    #DataFrame Normalization:
    dftemp = df.very_deep_copy()  # don't *think* you need this

    # drop all excludes
    dftemp.drop(list(columns_to_drop), axis=1, inplace=True)

    # normalize real cols
    m = dftemp[real_cols_suffix].max()
    m.index = real_cols
    dftemp[real_cols] = dftemp[real_cols] / m

    # normalize child cols
    m = dftemp[child_cols_desc].max()
    m.index = child_cols
    dftemp[child_cols] = dftemp[child_cols] / m

    # normalize remaining
    remaining = list(remaining_cols)
    dftemp[remaining] = dftemp[remaining] / dftemp[remaining].max()

    # if this case is important then discard the rows of m whose .max() is 0
    #if max != 0:
    #    dftemp[string] = dftemp[string]/max

    # ConcatIndex is dropped earlier; if you need it, subtract ['ConcatIndex'] from columns_to_drop
    # dftemp.drop('ConcatIndex', axis=1, inplace=True)

    #Saving DataFrame in CSV:
    if picklefile == '0000.p':
        dftemp.to_csv(finalnormCSVFile)
    else:
        dftemp.to_csv(finalnormCSVFile, mode='a', header=False)
As a point of style I would probably choose to wrap each of these parts into functions; this will also mean more things can be gc'd, if that really was the issue...
Another option which would be faster is to use pytables (HDFStore) if you don't need the resulting output to be csv (but I expect you do)... see the sketch below.
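A minimal sketch of that alternative, assuming the CSV output is optional; 'normalized.h5', the key 'normalized' and the transform() helper are hypothetical stand-ins for the normalization above:
import pandas as pd

with pd.HDFStore('normalized.h5') as store:                     # PyTables-backed store
    for picklefile in pickleFiles:
        dftemp = transform(pd.read_pickle(path + picklefile))   # transform() = the normalization above
        store.append('normalized', dftemp)                      # rows accumulate under one key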
The best thing to do by far is to profile your code, e.g. with %prun in IPython (see http://pynash.org/2013/03/06/timing-and-profiling.html). Then you can see whether it definitely is to_csv and specifically where (which line of your code and which lines of pandas code).
Ah ha, I'd missed that you are appending all these to a single csv file. And in your prun it shows most of the time is spent in close, so let's keep the file open:
# outside of the for loop (so the file is opened and closed only once)
f = open(finalnormCSVFile, 'w')

...

for picklefile in ...
    if picklefile == '0000.p':
        dftemp.to_csv(f)
    else:
        dftemp.to_csv(f, mode='a', header=False)

...

f.close()
Each time the file is opened, before it can be appended to it needs to seek to the end before writing. It could be that this is the expensive part (I don't see why it should be that bad, but keeping the file open removes the need to do so).
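A hedged variant of the same idea using a context manager, so the handle is also closed if the loop raises (the read/normalize step is elided):
with open(finalnormCSVFile, 'w') as f:
    for count, picklefile in enumerate(pickleFiles):
        # ... read the pickle and build dftemp as above ...
        dftemp.to_csv(f, header=(count == 0))  # write the header only for the first file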
My guess would be that it comes from the very_deep_copy you are doing. Did you check the memory usage over time? It is possible that the memory is not freed correctly.
If that is the problem, you could do one of the following:
1) Avoid the copying altogether (better performance-wise).
2) Force a garbage collection using gc.collect() once in a while.
See "Python garbage collection" for a probably related issue, and this article for an introduction about garbage collection in python.
Edit:
A solution to remove the copy would be to:
1) store the normalizing constant for each column before normalizing.
2) drop the columns you do not need after the normalization.
# Get the normalizing constant for each column.
max = {}
for string in head:
    if string not in exclude:
        if 'Real ' in string:
            max[string] = df[string.strip('Real ')].max()
        elif 'child' in string:
            max[string] = df[string.strip('child')+'desc'].max()
        else:
            max[string] = df[string].max()

# Actual normalization, each column is divided by
# its constant if possible.
for key, value in max.items():
    if value != 0:
        df[key] /= value

# Drop the excluded columns
df.drop(exclude, axis=1, inplace=True)
Related
NumPy version: 1.14.5
Purpose of the 'foo' function:
Finding the Euclidean distance between arrays of shape (1,512), which represent facial features.
Issue:
The foo function takes ~223.32 ms, but after that, some background operations related to NumPy take 170 seconds for some reason.
Question:
Is keeping arrays in dictionaries and iterating over them a very dangerous usage of NumPy arrays?
Request for Advice:
When I keep the arrays stacked and separate from the dict, the Euclidean distance calculation takes half the time (~120ms instead of ~250ms), but overall performance doesn't change much for some reason. Allocating new arrays and stacking them may have countered the benefits of operating on one bigger array.
I am open to any advice.
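For reference, a minimal sketch (using the names from the code below; not part of the original post) of the "stacked" variant described above: keep the (1, 512) embeddings in a single (n, 512) array and compute every distance with one vectorized call.
ids = list(merged_faces_rec.keys())
stacked = np.vstack([merged_faces_rec[uid][0] for uid in ids])  # shape (n, 512)
dists = np.linalg.norm(stacked - face, axis=1)                  # one Euclidean distance per stored face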
Code:
import numpy as np
import time
import uuid
import random
from funcy import print_durations
@print_durations
def foo(merged_faces_rec, face):
    t = time.time()
    for uid, feature_list in merged_faces_rec.items():
        dist = np.linalg.norm(np.subtract(feature_list[0], face))
    print("foo inside : ", time.time()-t)
rand_age = lambda : random.choice(["0-18", "18-35", "35-55", "55+"])
rand_gender = lambda : random.choice(["Erkek", "Kadin"])
rand_emo = lambda : random.choice(["happy", "sad", "neutral", "scared"])
date_list = []
emb = lambda : np.random.rand(1, 512)
def generate_faces_rec(d, n=12000):
    for _ in range(n):
        d[uuid.uuid4().hex] = [emb(), rand_gender(), rand_age(), rand_emo(), date_list]
faces_rec1 = dict()
generate_faces_rec(faces_rec1)
faces_rec2 = dict()
generate_faces_rec(faces_rec2)
faces_rec3 = dict()
generate_faces_rec(faces_rec3)
faces_rec4 = dict()
generate_faces_rec(faces_rec4)
faces_rec5 = dict()
generate_faces_rec(faces_rec5)
merged_faces_rec = dict()
st = time.time()
merged_faces_rec.update(faces_rec1)
merged_faces_rec.update(faces_rec2)
merged_faces_rec.update(faces_rec3)
merged_faces_rec.update(faces_rec4)
merged_faces_rec.update(faces_rec5)
t2 = time.time()
print("updates: ", t2-st)
face = list(merged_faces_rec.values())[0][0]
t3 = time.time()
print("face: ", t3-t2)
t4 = time.time()
foo(merged_faces_rec, face)
t5 = time.time()
print("foo: ", t5-t4)
Result:
Computations between t4 and t5 took 168 seconds.
updates: 0.00468754768371582
face: 0.0011434555053710938
foo inside : 0.2232837677001953
223.32 ms in foo({'d02d46999aa145be8116..., [[0.96475353 0.8055263...)
foo: 168.42408967018127
cProfile
python3 -m cProfile -s tottime test.py
cProfile Result:
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
30720512 44.991 0.000 85.425 0.000 arrayprint.py:888(__call__)
36791296 42.447 0.000 42.447 0.000 {built-in method numpy.core.multiarray.dragon4_positional}
30840514/60001 36.154 0.000 149.749 0.002 arrayprint.py:659(recurser)
24649728 25.967 0.000 25.967 0.000 {built-in method numpy.core.multiarray.dragon4_scientific}
30720512 20.183 0.000 26.420 0.000 arrayprint.py:636(_extendLine)
10 12.281 1.228 12.281 1.228 {method 'sub' of '_sre.SRE_Pattern' objects}
60001 11.434 0.000 79.370 0.001 arrayprint.py:804(fillFormat)
228330011/228329975 10.270 0.000 10.270 0.000 {built-in method builtins.len}
204081 4.815 0.000 16.469 0.000 {built-in method builtins.max}
18431577 4.624 0.000 21.742 0.000 arrayprint.py:854(<genexpr>)
18431577 4.453 0.000 28.627 0.000 arrayprint.py:859(<genexpr>)
30720531 3.987 0.000 3.987 0.000 {method 'split' of 'str' objects}
12348936 3.012 0.000 13.873 0.000 arrayprint.py:829(<genexpr>)
12348936 3.007 0.000 17.955 0.000 arrayprint.py:832(<genexpr>)
18431577 2.179 0.000 2.941 0.000 arrayprint.py:863(<genexpr>)
18431577 2.124 0.000 2.870 0.000 arrayprint.py:864(<genexpr>)
12348936 1.625 0.000 3.180 0.000 arrayprint.py:833(<genexpr>)
12348936 1.468 0.000 1.992 0.000 arrayprint.py:834(<genexpr>)
12348936 1.433 0.000 1.922 0.000 arrayprint.py:844(<genexpr>)
12348936 1.432 0.000 1.929 0.000 arrayprint.py:837(<genexpr>)
12324864 1.074 0.000 1.074 0.000 {method 'partition' of 'str' objects}
6845518 0.761 0.000 0.761 0.000 {method 'rstrip' of 'str' objects}
60001 0.747 0.000 80.175 0.001 arrayprint.py:777(__init__)
2 0.637 0.319 245.563 122.782 debug.py:237(smart_repr)
120002 0.573 0.000 0.573 0.000 {method 'reduce' of 'numpy.ufunc' objects}
60001 0.421 0.000 231.153 0.004 arrayprint.py:436(_array2string)
60000 0.370 0.000 0.370 0.000 {method 'rand' of 'mtrand.RandomState' objects}
60000 0.303 0.000 232.641 0.004 arrayprint.py:1334(array_repr)
60001 0.274 0.000 232.208 0.004 arrayprint.py:465(array2string)
60001 0.261 0.000 80.780 0.001 arrayprint.py:367(_get_format_function)
120008 0.255 0.000 0.611 0.000 numeric.py:2460(seterr)
Update to Clarify the Question
This is the part that has the bug. Something behind the scenes causes the program to take too long. Is it something to do with the garbage collector, or just a weird NumPy bug? I don't have any clue.
t6 = time.time()
foo1(big_array, face) # 223.32ms
t7 = time.time()
print("foo1 : ", t7-t6) # foo1 : 170 seconds
I am facing a performance issue with a pandas rolling (expanding) z-score calculation over a 10-year history of records. It is too slow:
For the z-score of a single recent day, it needs 17 seconds.
To calculate the whole history, it needs around 30 minutes (I have already resampled this history to weekly level to downsize the total number of records).
If you have any advice to speed up my lastz function, please feel free to share your idea.
Here are the details.
1. Data set: a 10-year stock record which has been resampled to balance size and accuracy.
Total size is (207376, 8), which covers about 500 indices over the last 10 years. Here is a sample:
                         Close      PB1      PB2        PE1        PE2  TurnoverValue   TurnoverVol       ROE
ticker tradeDate
000001 2007-01-07  2678.526489  3.38135  2.87570  34.423700  61.361549   7.703712e+10  1.131558e+10  0.098227
       2007-01-14  2755.759814  3.45878  3.09090  35.209019  66.407800   7.897185e+10  1.116473e+10  0.098236
       2007-01-21  2796.761572  3.49394  3.31458  35.561800  70.449658   8.416415e+10  1.129387e+10  0.098250
I want to analyze how the z-score changes over the history and to forecast into the future. So, the lastz function is defined as below.
The function that needs speeding up:
from scipy import stats  # needed for stats.zmap below

ts_start = pd.to_datetime("20180831")

#numba.jit
def lastz(x):
    if x.index.max()[1] < ts_start:
        return np.nan
    else:
        freedom = 1  # it is a sample, so the sample std degrees of freedom should not be 0 but 1
        nlimit_interpolate = int(len(x)/100)  # 1% fill allowed
        #print(nlimit_interpolate, len(x))
        x = x.interpolate(limit=nlimit_interpolate+1)  # plus 1 in case of 0 or minus
        x = x.loc[x.notnull()]
        Arry = x.values
        zscore = stats.zmap(Arry[-1], Arry, ddof=freedom)
        return zscore
weekly = weekly.sort_index()
%prun -s cumtime result = weekly.groupby(level="ticker").agg(lastz)
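As a side note, a minimal illustration (with made-up numbers, not the data above) of what stats.zmap returns here: the z-score of the last observation relative to the whole column.
from scipy import stats
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
print(stats.zmap(a[-1], a, ddof=1))  # (4 - mean(a)) / std(a, ddof=1) ≈ 1.16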
Here are the prun results for a single call:
13447048 function calls (13340521 primitive calls) in 17.183 seconds
Ordered by: cumulative time
  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
       1    0.000    0.000   17.183   17.183 {built-in method builtins.exec}
       1    0.000    0.000   17.183   17.183 <string>:1(<module>)
       1    0.000    0.000   17.176   17.176 groupby.py:4652(aggregate)
       1    0.000    0.000   17.176   17.176 groupby.py:4086(aggregate)
       1    0.000    0.000   17.176   17.176 base.py:562(_aggregate_multiple_funcs)
    16/8    0.000    0.000   17.171    2.146 groupby.py:3471(aggregate)
       8    0.000    0.000   17.171    2.146 groupby.py:3513(_aggregate_multiple_funcs)
       8    0.000    0.000   17.147    2.143 groupby.py:1060(_python_agg_general)
       8    0.000    0.000   17.145    2.143 groupby.py:2668(agg_series)
       8    0.172    0.022   17.145    2.143 groupby.py:2693(_aggregate_series_pure_python)
    4400    0.066    0.000   15.762    0.004 groupby.py:1062(<lambda>)
    4400    0.162    0.000   14.255    0.003 <ipython-input-10-fdb784c8abd8>:15(lastz)
    4400    0.035    0.000    8.982    0.002 base.py:807(max)
    4400    0.070    0.000    7.955    0.002 multi.py:807(values)
    4400    0.017    0.000    6.406    0.001 datetimes.py:976(astype)
    4400    0.007    0.000    6.316    0.001 datetimelike.py:1130(astype)
    4400    0.030    0.000    6.301    0.001 datetimelike.py:368(_box_values_as_index)
    4400    0.009    0.000    5.613    0.001 datetimelike.py:362(_box_values)
    4400    0.860    0.000    5.602    0.001 {pandas._libs.lib.map_infer}
 1659008    4.278    0.000    4.741    0.000 datetimes.py:606(<lambda>)
    4328    0.096    0.000    1.774    0.000 generic.py:5980(interpolate)
    4336    0.015    0.000    1.696    0.000 indexing.py:1463(__getitem__)
    4328    0.028    0.000    1.675    0.000 indexing.py:1854(_getitem_axis)
I was wondering whether the datetime comparison is called too frequently and whether there is a better method to skip results that have already been calculated. I calculate the result weekly, so last week's data is already on hand and does not need to be calculated again. index.max()[1] is used to check whether the dataset is later than a certain day: if newer, calculate; otherwise, just return NaN.
If I use rolling or expanding mode, half an hour to 2 hours is needed to get the result.
I would appreciate any idea or clue to speed up the function.
timeit result of different index method speed in pandas multiindex
I changed the index selection method to save 6 seconds on each single calculation.
However, the total running time is still too long to accept. I need your clue to optimize it.
I did some diagnosis and found that on htop:
python save_to_db.py takes 86% of the CPU
postgres: mydb mydb localhost idle in transaction takes 16% of the CPU.
My code for save_to_db.py looks something like:
import datetime
import django
import os
import sys
import json
import itertools
import cProfile
# setting up standalone django environment
...
from django.db import transaction
from xxx.models import File
INPUT_FILE = "xxx"
with open("xxx", "r") as f:
volume_name = f.read().strip()
def todate(seconds):
    return datetime.datetime.fromtimestamp(seconds)

@transaction.atomic
def batch_save_files(files, volume_name):
    for jf in files:
        metadata = files[jf]
        f = File(xxx=jf, yyy=todate(metadata[0]), zzz=todate(metadata[1]), vvv=metadata[2], www=volume_name)
        f.save()
with open(INPUT_FILE, "r") as f:
    dirdump = json.load(f)

timestamp = dirdump["curtime"]
files = {k : dirdump["files"][k] for k in list(dirdump["files"].keys())[:1000000]}
cProfile.run('batch_save_files(files, volume_name)')
And the respective cProfile dump (I only kept the entries with large cumtime):
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 881.336 881.336 <string>:1(<module>)
1000000 5.325 0.000 844.553 0.001 base.py:655(save)
1000000 14.574 0.000 834.125 0.001 base.py:732(save_base)
1000000 10.108 0.000 800.494 0.001 base.py:795(_save_table)
1000000 5.265 0.000 720.608 0.001 base.py:847(_do_update)
1000000 4.522 0.000 446.781 0.000 compiler.py:1038(execute_sql)
1000000 23.669 0.000 196.273 0.000 compiler.py:1314(as_sql)
1000000 7.473 0.000 458.064 0.000 compiler.py:1371(execute_sql)
1 0.000 0.000 881.336 881.336 contextlib.py:49(inner)
1000000 7.370 0.000 62.090 0.000 lookups.py:150(process_lhs)
1000000 3.907 0.000 81.685 0.000 lookups.py:159(as_sql)
1000000 3.251 0.000 44.679 0.000 lookups.py:74(process_lhs)
1000000 3.594 0.000 53.745 0.000 manager.py:81(manager_method)
1000000 19.855 0.000 106.487 0.000 query.py:1117(build_filter)
1000000 5.523 0.000 161.104 0.000 query.py:1241(add_q)
1000000 10.684 0.000 152.080 0.000 query.py:1258(_add_q)
1000000 7.448 0.000 513.984 0.001 query.py:697(_update)
1000000 2.221 0.000 201.359 0.000 query.py:831(filter)
1000000 5.371 0.000 199.138 0.000 query.py:845(_filter_or_exclude)
1 7.982 7.982 881.329 881.329 save_to_db.py:47(batch_save_files)
1000000 1.834 0.000 204.064 0.000 utils.py:67(execute)
1000000 3.099 0.000 202.231 0.000 utils.py:73(_execute_with_wrappers)
1000000 4.306 0.000 199.131 0.000 utils.py:79(_execute)
1000000 10.830 0.000 222.880 0.000 utils.py:97(execute)
2/1 0.000 0.000 881.336 881.336 {built-in method builtins.exec}
1000001 189.750 0.000 193.764 0.000 {method 'execute' of 'psycopg2.extensions.cursor' objects}
Running python save_to_db.py takes 14 minutes, at roughly 1000 inserts/sec. This is fairly slow.
My schema for File looks like:
xxx TEXT UNIQUE NOT NULL PRIMARY KEY
yyy DATETIME
zzz DATETIME
vvv INTEGER
www TEXT
I can't seem to figure out how to speed this process up. Is there some way of doing this that I'm not aware of? Currently I index everything, but I would be very surprised if that's the main bottleneck.
Thank you!
You can use bulk_create:
objs = [
    File(
        xxx=jf,
        yyy=todate(metadata[0]),
        zzz=todate(metadata[1]),
        vvv=metadata[2],
        www=volume_name
    )
    for jf, metadata in files.items()
]
filelist = File.objects.bulk_create(objs)
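If building a million model instances in one call is a memory concern, bulk_create also accepts a batch_size argument (the chunk size below is an arbitrary choice):
filelist = File.objects.bulk_create(objs, batch_size=10000)  # insert in chunks of 10,000 rows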
Using numpy.reshape helped a lot and using map helped a little. Is it possible to speed this up some more?
import pydicom
import numpy as np
import cProfile
import pstats
def parse_coords(contour):
    """Given a contour from a DICOM ROIContourSequence, returns coordinates
    [loop][[x0, x1, x2, ...][y0, y1, y2, ...][z0, z1, z2, ...]]"""
    if not hasattr(contour, "ContourSequence"):
        return []  # empty structure

    def _reshape_contour_data(loop):
        return np.reshape(np.array(loop.ContourData),
                          (3, len(loop.ContourData) // 3),
                          order='F')

    return list(map(_reshape_contour_data, contour.ContourSequence))

def profile_load_contours():
    rs = pydicom.dcmread('RS.gyn1.dcm')
    structs = [parse_coords(contour) for contour in rs.ROIContourSequence]

cProfile.run('profile_load_contours()', 'prof.stats')
p = pstats.Stats('prof.stats')
p.sort_stats('cumulative').print_stats(30)
Using a real structure set exported from Varian Eclipse.
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 12.165 12.165 {built-in method builtins.exec}
1 0.151 0.151 12.165 12.165 <string>:1(<module>)
1 0.000 0.000 12.014 12.014 load_contour_time.py:19(profile_load_contours)
1 0.000 0.000 11.983 11.983 load_contour_time.py:21(<listcomp>)
56 0.009 0.000 11.983 0.214 load_contour_time.py:7(parse_coords)
50745/33837 0.129 0.000 11.422 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:455(__getattr__)
50741/33825 0.152 0.000 10.938 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:496(__getitem__)
16864 0.069 0.000 9.839 0.001 load_contour_time.py:12(_reshape_contour_data)
16915 0.101 0.000 9.780 0.001 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataelem.py:439(DataElement_from_raw)
16915 0.052 0.000 9.300 0.001 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:320(convert_value)
16864 0.038 0.000 7.099 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:89(convert_DS_string)
16870 0.042 0.000 7.010 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/valuerep.py:495(MultiString)
16908 1.013 0.000 6.826 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/multival.py:29(__init__)
3004437 3.013 0.000 5.577 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/multival.py:42(number_string_type_constructor)
3038317/3038231 1.037 0.000 3.171 0.000 {built-in method builtins.hasattr}
Much of the time is in convert_DS_string. Is it possible to make it faster? I guess part of the problem is that the coordinates are not stored very efficiently in the DICOM file.
EDIT:
As a way of avoiding the loop at the end of MultiVal.__init__ I am wondering about getting the raw double string of each ContourData and using numpy.fromstring on it. However, I have not been able to get the raw double string.
Eliminating the loop in MultiVal.__init__ and using numpy.fromstring provides more than a 4x speedup. I will post on the pydicom GitHub to see if there is some interest in taking this into the library code. It is a little ugly. I would welcome advice on further improvement.
import pydicom
import numpy as np
import cProfile
import pstats
def parse_coords(contour):
    """Given a contour from a DICOM ROIContourSequence, returns coordinates
    [loop][[x0, x1, x2, ...][y0, y1, y2, ...][z0, z1, z2, ...]]"""
    if not hasattr(contour, "ContourSequence"):
        return []  # empty structure
    cd_tag = pydicom.tag.Tag(0x3006, 0x0050)  # ContourData tag

    def _reshape_contour_data(loop):
        val = super(loop.__class__, loop).__getitem__(cd_tag).value
        try:
            double_string = val.decode(encoding='utf-8')
            double_vec = np.fromstring(double_string, dtype=float, sep=chr(92))  # chr(92) is '\', the DICOM multi-value separator
        except AttributeError:  # 'MultiValue' has no 'decode' (bytes does)
            # It's already been converted to doubles and cached
            double_vec = loop.ContourData
        return np.reshape(np.array(double_vec),
                          (3, len(double_vec) // 3),
                          order='F')

    return list(map(_reshape_contour_data, contour.ContourSequence))

def profile_load_contours():
    rs = pydicom.dcmread('RS.gyn1.dcm')
    structs = [parse_coords(contour) for contour in rs.ROIContourSequence]

profile_load_contours()

cProfile.run('profile_load_contours()', 'prof.stats')
p = pstats.Stats('prof.stats')
p.sort_stats('cumulative').print_stats(15)
Result
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 2.800 2.800 {built-in method builtins.exec}
1 0.017 0.017 2.800 2.800 <string>:1(<module>)
1 0.000 0.000 2.783 2.783 load_contour_time3.py:29(profile_load_contours)
1 0.000 0.000 2.761 2.761 load_contour_time3.py:31(<listcomp>)
56 0.006 0.000 2.760 0.049 load_contour_time3.py:9(parse_coords)
153/109 0.001 0.000 2.184 0.020 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:455(__getattr__)
149/97 0.001 0.000 2.182 0.022 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:496(__getitem__)
51 0.000 0.000 2.178 0.043 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataelem.py:439(DataElement_from_raw)
51 0.000 0.000 2.177 0.043 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:320(convert_value)
44 0.000 0.000 2.176 0.049 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/values.py:255(convert_SQ)
44 0.035 0.001 2.176 0.049 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:427(read_sequence)
152/66 0.000 0.000 2.171 0.033 {built-in method builtins.hasattr}
16920 0.147 0.000 1.993 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:452(read_sequence_item)
16923 0.116 0.000 1.267 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/filereader.py:365(read_dataset)
84616 0.113 0.000 0.699 0.000 /home/cf/python/venv/lib/python3.5/site-packages/pydicom/dataset.py:960(__setattr__)
I'm trying to profile a few lines of Pandas code, and when I run %prun i'm finding most of my time is taken by {isinstance}. This seems to happen a lot -- can anyone suggest what that means and, for bonus points, suggest a way to avoid it?
This isn't meant to be application specific, but here's a thinned out version of the code if that's important:
def flagOtherGroup(df):
    try: mostUsed0 = df[df.subGroupDummy == 0].siteid.iloc[0]
    except: mostUsed0 = -1
    try: mostUsed1 = df[df.subGroupDummy == 1].siteid.iloc[0]
    except: mostUsed1 = -1
    df['mostUsed'] = 0
    df.loc[(df.subGroupDummy == 0) & (df.siteid == mostUsed1), 'mostUsed'] = 1
    df.loc[(df.subGroupDummy == 1) & (df.siteid == mostUsed0), 'mostUsed'] = 1
    return df[['mostUsed']]
%prun -l15 temp = test.groupby('userCode').apply(flagOtherGroup)
And top lines of prun:
Ordered by: internal time
List reduced from 531 to 15 due to restriction <15>
ncalls tottime percall cumtime percall filename:lineno(function)
834472 1.908 0.000 2.280 0.000 {isinstance}
497048/395400 1.192 0.000 1.572 0.000 {len}
32722 0.879 0.000 4.479 0.000 series.py:114(__init__)
34444 0.613 0.000 1.792 0.000 internals.py:3286(__init__)
25990 0.568 0.000 0.568 0.000 {method 'reduce' of 'numpy.ufunc' objects}
82266/78821 0.549 0.000 0.744 0.000 {numpy.core.multiarray.array}
42201 0.544 0.000 1.195 0.000 internals.py:62(__init__)
42201 0.485 0.000 1.812 0.000 internals.py:2015(make_block)
166244 0.476 0.000 0.615 0.000 {getattr}
4310 0.455 0.000 1.121 0.000 internals.py:2217(_rebuild_blknos_and_blklocs)
12054 0.417 0.000 2.134 0.000 internals.py:2355(apply)
9474 0.385 0.000 1.284 0.000 common.py:727(take_nd)
isinstance, len and getattr are just built-in functions. There are a huge number of calls to the isinstance() function here; it is not that the call itself takes a lot of time, but that the function was called 834472 times.
Presumably it is the pandas code that uses it.