I have a program with a function that needs to open big pickle files (a few GB), look at the obtained dictionary (dict), and return a partial view of it (a few elements). Curiously, the large amount of data opened by the function remains in memory.
So I did a few tests with the following code:
import numpy as np

def test_mem_1():
    data = np.random.random((2**27)) #1 GB
    input("Data generated, awaiting input to continue")
    return 4

def test_mem_2():
    keys = list(range(100000))
    lvls = list(range(10))
    data = {}
    for k in keys:
        data[k] = {}
        for lvl in lvls:
            data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
    input("Data generated, awaiting input to continue")
    data = None
    return 4

if __name__ == "__main__":
    a = test_mem_1()
    input("Tested mem 1, continue to test mem 2")
    a = test_mem_2() # Memory usage falls from 995 MB inside test_mem_2 to 855 MB when returned
    input("Finished")
    exit()
When running this experiment, the first test allocates 1 GB, and this data is freed as soon as the function returns. The second test (working with a dict), however, first allocates 995 MB; then, when the function returns, only 140 MB are freed, leaving a memory footprint of 855 MB after test_mem_2.
What is happening here? How can I free this memory?
P.S.
I have tried deleting the data in test_mem_2 in several ways: not doing anything, using "del", assigning a new dict, and (as in this example) assigning the reference to None.
Answer after comment discussion.
Memory management is handled by Python itself using garbage collection. Normally you should not touch this at all. Garbage collection is automagic in Python; unless you actually have a good reason to mess with it, don't.
However, you can force garbage collection, which can be useful if you are dealing with a system with limited resources, for example.
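For reference, forcing a collection is just one call (a minimal sketch):
import gc

# Ask the collector to run a full collection right now.
# It returns the number of unreachable objects it found.
unreachable = gc.collect()
print(f"gc.collect() found {unreachable} unreachable objects")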
I've combined your code with a function that reports memory usage, which I've shamelessly stolen from this excellent answer, and I implemented the most basic garbage collection.
Running loopity() multiple times, I haven't had it crash yet.
Note that I did add a data = None in test_mem_1().
file: memleak.py
import numpy as np
import sys
import gc
import tracemalloc
import linecache
import os

tracemalloc.start()

def display_top(snapshot, key_type='lineno', limit=3, where=''):
    #
    #
    # Shamelessly stolen from:
    # https://stackoverflow.com/a/45679009/9267296
    #
    # I replaced all old string formatting with f-strings
    #
    #
    print('======================================================================')
    if where != '':
        print(f'Printing stats:\n {where}')
        print('======================================================================')
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, '<frozen importlib._bootstrap>'),
        tracemalloc.Filter(False, '<unknown>'),
    ))
    top_stats = snapshot.statistics(key_type)
    print(f'Top {limit} lines')
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace '/path/to/module/file.py' with 'module/file.py'
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print(f'#{index}: {filename}:{frame.lineno}: {stat.size / 1024:.1f} KiB')
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print(f' {line}')
    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print(f'{len(other)} other: {size / 1024:.1f} KiB')
    total = sum(stat.size for stat in top_stats)
    print()
    print(f'=====> Total allocated size: {total / 1024:.1f} KiB')
    print()

def test_mem_1():
    display_top(tracemalloc.take_snapshot(), where='test_mem_1: start')
    data = np.random.random((2**27)) #1 GB
    display_top(tracemalloc.take_snapshot(), where='test_mem_1: data generated')
    input('Data generated, awaiting input to continue')
    data = None
    display_top(tracemalloc.take_snapshot(), where='test_mem_1: data == None')
    gc.collect()
    display_top(tracemalloc.take_snapshot(), where='test_mem_1: gc collected')
    return 4

def test_mem_2():
    display_top(tracemalloc.take_snapshot(), where='test_mem_2: start')
    keys = list(range(100000))
    lvls = list(range(10))
    display_top(tracemalloc.take_snapshot(), where='test_mem_2: lists generated')
    data = {}
    for k in keys:
        data[k] = {}
        for lvl in lvls:
            data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
    display_top(tracemalloc.take_snapshot(), where='test_mem_2: np data generated')
    input('Data generated, awaiting input to continue')
    data = None
    display_top(tracemalloc.take_snapshot(), where='test_mem_2: data == None')
    gc.collect()
    display_top(tracemalloc.take_snapshot(), where='test_mem_2: gc collected')
    return 4

def loopity():
    # added this logic to be able to run multiple times.
    # stops when input for finished != ''
    inp = ''
    while inp == '':
        display_top(tracemalloc.take_snapshot(), where='loopity: start')
        a = test_mem_1()
        display_top(tracemalloc.take_snapshot(), where='loopity: test_mem_1 done')
        input('Tested mem 1, continue to test mem 2')
        a = test_mem_2()
        display_top(tracemalloc.take_snapshot(), where='loopity: test_mem_2 done')
        inp = input('Finished')

if __name__ == '__main__':
    loopity()
This is the output from a Windows box running Python 3.8.10 (don't ask):
======================================================================
Printing stats:
loopity: start
======================================================================
Top 3 lines
#1: .\memleak.py:93: 0.1 KiB
def loopity():
#2: .\memleak.py:69: 0.1 KiB
def test_mem_2():
#3: .\memleak.py:53: 0.1 KiB
def test_mem_1():
1 other: 0.1 KiB
=====> Total allocated size: 0.5 KiB
======================================================================
Printing stats:
test_mem_1: start
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 8.3 KiB
lines = fp.readlines()
#2: .\memleak.py:39: 1.2 KiB
line = linecache.getline(frame.filename, frame.lineno).strip()
#3: lib\tracemalloc.py:509: 1.2 KiB
statistics.sort(reverse=True, key=Statistic._sort_key)
59 other: 20.4 KiB
=====> Total allocated size: 31.1 KiB
======================================================================
Printing stats:
test_mem_1: data generated
======================================================================
Top 3 lines
#1: .\memleak.py:56: 1048576.3 KiB
data = np.random.random((2**27)) #1 GB
#2: lib\linecache.py:137: 63.9 KiB
lines = fp.readlines()
#3: lib\tracemalloc.py:65: 3.8 KiB
return (self.size, self.count, self.traceback)
59 other: 26.3 KiB
=====> Total allocated size: 1048670.3 KiB
Data generated, awaiting input to continue
======================================================================
Printing stats:
test_mem_1: data == None
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#2: lib\tracemalloc.py:532: 5.8 KiB
traces = _get_traces()
#3: lib\tracemalloc.py:65: 3.9 KiB
return (self.size, self.count, self.traceback)
66 other: 25.2 KiB
=====> Total allocated size: 98.6 KiB
======================================================================
Printing stats:
test_mem_1: gc collected
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#2: .\memleak.py:39: 1.2 KiB
line = linecache.getline(frame.filename, frame.lineno).strip()
#3: lib\tracemalloc.py:509: 1.2 KiB
statistics.sort(reverse=True, key=Statistic._sort_key)
56 other: 19.0 KiB
=====> Total allocated size: 85.3 KiB
======================================================================
Printing stats:
loopity: test_mem_1 done
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#2: lib\tracemalloc.py:65: 3.7 KiB
return (self.size, self.count, self.traceback)
#3: lib\tracemalloc.py:185: 2.8 KiB
self._frames = tuple(reversed(frames))
70 other: 22.9 KiB
=====> Total allocated size: 93.2 KiB
Tested mem 1, continue to test mem 2
======================================================================
Printing stats:
test_mem_2: start
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#2: lib\tracemalloc.py:65: 4.6 KiB
return (self.size, self.count, self.traceback)
#3: lib\tracemalloc.py:532: 4.5 KiB
traces = _get_traces()
71 other: 26.8 KiB
=====> Total allocated size: 99.7 KiB
======================================================================
Printing stats:
test_mem_2: lists generated
======================================================================
Top 3 lines
#1: .\memleak.py:72: 3508.7 KiB
keys = list(range(100000))
#2: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#3: lib\tracemalloc.py:532: 9.2 KiB
traces = _get_traces()
73 other: 31.6 KiB
=====> Total allocated size: 3613.3 KiB
======================================================================
Printing stats:
test_mem_2: np data generated
======================================================================
Top 3 lines
#1: .\memleak.py:80: 911719.1 KiB
data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
#2: .\memleak.py:78: 11370.0 KiB
data[k] = {}
#3: .\memleak.py:72: 3508.7 KiB
keys = list(range(100000))
71 other: 96.4 KiB
=====> Total allocated size: 926694.2 KiB
Data generated, awaiting input to continue
======================================================================
Printing stats:
test_mem_2: data == None
======================================================================
Top 3 lines
#1: .\memleak.py:72: 3508.7 KiB
keys = list(range(100000))
#2: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#3: .\memleak.py:80: 5.7 KiB
data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
75 other: 37.6 KiB
=====> Total allocated size: 3615.8 KiB
======================================================================
Printing stats:
test_mem_2: gc collected
======================================================================
Top 3 lines
#1: .\memleak.py:72: 3508.7 KiB
keys = list(range(100000))
#2: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#3: .\memleak.py:80: 5.5 KiB
data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
60 other: 22.0 KiB
=====> Total allocated size: 3600.0 KiB
======================================================================
Printing stats:
loopity: test_mem_2 done
======================================================================
Top 3 lines
#1: lib\linecache.py:137: 63.8 KiB
lines = fp.readlines()
#2: .\memleak.py:80: 5.5 KiB
data[k][lvl] = np.random.random(100) #1'000'000 X 8 = 800 MB
#3: lib\tracemalloc.py:65: 3.9 KiB
return (self.size, self.count, self.traceback)
73 other: 26.4 KiB
=====> Total allocated size: 99.7 KiB
Finished
Related
I've recently become interested in algorithms and have begun exploring them by writing a naive implementation and then optimizing it in various ways.
I'm already familiar with the standard Python module for profiling runtime (for most things I've found the timeit magic function in IPython to be sufficient), but I'm also interested in memory usage so I can explore those tradeoffs as well (e.g. the cost of caching a table of previously computed values versus recomputing them as needed). Is there a module that will profile the memory usage of a given function for me?
Python 3.4 includes a new module: tracemalloc. It provides detailed statistics about which code is allocating the most memory. Here's an example that displays the top three lines allocating memory.
from collections import Counter
import linecache
import os
import tracemalloc

def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)
    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print(' %s' % line)
    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))

tracemalloc.start()

counts = Counter()
fname = '/usr/share/dict/american-english'
with open(fname) as words:
    words = list(words)
    for word in words:
        prefix = word[:3]
        counts[prefix] += 1

print('Top prefixes:', counts.most_common(3))
snapshot = tracemalloc.take_snapshot()
display_top(snapshot)
And here are the results:
Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: scratches/memory_test.py:37: 6527.1 KiB
words = list(words)
#2: scratches/memory_test.py:39: 247.7 KiB
prefix = word[:3]
#3: scratches/memory_test.py:40: 193.0 KiB
counts[prefix] += 1
4 other: 4.3 KiB
Total allocated size: 6972.1 KiB
When is a memory leak not a leak?
That example is great when the memory is still being held at the end of the calculation, but sometimes you have code that allocates a lot of memory and then releases it all. It's not technically a memory leak, but it's using more memory than you think it should. How can you track memory usage when it all gets released? If it's your code, you can probably add some debugging code to take snapshots while it's running. If not, you can start a background thread to monitor memory usage while the main thread runs.
Here's the previous example where the code has all been moved into the count_prefixes() function. When that function returns, all the memory is released. I also added some sleep() calls to simulate a long-running calculation.
from collections import Counter
import linecache
import os
import tracemalloc
from time import sleep

def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common

def main():
    tracemalloc.start()
    most_common = count_prefixes()
    print('Top prefixes:', most_common)
    snapshot = tracemalloc.take_snapshot()
    display_top(snapshot)

def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)
    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print(' %s' % line)
    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))

main()
When I run that version, the memory usage has gone from 6MB down to 4KB, because the function released all its memory when it finished.
Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
Top 3 lines
#1: collections/__init__.py:537: 0.7 KiB
self.update(*args, **kwds)
#2: collections/__init__.py:555: 0.6 KiB
return _heapq.nlargest(n, self.items(), key=_itemgetter(1))
#3: python3.6/heapq.py:569: 0.5 KiB
result = [(key(elem), i, elem) for i, elem in zip(range(0, -n, -1), it)]
10 other: 2.2 KiB
Total allocated size: 4.0 KiB
Now here's a version inspired by another answer that starts a second thread to monitor memory usage.
from collections import Counter
import linecache
import os
import tracemalloc
from datetime import datetime
from queue import Queue, Empty
from resource import getrusage, RUSAGE_SELF
from threading import Thread
from time import sleep

def memory_monitor(command_queue: Queue, poll_interval=1):
    tracemalloc.start()
    old_max = 0
    snapshot = None
    while True:
        try:
            command_queue.get(timeout=poll_interval)
            if snapshot is not None:
                print(datetime.now())
                display_top(snapshot)
            return
        except Empty:
            max_rss = getrusage(RUSAGE_SELF).ru_maxrss
            if max_rss > old_max:
                old_max = max_rss
                snapshot = tracemalloc.take_snapshot()
                print(datetime.now(), 'max RSS', max_rss)

def count_prefixes():
    sleep(2)  # Start up time.
    counts = Counter()
    fname = '/usr/share/dict/american-english'
    with open(fname) as words:
        words = list(words)
        for word in words:
            prefix = word[:3]
            counts[prefix] += 1
            sleep(0.0001)
    most_common = counts.most_common(3)
    sleep(3)  # Shut down time.
    return most_common

def main():
    queue = Queue()
    poll_interval = 0.1
    monitor_thread = Thread(target=memory_monitor, args=(queue, poll_interval))
    monitor_thread.start()
    try:
        most_common = count_prefixes()
        print('Top prefixes:', most_common)
    finally:
        queue.put('stop')
        monitor_thread.join()

def display_top(snapshot, key_type='lineno', limit=3):
    snapshot = snapshot.filter_traces((
        tracemalloc.Filter(False, "<frozen importlib._bootstrap>"),
        tracemalloc.Filter(False, "<unknown>"),
    ))
    top_stats = snapshot.statistics(key_type)
    print("Top %s lines" % limit)
    for index, stat in enumerate(top_stats[:limit], 1):
        frame = stat.traceback[0]
        # replace "/path/to/module/file.py" with "module/file.py"
        filename = os.sep.join(frame.filename.split(os.sep)[-2:])
        print("#%s: %s:%s: %.1f KiB"
              % (index, filename, frame.lineno, stat.size / 1024))
        line = linecache.getline(frame.filename, frame.lineno).strip()
        if line:
            print(' %s' % line)
    other = top_stats[limit:]
    if other:
        size = sum(stat.size for stat in other)
        print("%s other: %.1f KiB" % (len(other), size / 1024))
    total = sum(stat.size for stat in top_stats)
    print("Total allocated size: %.1f KiB" % (total / 1024))

main()
The resource module lets you check the current memory usage, and save the snapshot from the peak memory usage. The queue lets the main thread tell the memory monitor thread when to print its report and shut down. When it runs, it shows the memory being used by the list() call:
2018-05-29 10:34:34.441334 max RSS 10188
2018-05-29 10:34:36.475707 max RSS 23588
2018-05-29 10:34:36.616524 max RSS 38104
2018-05-29 10:34:36.772978 max RSS 45924
2018-05-29 10:34:36.929688 max RSS 46824
2018-05-29 10:34:37.087554 max RSS 46852
Top prefixes: [('con', 1220), ('dis', 1002), ('pro', 809)]
2018-05-29 10:34:56.281262
Top 3 lines
#1: scratches/scratch.py:36: 6527.0 KiB
words = list(words)
#2: scratches/scratch.py:38: 16.4 KiB
prefix = word[:3]
#3: scratches/scratch.py:39: 10.1 KiB
counts[prefix] += 1
19 other: 10.8 KiB
Total allocated size: 6564.3 KiB
If you're on Linux, you may find /proc/self/statm more useful than the resource module.
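For example, a minimal sketch of reading /proc/self/statm directly (Linux only; the helper name is mine, and the field layout follows proc(5): size, resident, shared, text, lib, data, dt, all in pages):
import os

def statm_rss_kib():
    # /proc/self/statm reports sizes in pages; field 1 is the resident set size.
    with open("/proc/self/statm") as f:
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE") / 1024

print("RSS: %.1f KiB" % statm_rss_kib())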
This one has already been answered here: Python memory profiler
Basically you do something like this (cited from Guppy-PE):
>>> from guppy import hpy; h=hpy()
>>> h.heap()
Partition of a set of 48477 objects. Total size = 3265516 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 25773 53 1612820 49 1612820 49 str
1 11699 24 483960 15 2096780 64 tuple
2 174 0 241584 7 2338364 72 dict of module
3 3478 7 222592 7 2560956 78 types.CodeType
4 3296 7 184576 6 2745532 84 function
5 401 1 175112 5 2920644 89 dict of class
6 108 0 81888 3 3002532 92 dict (no owner)
7 114 0 79632 2 3082164 94 dict of type
8 117 0 51336 2 3133500 96 type
9 667 1 24012 1 3157512 97 __builtin__.wrapper_descriptor
<76 more rows. Type e.g. '_.more' to view.>
>>> h.iso(1,[],{})
Partition of a set of 3 objects. Total size = 176 bytes.
Index Count % Size % Cumulative % Kind (class / dict of class)
0 1 33 136 77 136 77 dict (no owner)
1 1 33 28 16 164 93 list
2 1 33 12 7 176 100 int
>>> x=[]
>>> h.iso(x).sp
0: h.Root.i0_modules['__main__'].__dict__['x']
>>>
If you only want to look at the memory usage of an object (answer to another question):
There is a module called Pympler which contains the asizeof
module.
Use as follows:
from pympler import asizeof
asizeof.asizeof(my_object)
Unlike sys.getsizeof, it works for your self-created objects.
>>> asizeof.asizeof(tuple('bcd'))
200
>>> asizeof.asizeof({'foo': 'bar', 'baz': 'bar'})
400
>>> asizeof.asizeof({})
280
>>> asizeof.asizeof({'foo':'bar'})
360
>>> asizeof.asizeof('foo')
40
>>> asizeof.asizeof(Bar())
352
>>> asizeof.asizeof(Bar().__dict__)
280
>>> help(asizeof.asizeof)
Help on function asizeof in module pympler.asizeof:
asizeof(*objs, **opts)
Return the combined size in bytes of all objects passed as positional arguments.
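To make the difference with sys.getsizeof concrete, here is a small self-contained sketch (the Bar class here is hypothetical, standing in for the Bar used in the session above):
import sys
from pympler import asizeof

class Bar:
    def __init__(self):
        self.numbers = list(range(10))
        self.label = "some string"

bar = Bar()
print(sys.getsizeof(bar))    # shallow size of the instance object only
print(asizeof.asizeof(bar))  # also counts __dict__, the list, the string, ...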
Disclosure:
Applicable on Linux only
Reports memory used by the current process as a whole, not the individual functions within it
But nice because of its simplicity:
import resource

def using(point=""):
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return '''%s: usertime=%s systime=%s mem=%s mb
           ''' % (point, usage[0], usage[1],
                  usage[2]/1024.0)
Just insert using("Label") where you want to see what's going on. For example
print(using("before"))
wrk = ["wasting mem"] * 1000000
print(using("after"))
>>> before: usertime=2.117053 systime=1.703466 mem=53.97265625 mb
>>> after: usertime=2.12023 systime=1.70708 mem=60.8828125 mb
Below is a simple function decorator which allows you to track how much memory the process consumed before the function call, after the function call, and what the difference is:
import time
import os
import psutil

def elapsed_since(start):
    return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))

def get_process_memory():
    process = psutil.Process(os.getpid())
    mem_info = process.memory_info()
    return mem_info.rss

def profile(func):
    def wrapper(*args, **kwargs):
        mem_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        mem_after = get_process_memory()
        print("{}: memory before: {:,}, after: {:,}, consumed: {:,}; exec time: {}".format(
            func.__name__,
            mem_before, mem_after, mem_after - mem_before,
            elapsed_time))
        return result
    return wrapper
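A minimal usage sketch for the decorator above (the function name and the numbers in the sample output are made up; the output format follows the print call in the wrapper):
@profile
def make_big_list(n):
    # Allocate a list of n integers so the RSS delta is visible.
    return list(range(n))

make_big_list(1_000_000)
# Example output (numbers will vary):
# make_big_list: memory before: 12,345,678, after: 52,345,678, consumed: 40,000,000; exec time: 00:00:00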
Here is my blog which describes all the details. (archived link)
Since the accepted answer and also the next highest voted answer have, in my opinion, some problems, I'd like to offer one more answer that is based closely on Ihor B.'s answer with some small but important modifications.
This solution allows you to run profiling either by wrapping a function call with the profile function and calling it, or by decorating your function/method with the @profile decorator.
The first technique is useful when you want to profile some third-party code without messing with its source, whereas the second technique is a bit "cleaner" and works better when you don't mind modifying the source of the function/method you want to profile.
I've also modified the output, so that you get RSS, VMS, and shared memory. I don't care much about the "before" and "after" values, but only the delta, so I removed those (if you're comparing to Ihor B.'s answer).
Profiling code
# profile.py
import time
import os
import psutil
import inspect

def elapsed_since(start):
    #return time.strftime("%H:%M:%S", time.gmtime(time.time() - start))
    elapsed = time.time() - start
    if elapsed < 1:
        return str(round(elapsed*1000, 2)) + "ms"
    if elapsed < 60:
        return str(round(elapsed, 2)) + "s"
    if elapsed < 3600:
        return str(round(elapsed/60, 2)) + "min"
    else:
        return str(round(elapsed / 3600, 2)) + "hrs"

def get_process_memory():
    process = psutil.Process(os.getpid())
    mi = process.memory_info()
    return mi.rss, mi.vms, mi.shared

def format_bytes(bytes):
    if abs(bytes) < 1000:
        return str(bytes) + "B"
    elif abs(bytes) < 1e6:
        return str(round(bytes/1e3, 2)) + "kB"
    elif abs(bytes) < 1e9:
        return str(round(bytes / 1e6, 2)) + "MB"
    else:
        return str(round(bytes / 1e9, 2)) + "GB"

def profile(func, *args, **kwargs):
    def wrapper(*args, **kwargs):
        rss_before, vms_before, shared_before = get_process_memory()
        start = time.time()
        result = func(*args, **kwargs)
        elapsed_time = elapsed_since(start)
        rss_after, vms_after, shared_after = get_process_memory()
        print("Profiling: {:>20} RSS: {:>8} | VMS: {:>8} | SHR {"
              ":>8} | time: {:>8}"
              .format("<" + func.__name__ + ">",
                      format_bytes(rss_after - rss_before),
                      format_bytes(vms_after - vms_before),
                      format_bytes(shared_after - shared_before),
                      elapsed_time))
        return result
    if inspect.isfunction(func):
        return wrapper
    elif inspect.ismethod(func):
        return wrapper(*args, **kwargs)
Example usage, assuming the above code is saved as profile.py:
from profile import profile
from time import sleep
from sklearn import datasets  # Just an example of 3rd party function call

# Method 1
run_profiling = profile(datasets.load_digits)
data = run_profiling()

# Method 2
@profile
def my_function():
    # do some stuff
    a_list = []
    for i in range(1, 100000):
        a_list.append(i)
    return a_list

res = my_function()
This should result in output similar to the below:
Profiling: <load_digits> RSS: 5.07MB | VMS: 4.91MB | SHR 73.73kB | time: 89.99ms
Profiling: <my_function> RSS: 1.06MB | VMS: 1.35MB | SHR 0B | time: 8.43ms
A couple of important final notes:
Keep in mind, this method of profiling is only going to be approximate, since lots of other stuff might be happening on the machine. Due to garbage collection and other factors, the deltas might even be zero.
For some unknown reason, very short function calls (e.g. 1 or 2 ms)
show up with zero memory usage. I suspect this is some limitation of
the hardware/OS (tested on basic laptop with Linux) on how often
memory statistics are updated.
To keep the examples simple, I didn't use any function arguments, but they should work as one would expect, i.e.
profile(my_function, arg) to profile my_function(arg)
A simple example to calculate the memory usage of a block of code / a function using memory_profiler, while returning the result of the function:
import memory_profiler as mp

def fun(n):
    tmp = []
    for i in range(n):
        tmp.extend(list(range(i*i)))
    return "XXXXX"
Calculate memory usage before running the code, then calculate the maximum usage while it runs:
start_mem = mp.memory_usage(max_usage=True)
res = mp.memory_usage(proc=(fun, [100]), max_usage=True, retval=True)
print('start mem', start_mem)
print('max mem', res[0][0])
print('used mem', res[0][0]-start_mem)
print('fun output', res[1])
Calculate usage at sampling points while running the function:
res = mp.memory_usage((fun, [100]), interval=.001, retval=True)
print('min mem', min(res[0]))
print('max mem', max(res[0]))
print('used mem', max(res[0])-min(res[0]))
print('fun output', res[1])
Credits: @skeept
Maybe it helps:
<see additional>
pip install gprof2dot
sudo apt-get install graphviz
gprof2dot -f pstats profile_for_func1_001 | dot -Tpng -o profile.png
import cProfile

def profileit(name):
    """
    @profileit("profile_for_func1_001")
    """
    def inner(func):
        def wrapper(*args, **kwargs):
            prof = cProfile.Profile()
            retval = prof.runcall(func, *args, **kwargs)
            # Note use of name from outer scope
            prof.dump_stats(name)
            return retval
        return wrapper
    return inner

@profileit("profile_for_func1_001")
def func1(...)
Below is the snippet that is giving me a 'Memory Error' when the counter reaches about 53009525. I am running this on an Ubuntu virtual machine with 12 GB of memory.
from collections import defaultdict
from collections import OrderedDict
import math
import numpy as np
import operator
import time
from itertools import product

class State():
    def __init__(self, field1, field2, field3, field4, field5, field6):
        self.lat = field1
        self.lon = field2
        self.alt = field3
        self.temp = field4
        self.ws = field5
        self.wd = field6

...

trans = defaultdict(dict)
freq = {}

...

matrix_col = {}
matrix_col = OrderedDict(sorted(freq.items(), key=lambda t: t[0].lat, reverse=True))
trans_mat = []
counter = 0
for u1, u2 in product(matrix_col, matrix_col):
    print counter, time.ctime()
    if (u1 in trans) and (u2 in trans[u1]):
        trans_mat.append([trans[u1][u2]*1.0])
    else:
        trans_mat.append([[0.0001]])
    counter += 1
trans_mat = np.asarray(trans_mat)
trans_mat = np.reshape(trans_mat, (10734, 10734))
print trans_mat
Both freq and trans store the type "State". Any help is appreciated. Here is the error:
...
53009525 Mon Oct 12 18:11:16 2015
Traceback (most recent call last):
File "hmm_freq.py", line 295, in <module>
trans_mat.append([[0.0001]])
MemoryError
It looks as though on each iteration you are appending a Python float inside two nested lists to trans_mat. You can check the size of each element using sys.getsizeof:
import sys
# each of the two nested lists is 80 bytes
print(sys.getsizeof([]))
# 80
# a Python float object is 24 bytes
print(sys.getsizeof(0.0001))
# 24
On each iteration you are appending 2 * 80 + 24 = 184 bytes to trans_mat. After 53009525 iterations you will have appended 9753752600 bytes or 9.75GB.
A very simple way to make this much more memory-efficient would be to store the results directly to a numpy array rather than in nested lists:
trans_mat = np.empty(10734 * 10734, np.double)

counter = 0
for u1, u2 in product(matrix_col, matrix_col):
    print counter, time.ctime()
    if (u1 in trans) and (u2 in trans[u1]):
        # you may need to check that this line yields a float
        # (it's hard for me to tell exactly what `trans` is from your code snippet)
        trans_mat[counter] = trans[u1][u2]*1.0
    else:
        trans_mat[counter] = 0.0001
    counter += 1
For reference:
# the size of the Python container is 80 bytes
print(sys.getsizeof(trans_mat))
# 80
# the size of the array buffer is 921750048 bytes or 921 MB
print(trans_mat.nbytes)
# 921750048
I have a script which calculates a variable every second (my program reads the output of a bash script and interprets the data every second). Is there a way to detect if this variable changes?
Here is part of my code; the text is ffmpeg or avconv output, read from a vte terminal:
#Terminal
def terminal(self):
    self.v = vte.Terminal()
    self.v.connect("child-exited", lambda term: self.verif(self, my_class))
    self.v.connect('contents-changed', self.term_output)
    [...]

def term_output(self, my_class, donnees=None):
    text = str(self.v.get_text(lambda *a: True).rstrip())
    [...] # decode the text
    print "time", self.time
    print "duration", self.duration
The return in the vte terminal (avconv output):
Duration: 00:00:23.00, start: 0.100511, bitrate: 0 kb/s
Output #0, matroska, to '/media/guillaume/XT/Telechargements/uzz/la_qualite_de_l_air_1000019643.mkv':
Press [q] to stop, [?] for help
frame= 589 fps=115 q=-1.0 Lsize= 2191kB time=00:00:23.79 bitrate= 754.3kbits/s
FIN DU TRAITEMENT
Votre Fichier Final Est:
/media/guillaume/XT/Telechargements/uzz/la_qualite_de_l_air_1000019643.mkv
Example of output (it's time and duration from vte output):
time 5.1
duration 23.0
time 6.1
duration 23.0
time 9.1
duration 23.0
time 14.1
duration 23.0
time 14.1
duration 23.0
time 16.1
duration 23.0
time 18.1
duration 23.0
time 19.1
duration 23.0
time 21.1
duration 23.0
time 23.1
duration 23.0
time 23.1
duration 23.0
time 23.1
duration 23.0
time 23.1
duration 23.0
time 23.1
duration 3960.0 # detect this change? (correspond to a second output of avconv)
time 1.1
duration 3960.0
time 7.1
duration 3960.0
time 10.1
duration 3960.0
time 20.1
duration 3960.0
time 20.1
duration 3960.0
Add a flag variable (global or local depending upon the required scope).
Assign the value to flag in the beginning / first loop.
Compare the value of this flag against the variable and, if it changes, react (either a signal or a print, whatever you need).
time_val = 'init'

def term_output(self, my_class, donnees=None):
    global time_val
    text = str(self.v.get_text(lambda *a: True).rstrip())
    [...] # decode the text
    if time_val == 'init':
        time_val = self.time
    if self.time != time_val:
        print "The value of time has changed from " + str(time_val) + " to " + str(self.time)
        time_val = self.time  # remember the new value so the next change is detected too
    print self.time
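The same idea in a self-contained form, with the previous value kept on a small helper object instead of a global (a sketch; the class name and the sample values are made up, and the 23.0 to 3960.0 jump mirrors the example output above):
class DurationWatcher(object):
    """Remember the previous value and report when it changes."""
    def __init__(self):
        self.last_duration = None

    def update(self, duration):
        if self.last_duration is not None and duration != self.last_duration:
            print("duration changed from %s to %s" % (self.last_duration, duration))
        self.last_duration = duration

watcher = DurationWatcher()
for d in [23.0, 23.0, 23.0, 3960.0, 3960.0]:
    watcher.update(d)
# prints: duration changed from 23.0 to 3960.0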
I'm using the library called psutil to get system/network stats, but I can only get the total uploaded/downloaded bytes on my script.
What would be the way to natively get the network speed using Python?
If you need to know the transfer rate immediately, you should create a thread that does the calculations continuously. I'm not an expert on the subject, but I tried writing a simple program that does what you need:
import threading
import time
from collections import deque

import psutil

def calc_ul_dl(rate, dt=3, interface="WiFi"):
    t0 = time.time()
    counter = psutil.net_io_counters(pernic=True)[interface]
    tot = (counter.bytes_sent, counter.bytes_recv)
    while True:
        last_tot = tot
        time.sleep(dt)
        counter = psutil.net_io_counters(pernic=True)[interface]
        t1 = time.time()
        tot = (counter.bytes_sent, counter.bytes_recv)
        ul, dl = [
            (now - last) / (t1 - t0) / 1000.0
            for now, last in zip(tot, last_tot)
        ]
        rate.append((ul, dl))
        t0 = time.time()

def print_rate(rate):
    try:
        print("UL: {0:.0f} kB/s / DL: {1:.0f} kB/s".format(*rate[-1]))
    except IndexError:
        print("UL: - kB/s/ DL: - kB/s")

# Create the ul/dl thread and a deque of length 1 to hold the ul/dl-values
transfer_rate = deque(maxlen=1)
t = threading.Thread(target=calc_ul_dl, args=(transfer_rate,))

# The program will exit if there are only daemonic threads left.
t.daemon = True
t.start()

# The rest of your program, emulated by me using a while True loop
while True:
    print_rate(transfer_rate)
    time.sleep(5)
Here you should set the dt argument to whatever seems reasonable for you. I tried using 3 seconds, and this is my output while running an online speed test:
UL: 2 kB/s / DL: 8 kB/s
UL: 3 kB/s / DL: 45 kB/s
UL: 24 kB/s / DL: 1306 kB/s
UL: 79 kB/s / DL: 4 kB/s
UL: 121 kB/s / DL: 3 kB/s
UL: 116 kB/s / DL: 4 kB/s
UL: 0 kB/s / DL: 0 kB/s
The values seem reasonable, since my results from the speed test were DL: 1258 kB/s and UL: 111 kB/s.
The answer provided by Steinar Lima is correct.
But it can also be done without threading:
import time
import psutil
import os

count = 0
qry = ""
ul = 0.00
dl = 0.00
t0 = time.time()
upload = psutil.net_io_counters(pernic=True)["Wireless Network Connection"][0]
download = psutil.net_io_counters(pernic=True)["Wireless Network Connection"][1]
up_down = (upload, download)

while True:
    last_up_down = up_down
    upload = psutil.net_io_counters(pernic=True)["Wireless Network Connection"][0]
    download = psutil.net_io_counters(pernic=True)["Wireless Network Connection"][1]
    t1 = time.time()
    up_down = (upload, download)
    try:
        ul, dl = [
            (now - last) / (t1 - t0) / 1024.0
            for now, last in zip(up_down, last_up_down)
        ]
        t0 = time.time()
    except:
        pass
    if dl > 0.1 or ul >= 0.1:
        time.sleep(0.75)
        os.system("cls")
        print("UL: {:0.2f} kB/s \n".format(ul) + "DL: {:0.2f} kB/s".format(dl))

v = input()
Simple and easy ;)
I added an LCD mod for this code if you want to test it on a Raspberry Pi, but you need to add psutil and lcddriver to your project code.
import time
import psutil
import os
import lcddriver

count = 0
qry = ''
ul = 0.00
dl = 0.00
t0 = time.time()
upload = psutil.net_io_counters(pernic=True)['wlan0'][0]
download = psutil.net_io_counters(pernic=True)['wlan0'][1]
up_down = (upload, download)
display = lcddriver.lcd()

while True:
    last_up_down = up_down
    upload = psutil.net_io_counters(pernic=True)['wlan0'][0]
    download = psutil.net_io_counters(pernic=True)['wlan0'][1]
    t1 = time.time()
    up_down = (upload, download)
    try:
        ul, dl = [(now - last) / (t1 - t0) / 1024.0
                  for now, last in zip(up_down, last_up_down)]
        t0 = time.time()
        #display.lcd_display_string(str(datetime.datetime.now().time()), 1)
    except:
        pass
    if dl > 0.1 or ul >= 0.1:
        time.sleep(0.75)
        os.system('cls')
        print('UL: {:0.2f} kB/s \n'.format(ul) + 'DL:{:0.2f} kB/s'.format(dl))
        display.lcd_display_string(str('DL:{:0.2f} KB/s '.format(dl)), 1)
        display.lcd_display_string(str('UL:{:0.2f} KB/s '.format(ul)), 2)
        # if KeyboardInterrupt:  # If there is a KeyboardInterrupt (when you press ctrl+c), exit the program and cleanup
        #     print("Cleaning up!")
        #     display.lcd_clear()

v = input()
The (effective) network speed is simply bytes transferred in a given time interval, divided by the length of the interval. Obviously there are different ways to aggregate / average the times and they give you different "measures" ... but it all basically boils down to division.
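As a bare-bones illustration of that division, using two psutil samples one second apart (a sketch; the interface name is an assumption and will differ per machine):
import time
import psutil

IFACE = "eth0"  # assumption: use whatever psutil.net_io_counters(pernic=True) lists on your machine

before = psutil.net_io_counters(pernic=True)[IFACE]
t0 = time.time()
time.sleep(1.0)
after = psutil.net_io_counters(pernic=True)[IFACE]
dt = time.time() - t0

ul = (after.bytes_sent - before.bytes_sent) / dt / 1024.0  # kB/s sent
dl = (after.bytes_recv - before.bytes_recv) / dt / 1024.0  # kB/s received
print("UL: %.1f kB/s / DL: %.1f kB/s" % (ul, dl))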
Another, simpler solution (without threading and queues, although still based on @Steinar Lima's answer), for more recent Python:
import time
import psutil

def on_calculate_speed(interface):
    # (originally written as a method taking `self`; it works the same as a plain function)
    dt = 1  # I find that dt = 1 is good enough
    t0 = time.time()
    try:
        counter = psutil.net_io_counters(pernic=True)[interface]
    except KeyError:
        return []
    tot = (counter.bytes_sent, counter.bytes_recv)
    while True:
        last_tot = tot
        time.sleep(dt)
        try:
            counter = psutil.net_io_counters(pernic=True)[interface]
        except KeyError:
            break
        t1 = time.time()
        tot = (counter.bytes_sent, counter.bytes_recv)
        ul, dl = [
            (now - last) / (t1 - t0) / 1000.0
            for now, last
            in zip(tot, last_tot)
        ]
        return [int(ul), int(dl)]
    # if the interface disappeared, return an empty list so the caller's len() check still works
    return []

t0 = time.time()
while SomeCondition:
    # "wlp2s0" is usually the default wifi interface for linux, but you
    # could use any other interface that you want/have.
    interface = "wlp2s0"
    result_speed = on_calculate_speed(interface)
    if len(result_speed) < 1:
        print("Upload: - kB/s/ Download: - kB/s")
    else:
        ul, dl = result_speed[0], result_speed[1]
        print("Upload: {} kB/s / Download: {} kB/s".format(ul, dl))
Or you could also fetch the default interface with pyroute2:
from pyroute2 import IPDB

while SomeCondition:
    ip = IPDB()
    interface = ip.interfaces[ip.routes['default']['oif']]["ifname"]
    result_speed = on_calculate_speed(interface)
    if len(result_speed) < 1:
        print("Upload: - kB/s/ Download: - kB/s")
    else:
        ul, dl = result_speed[0], result_speed[1]
        print("Upload: {} kB/s / Download: {} kB/s".format(ul, dl))
    ip.release()
I found this thread and don't have any real Python knowledge; I just copy and paste code, and now I need a little help. With this script I can only show the totals of bytes sent/received. Can it be modified to show the actual speed?
def network(iface):
    stat = psutil.net_io_counters(pernic=True)[iface]
    return "%s: Tx%s, Rx%s" % \
           (iface, bytes2human(stat.bytes_sent), bytes2human(stat.bytes_recv))

def stats(device):
    # use custom font
    font_path = str(Path(__file__).resolve().parent.joinpath('fonts', 'C&C Red Alert [INET].ttf'))
    font_path2 = str(Path(__file__).resolve().parent.joinpath('fonts', 'Stockholm.ttf'))
    font2 = ImageFont.truetype(font_path, 12)
    font3 = ImageFont.truetype(font_path2, 11)
    with canvas(device) as draw:
        draw.text((0, 0), cpu_usage(), font=font2, fill="white")
        if device.height >= 32:
            draw.text((0, 14), mem_usage(), font=font2, fill="white")
        if device.height >= 64:
            draw.text((0, 26), "IP: " + getIP("eth0"), font=font2, fill=255)
            try:
                draw.text((0, 38), network('eth0'), font=font2, fill="white")
            except KeyError:
                # no wifi enabled/available
                pass
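One possible way to adapt the network() helper above so it shows a rate instead of running totals, using the same delta-over-time idea as the earlier answers (a sketch; network_rate is a name I made up, and you could feed the results back through bytes2human if you prefer that formatting):
import time
import psutil

_last = {}  # iface -> (timestamp, bytes_sent, bytes_recv)

def network_rate(iface):
    stat = psutil.net_io_counters(pernic=True)[iface]
    now = time.time()
    t0, tx0, rx0 = _last.get(iface, (now - 1.0, stat.bytes_sent, stat.bytes_recv))
    _last[iface] = (now, stat.bytes_sent, stat.bytes_recv)
    dt = max(now - t0, 1e-6)
    tx_rate = (stat.bytes_sent - tx0) / dt / 1024.0
    rx_rate = (stat.bytes_recv - rx0) / dt / 1024.0
    return "%s: Tx %.1f kB/s, Rx %.1f kB/s" % (iface, tx_rate, rx_rate)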
The code
# pip install speedtest-cli
import speedtest

speed_test = speedtest.Speedtest()

def bytes_to_mb(bytes):
    KB = 1024  # One Kilobyte is 1024 bytes
    MB = KB * 1024  # One MB is 1024 KB
    return int(bytes/MB)

download_speed = bytes_to_mb(speed_test.download())
print("Your Download speed is", download_speed, "MB")
upload_speed = bytes_to_mb(speed_test.upload())
print("Your Upload speed is", upload_speed, "MB")
In the first answer, interface should be changed to the desired network adapter. To see the name on Ubuntu you can use ifconfig, then change interface="WiFi" to that device name.
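To see which interface names psutil itself knows about (usually easier than guessing from ifconfig output), you can print the keys of the per-NIC counters:
import psutil

# The keys of this dict are the names psutil expects,
# e.g. "eth0", "wlan0", "wlp2s0" or "Wireless Network Connection".
print(list(psutil.net_io_counters(pernic=True).keys()))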
A little change to the formatting for Python 3:
def print_rate(rate):
    try:
        print(('UL: {0:.0f} kB/s / DL: {1:.0f} kB/s').format(*rate[-1]))
    except IndexError:
        print('UL: - kB/s/ DL: - kB/s')