I have a program that uses and creates very large numpy arrays. However, garbage collection doesn't seem to be releasing the memory of these arrays.
Take the following script as an example, where big is reported as not tracked (i.e. gc.is_tracked(big) is False).
import gc
import numpy as np

big = np.ones((100, 100, 100))
print(gc.is_tracked(big))
big2=big+1
gc.get_referrers and the like are redundant in this case, returning empty lists, since the array isn't tracked.
memory_profiler confirms this with the following output, showing that neither del (as expected) nor gc.collect() frees the memory.
Line # Mem usage Increment Occurrences Line Contents
=============================================================
5 212.3 MiB 212.3 MiB 1 #mprof
6 def main():
7 212.3 MiB 0.0 MiB 1 size = 100
8 242.8 MiB 30.5 MiB 1 big = np.ones((size, size, size))
9 242.8 MiB 0.0 MiB 1 print(gc.is_tracked(big))
10 273.3 MiB 30.5 MiB 1 big2=big+1
11 273.3 MiB 0.0 MiB 1 del big
12 273.3 MiB 0.0 MiB 1 gc.collect()
13 273.3 MiB 0.0 MiB 1 big2
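As a sanity check, a minimal weakref sketch (an illustration, not part of the original measurements) confirms that an untracked array is still freed by plain reference counting the moment its last reference goes away:

import gc
import weakref
import numpy as np

big = np.ones((100, 100, 100))
ref = weakref.ref(big)
print(gc.is_tracked(big))  # False: a plain-data array holds no references to Python objects
del big
print(ref() is None)       # True: freed immediately by refcounting, no gc.collect() needed

So if RSS stays flat after del, that is typically the C-level allocator keeping freed pages around, not the array itself surviving.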
This thread implies that such arrays should be tracked and collected. Does anyone know why these arrays aren't being tracked in my configuration, and how to make sure they are, or how to manually free the memory?
I'm on macOS, using the latest versions of numpy (1.22.1) and Python 3.9.
Many thanks in advance.
I'm using concurrent.futures.ThreadPoolExecutor to run multithreaded tasks. The memory usage is very high and doesn't get released after the jobs are finished. I use memory_profiler to track the memory usage. Here are my test code and output.
import gc
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from memory_profiler import profile
import sys
import time

def do():
    a = [1] * (4096*1023)
    return sum(a)

@profile
def main():
    threadpool = ThreadPoolExecutor(max_workers=60)
    tt = time.time()
    jobs = []
    for x in range(1000):
        jobs.append(threadpool.submit(do,))
    rst = [j.result() for j in jobs]
    print(time.time()-tt)
    return None

if __name__ == '__main__':
    main()
Line # Mem usage Increment Occurrences Line Contents
=============================================================
18 51.2 MiB 51.2 MiB 1 @profile
19 def main():
20 51.2 MiB 0.0 MiB 1 threadpool = ThreadPoolExecutor(max_workers=60)
21
22 51.2 MiB 0.0 MiB 1 tt = time.time()
23 51.2 MiB 0.0 MiB 1 jobs = []
24 404.2 MiB 0.0 MiB 1001 for x in range(1000):
25 404.2 MiB 353.0 MiB 1000 jobs.append(threadpool.submit(do,))
26 404.2 MiB 0.0 MiB 1003 rst = [j.result() for j in jobs]
27 404.2 MiB 0.0 MiB 1 print(time.time()-tt)
28 404.2 MiB 0.0 MiB 1 return None
As shown in the stats, the memory did not get released after the jobs finished. However, if we change the do function to
def do():
    a = [1] * (4096*1024)  # <- Increase the size of list a
    return sum(a)
The memory will be released correctly. The stats are:
Line # Mem usage Increment Occurrences Line Contents
=============================================================
18 51.4 MiB 51.4 MiB 1 @profile
19 def main():
20 51.4 MiB 0.0 MiB 1 threadpool = ThreadPoolExecutor(max_workers=60)
21
22 51.4 MiB 0.0 MiB 1 tt = time.time()
23 51.4 MiB 0.0 MiB 1 jobs = []
24 116.4 MiB -116311.3 MiB 1001 for x in range(1000):
25 244.3 MiB -48019.8 MiB 1000 jobs.append(threadpool.submit(do,))
26 53.5 MiB -62.9 MiB 1003 rst = [j.result() for j in jobs]
27 53.5 MiB 0.0 MiB 1 print(time.time()-tt)
28 53.5 MiB 0.0 MiB 1 return None
It seems there is an object size threshold that controls whether the memory gets released. I wonder what the rule behind this is. Also, if I want the memory to be released in all cases, what should I do?
Edit:
About my environment: I'm using an Intel(R) Xeon(R) Platinum 8260 CPU with 16 GB of RAM, the OS is Debian 9 (Linux 4.14), the Python version is 3.8.12, and I'm using Anaconda to manage my Python environment.
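Since the environment above is Debian with glibc, one experiment worth trying (a diagnostic sketch, not a guaranteed fix) is asking glibc to return freed heap pages to the OS via malloc_trim; if RSS drops after the call, the memory was sitting in the allocator's free lists rather than leaking:

import ctypes

# glibc-specific; this will not work on macOS or Windows
libc = ctypes.CDLL("libc.so.6")
libc.malloc_trim(0)  # release free memory from the top of the heap back to the OS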
I was profiling some code that raises exceptions, to determine which of two approaches was better, when I came across some memory usage that seems counterintuitive. Perhaps someone can shed some light. test1 below raises a new exception 10K times; it takes less memory than raising the same exception instance 10K times. Why?
Python 3.9
from memory_profiler import profile

TEST_COUNT = 10000

class ApplicationException(Exception):
    def __init__(self):
        self.code = 0

@profile
def test1():
    for x in range(TEST_COUNT):
        try:
            raise ApplicationException()
        except:
            pass

@profile
def test2():
    application_exception = ApplicationException()
    for x in range(TEST_COUNT):
        try:
            raise application_exception
        except:
            pass

test1()
test2()
The results were:
Line # Mem usage Increment Occurrences Line Contents
=============================================================
10 14.1 MiB 14.1 MiB 1 @profile
11 def test1():
12 14.1 MiB 0.0 MiB 10001 for x in range(TEST_COUNT):
13 14.1 MiB 0.0 MiB 10000 try:
14 14.1 MiB 0.0 MiB 10000 raise ApplicationException()
15 14.1 MiB 0.0 MiB 10000 except:
16 14.1 MiB 0.0 MiB 10000 pass
Line # Mem usage Increment Occurrences Line Contents
=============================================================
19 14.2 MiB 14.2 MiB 1 @profile
20 def test2():
21 14.2 MiB 0.0 MiB 1 application_exception = ApplicationException()
22 14.7 MiB 0.0 MiB 10001 for x in range(TEST_COUNT):
23 14.7 MiB 0.0 MiB 10000 try:
24 14.7 MiB 0.5 MiB 10000 raise application_exception
25 14.7 MiB 0.0 MiB 10000 except:
26 14.7 MiB 0.0 MiB 10000 pass
Not sure what is going on here. Line 24 incurs some expense in memory. Can someone explain?
In the first test, you raise a new ApplicationException instance each time and never keep a reference to it, so each instance (together with its traceback) is freed as soon as the except block ends.
In the second test, however, the same instance stays alive in a local variable for the whole loop, and every raise attaches a fresh traceback entry to the instance's __traceback__ chain. Since the instance is never released, that chain keeps growing, which is the extra memory you see on line 24.
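A minimal sketch (on CPython) makes the growth visible: re-raise one cached instance a few times and watch its traceback chain get longer with every raise.

import traceback

exc = Exception('cached')
for _ in range(3):
    try:
        raise exc
    except Exception as err:
        # each re-raise prepends a new frame entry to the instance's
        # __traceback__ chain, which is never reset between raises
        print(len(traceback.extract_tb(err.__traceback__)))  # prints 1, then 2, then 3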
I want to generate and keep a set of tuples within a certain time limit. Yet I found that the program seems to consume all available memory if given enough time.
I have tried two methods: one is deleting the newly generated variables, the other is calling gc.collect(). But neither of them worked. If I just generate the tuples and don't keep them, the program consumes a limited amount of memory.
generate and keep: gk.py
import gc
import time
from memory_profiler import profile
from random import sample
from sys import getsizeof

@profile
def loop(limit):
    t = time.time()
    i = 0
    A = set()
    while True:
        i += 1
        duration = time.time() - t
        a = tuple(sorted(sample(range(200), 100)))
        A.add(a)
        if not i % int(1e4):
            print('step {:.2e}...'.format(i))
        if duration > limit:
            print('done')
            break
        # method 1: delete the variables
        # del duration, a
        # method 2: use gc
        # gc.collect()
    memory = getsizeof(t) + getsizeof(i) + getsizeof(duration) + \
        getsizeof(a) + getsizeof(limit) + getsizeof(A)
    print('memory consumed: {:.2e}MB'.format(memory/2**20))
    pass

def main():
    limit = 300
    loop(limit)
    pass

if __name__ == '__main__':
    print('running...')
    main()
generate and not keep: gnk.py
import time
from memory_profiler import profile
from random import sample
from sys import getsizeof

@profile
def loop(limit):
    t = time.time()
    i = 0
    while True:
        i += 1
        duration = time.time() - t
        a = tuple(sorted(sample(range(200), 100)))
        if not i % int(1e4):
            print('step {:.2e}...'.format(i))
        if duration > limit:
            print('done')
            break
    memory = getsizeof(t) + getsizeof(i) + getsizeof(duration) + \
        getsizeof(a) + getsizeof(limit)
    print('memory consumed: {:.2e}MB'.format(memory/2**20))
    pass

def main():
    limit = 300
    loop(limit)
    pass

if __name__ == '__main__':
    print('running...')
    main()
use "mprof" (needs module memory_profiler) in cmd/shell to check memory usage
mprof run my_file.py
mprof plot
result of gk.py
memory consumed: 4.00e+00MB
Filename: gk.py
Line # Mem usage Increment Line Contents
================================================
12 32.9 MiB 32.9 MiB @profile
13 def loop(limit):
14 32.9 MiB 0.0 MiB t = time.time()
15 32.9 MiB 0.0 MiB i = 0
16 32.9 MiB 0.0 MiB A = set()
17 32.9 MiB 0.0 MiB while True:
18 115.8 MiB 0.0 MiB i += 1
19 115.8 MiB 0.0 MiB duration = time.time() - t
20 115.8 MiB 0.3 MiB a = tuple(sorted(sample(range(200), 100)))
21 115.8 MiB 2.0 MiB A.add(a)
22 115.8 MiB 0.0 MiB if not i % int(1e4):
23 111.8 MiB 0.0 MiB print('step {:.2e}...'.format(i))
24 115.8 MiB 0.0 MiB if duration > limit:
25 115.8 MiB 0.0 MiB print('done')
26 115.8 MiB 0.0 MiB break
27 # method 1: delete the variables
28 # del duration, a
29 # method 2: use gc
30 # gc.collect()
31 memory = getsizeof(t) + getsizeof(i) + getsizeof(duration) + \
32 115.8 MiB 0.0 MiB getsizeof(a) + getsizeof(limit) + getsizeof(A)
33 115.8 MiB 0.0 MiB print('memory consumed: {:.2e}MB'.format(memory/2**20))
34 115.8 MiB 0.0 MiB pass
result of gnk.py
memory consumed: 9.08e-04MB
Filename: gnk.py
Line # Mem usage Increment Line Contents
================================================
11 33.0 MiB 33.0 MiB @profile
12 def loop(limit):
13 33.0 MiB 0.0 MiB t = time.time()
14 33.0 MiB 0.0 MiB i = 0
15 33.0 MiB 0.0 MiB while True:
16 33.0 MiB 0.0 MiB i += 1
17 33.0 MiB 0.0 MiB duration = time.time() - t
18 33.0 MiB 0.1 MiB a = tuple(sorted(sample(range(200), 100)))
19 33.0 MiB 0.0 MiB if not i % int(1e4):
20 33.0 MiB 0.0 MiB print('step {:.2e}...'.format(i))
21 33.0 MiB 0.0 MiB if duration > limit:
22 33.0 MiB 0.0 MiB print('done')
23 33.0 MiB 0.0 MiB break
24 memory = getsizeof(t) + getsizeof(i) + getsizeof(duration) + \
25 33.0 MiB 0.0 MiB getsizeof(a) + getsizeof(limit)
26 33.0 MiB 0.0 MiB print('memory consumed: {:.2e}MB'.format(memory/2**20))
27 33.0 MiB 0.0 MiB pass
I have two problems:
Both programs consumed more memory than their variables occupied: gk.py consumed 115.8 MiB while its variables occupied 4.00 MB, and gnk.py consumed 33.0 MiB while its variables occupied 9.08e-04 MB. Why do the programs consume more memory than the corresponding variables occupy?
The memory gk.py consumes increases linearly with time, while the memory gnk.py consumes remains constant. Why does this happen?
Any help would be appreciated.
Given that the size of the set constantly increases, there will come a time when it eventually consumes all memory.
An estimate (from my computer):
10 seconds of code running ~ 5e4 tuples saved to the set
300 seconds of code running ~ 1.5e6 tuples saved to the set
1 tuple = 100 integers ~ 400 bytes
Total: 1.5e6 * 400 bytes = 6e8 bytes = 600 MB filled in 300 s
Note also that getsizeof(A) reports only the set object itself (its hash table of pointers), not the tuples it references, which is why the script's own 4.00 MB estimate is far below the real footprint.
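To see how shallow sys.getsizeof is, compare it with a sum over the contained tuples; this sketch uses a tiny stand-in set rather than the original data:

import sys

A = {tuple(range(i, i + 100)) for i in range(3)}   # small stand-in for the real set
shallow = sys.getsizeof(A)                         # the set's own hash table of pointers only
deep = shallow + sum(sys.getsizeof(t) for t in A)  # plus the tuples it references
print(shallow, deep)  # deep is several times larger; the small ints themselves are shared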
I am using a Python memory profiler, putting @profile on top of every function to analyze its memory consumption. But every time I refresh the same page, the memory usage of the function increases, and I don't know why.
I tried using the Python garbage collector, but that had no impact. I am pasting an example here.
Line # Mem usage Increment Line Contents
================================================
27 83.2 MiB 83.2 MiB @login_required
28 @profile
29 def app_user_detail(request, slug=None):
30 83.3 MiB 0.1 MiB university_obj = Universities.objects.using('cms').filter(deleted=0, status=1, verified=1)
31 83.3 MiB 0.0 MiB ids = [4, 5]
32 83.3 MiB 0.0 MiB master_user_types = MasterUserTypes.objects.using("cms").filter(~Q(id__in=ids)).all()
33 83.3 MiB 0.0 MiB gc.isenabled()
34 83.3 MiB 0.0 MiB gc.collect()
35 83.3 MiB 0.0 MiB return render(request, 'templates/news_managment/news_dashboard_detail.html',
36 89.0 MiB 5.7 MiB {'slug': slug, 'university_obj': university_obj, 'master_user_types': master_user_types})
Suppose I have 89.0 MiB right now for this function; when I refresh the page, the size will increase. I am running the Django project on localhost.
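One hypothesis worth testing (the post doesn't confirm it): QuerySets are lazy, so the 5.7 MiB shows up on the render() line because that is where the template first iterates them. Forcing evaluation earlier, as in the hypothetical variation below, moves the allocation to the line that creates the data and makes the growth easier to attribute:

# hypothetical variation of the view above: force QuerySet evaluation up front
university_obj = list(Universities.objects.using('cms').filter(deleted=0, status=1, verified=1))
master_user_types = list(MasterUserTypes.objects.using('cms').filter(~Q(id__in=[4, 5])))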
While checking Jake VanderPlas's "Python Data Science Handbook", I was recreating the usage examples of various debugging and profiling tools. He provides an example demonstrating %mprun with the following function:
def sum_of_lists(N):
    total = 0
    for i in range(5):
        L = [j ^ (j >> i) for j in range(N)]
        total += sum(L)
        del L
    return total
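For context, %mprun can only profile functions defined in a file on disk, so the notebook flow looks roughly like this (mprun_demo.py is the file name the book uses; the magics are standard memory_profiler usage):

%load_ext memory_profiler
# sum_of_lists saved to mprun_demo.py first, e.g. with the %%file magic
from mprun_demo import sum_of_lists
%mprun -f sum_of_lists sum_of_lists(1000000)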
I proceeded to execute it in a Jupyter notebook, and got the following output:
Line # Mem usage Increment Line Contents
================================================
1 81.3 MiB 81.3 MiB def sum_of_lists(N):
2 81.3 MiB 0.0 MiB total = 0
3 81.3 MiB 0.0 MiB for i in range(5):
4 113.2 MiB -51106533.7 MiB L = [j ^ (j >> i) for j in range(N)]
5 119.1 MiB 23.5 MiB total += sum(L)
6 81.3 MiB -158.8 MiB del L
7 81.3 MiB 0.0 MiB return total
... which immediately struck me as odd. According to the book, I should have gotten a 25.4 MiB increase on line 4 and a corresponding negative increment on line 6. Instead, I got a massive negative increment that doesn't line up at all with what I expected; going by line 6, line 4 should have shown roughly a 158.8 MiB increment.
On the other hand, Mem usage paints a more sensible picture (113.2 - 81.3 = 31.9 MiB increase). So I'm left with a weird, giant negative increment and two measured changes in memory usage that don't agree with each other. What is going on, then?
Just to check if there's something truly bizarre going on with my interpreter/profiler, I went ahead and replicated the example given in this answer, and got this output:
Line # Mem usage Increment Line Contents
================================================
2 86.5 MiB 86.5 MiB def my_func():
3 94.1 MiB 7.6 MiB a = [1] * (10 ** 6)
4 246.7 MiB 152.6 MiB b = [2] * (2 * 10 ** 7)
5 94.1 MiB -152.6 MiB del b
6 94.1 MiB 0.0 MiB return a
Nothing wrong there, I think. What could be going on with the previous example?