Why is 0.5 taking less memory than 1 in Python?

I think I understand how Python stores variables and why certain variables are larger than others. I also googled floating point, but that couldn't answer my question:
Why does a float, e.g. 0.5, take only 24 bytes of memory while an integer like 1 takes 28? What confuses me even more is that 0 takes 24 bytes too (that part I think I understand: it stores just the object with "no" integer). But if Python adds 4 bytes whenever the number can't be stored in less, how can it store a value like 0.5, whose binary representation is longer, in the same space as 0?
I used sys.getsizeof() to get the sizes of the objects, in Python 3.9.1 (64-bit).
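For example:
>>> import sys
>>> sys.getsizeof(0.5)
24
>>> sys.getsizeof(0)
24
>>> sys.getsizeof(1)
28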

Related

How does Python manage size of integers and floats? [duplicate]

This question already has an answer here: "sys.getsizeof(int)" returns an unreasonably large value?
I am running a 64-bit machine.
If I type getsizeof(int()), I get 24. What are the elements or objects that use these 24 bytes?
Here are some more confusing results:
getsizeof(0) returns 24.
getsizeof(1) returns 28. Why does 1 take 4 more bytes than 0?
And getsizeof(1.5) returns 24. Why does 1.5, which is a float, take less space than the integer 1?
I'll talk just about ints for now, and look at floats at the end.
In Python, unlike C and many other languages, int is not just a datatype storing a number. It is a full object, with extra information about the object (more detail here).
This extra information takes up lots of space, which is why the objects can seem unusually large. Specifically, it takes up 3 lots of 8 bytes (3*8=24 bytes). These are a reference count, a pointer to the type, and a size.
Python stores integers using a specific number of bytes depending on how large they are. Specifically:
0 <= n < 2**0: requires 24 bytes
2**0 <= n < 2**30: requires 28 bytes
2**30 <= n < 2**60: requires 32 bytes
In general, for every increase of 30 powers of 2 (for want of better terminology!), 4 extra bytes are required.
The same pattern holds for negative numbers, just going in the opposite direction.
These specific values are the values on my computer, but will vary depending on the environment you're running Python on. However, the general patterns are likely to be the same.
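You can check the pattern directly with sys.getsizeof; these are the values I get on a 64-bit build (yours may differ):
>>> import sys
>>> [sys.getsizeof(n) for n in (0, 1, 2**30 - 1, 2**30, 2**60 - 1, 2**60)]
[24, 28, 28, 32, 32, 36]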
I believe (as explained a little here) that the reason zero alone uses less space is that the only int which can be represented using just 24 bytes is zero, so there is no need to additionally store the actual value, reducing the size. It's possible I've misunderstood this specific case so please correct me below if so!
However, floats are stored differently. Their values are simply stored using 64 bits (i.e. a double), which means there are 8 bytes representing the value. Since this never varies, there is no need to store the size as we do with integers, meaning there are only two 8 byte values to store alongside the specific float value. This means the total size is two lots of 8 bytes for the object data and one lot of 8 bytes for the actual value, or 24 bytes in total.
It is this property of not needing to store the value's size that frees up 8 bytes, which is why 1.5 requires less space than 1.
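You can see both pieces on a 64-bit build: the value itself is a fixed 8-byte double, and the whole object comes to 24 bytes:
>>> import sys, struct
>>> len(struct.pack('d', 1.5))  # the value is a fixed-width C double
8
>>> sys.getsizeof(1.5)  # 16 bytes of object data + 8-byte value
24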

What takes more memory, an 8 char string or 8 digit int?

I'm developing a program that will deal with approx. 90 billion records, so I need to manage memory carefully. Which is larger in memory: an 8-char string or an 8-digit int?
Details:
- Python 3.7.4
- 64-bit
Edit 1: following the advice of user8080blablabla I got:
>>> sys.getsizeof(99999999)
28
>>> sys.getsizeof("99999999")
57
Seriously? An 8-char string is 57 bytes long?!
An int will generally take less memory than its representation as a string, because it is more compact. However, because Python int values are objects, they still take quite a lot of space each compared to primitive values in other languages: the integer object 1 takes up 28 bytes of memory on my machine.
>>> import sys
>>> sys.getsizeof(1)
28
If minimising memory use is your priority, and there is a maximum range the integers can be in, consider using the array module. It can store numeric data (or Unicode characters) in an array, in a primitive data type of your choice, so that each value isn't an object taking up 28+ bytes.
>>> from array import array
>>> arr = array('I') # unsigned int in C
>>> arr.extend(range(10000))
>>> arr.itemsize
4
>>> sys.getsizeof(arr)
40404
The actual number of bytes used per item is dependent on the machine architecture. On my machine, each number takes 4 bytes; there are 404 bytes of overhead for an array of length 10,000. Check arr.itemsize on your machine to see if you need a different primitive type; fewer than 4 bytes is not enough for an 8-digit number.
That said, you should not be trying to fit 90 billion numbers in memory, at 4 bytes each; this would take 360GB of memory. Look for a solution which doesn't require holding every record in memory at once.
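As a minimal sketch of that kind of streaming approach (the file name and record format here are made up for illustration):
def records(path):
    # Yield one parsed record at a time instead of holding all of them.
    with open(path) as f:
        for line in f:
            yield int(line)

total = sum(records('records.txt'))  # aggregate without storing every record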
You ought to remember that strings are Unicode in Python 3, so storing a character can take up to 4 bytes (though plain ASCII text is stored at 1 byte per character), and every str also carries a sizeable fixed overhead, which is why you see such a large discrepancy between int and str (interesting read on the topic).
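You can see that fixed per-object overhead directly; on a 64-bit Python 3.7, an empty str already costs 49 bytes, and each ASCII character then adds only one byte:
>>> import sys
>>> sys.getsizeof("")
49
>>> sys.getsizeof("99999999")  # 49 bytes of overhead + 8 one-byte characters
57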
If you are worried about memory allocation I would instead recommend using pandas to manage the backend for you when it comes to manipulating large datasets.

How does Python convert data types?

So, as the title says, how does converting data types work? For example, in Python, when you take an integer or floating-point number and turn it into a string, what's going on behind the scenes to do this kind of conversion? My hypothesis was that it reads the actual bytes, then goes into memory and makes a new variable that's a string.
I am sharing an idea, and then you can extend it further:
There is no fixed limit to how large an integer value can be; it depends only on the amount of memory your system has, so an integer can be as long as you need it to be.
Integer literals can also carry a base prefix (a string prepended to the digits):
Code: print(0x10)
Output: 16
Code: print(0b10)
Output: 2
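To connect this back to the question: str() and int() don't reinterpret the original bytes in place; they build a brand-new object of the target type from the value. A quick illustration:
>>> n = 255
>>> s = str(n)   # builds a new str object; n is untouched
>>> s, int(s)
('255', 255)
>>> hex(n), int('0xff', 16)
('0xff', 255)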

A RAM error with a big array

I need to get the numbers of one line at random, put each line into another array, and then get the numbers of one column.
I have a big file, more than 400 MB. It contains 13496*13496 numbers: 13496 rows and 13496 columns. I want to read them into an array.
This is my code:
_L1 = [[0 for col in range(13496)] for row in range(13496)]
_L1file = open('distanceCMD.function.txt')
i = 0  # initialise the row counter
while i < 13496:
    print "i=" + str(i)
    _strlf = _L1file.readline()
    _strlf = _strlf.split('\t')
    _strlf = _strlf[:-1]  # drop the empty field after the trailing tab
    _L1[i] = _strlf
    i += 1
_L1file.close()
And this is my error message:
MemoryError:
File "D:\research\space-function\ART3.py", line 30, in <module>
_strlf = _strlf.split('\t')
You might want to approach your problem in another way: process the file line by line. I don't see a need to store the whole big file in an array. Otherwise, you might want to tell us what you are actually trying to do.
for line in open("400MB_file"):
    # do something with line.
Or
f = open("file")
for linenum, line in enumerate(f):
    if linenum + 1 in [2, 3, 10]:
        print "there are ", len(line.split()), " columns"  # assuming you want to split on spaces
        print "100th column value is: ", line.split()[99]
    if linenum + 1 > 10:
        break  # break if you want to stop after the 10th line
f.close()
This is a simple case of your program demanding more memory than is available to the computer. An array of 13496x13496 elements requires 182,142,016 'cells', where a cell is a minimum of one byte (if storing chars) and potentially several bytes (if storing floating-point numerics, for example). I'm not even taking your particular runtime's array metadata into account, though this would typically be a tiny overhead on a simple array.
Assuming each array element is just a single byte, your computer needs around 180 MB of RAM to hold it in memory in its entirety. Trying to process it all at once could be impractical.
You need to think about the problem a different way; as has already been mentioned, a line-by-line approach might be a better option. Or perhaps processing the grid in smaller units, perhaps 10x10 or 100x100, and aggregating the results. Or maybe the problem itself can be expressed in a different form, which avoids the need to process the entire dataset altogether...?
If you give us a little more detail on the nature of the data and the objective, perhaps someone will have an idea to make the task more manageable.
Short answer: the Python object overhead is killing you. In Python 2.x on a 64-bit machine, a list of strings consumes 48 bytes per list entry even before accounting for the content of the strings. That's over 8.7 GB of overhead for the size of array you describe.
On a 32-bit machine it'll be a bit better: only 28 bytes per list entry.
Longer explanation: you should be aware that Python objects themselves can be quite large: even simple objects like ints, floats and strings. In your code you're ending up with a list of lists of strings. On my (64-bit) machine, even an empty string object takes up 40 bytes, and to that you need to add 8 bytes for the list pointer that points to this string object in memory. So that's already 48 bytes per entry, or around 8.7 GB. Given that Python allocates memory in multiples of 8 bytes at a time, and that your strings are almost certainly non-empty, you're actually looking at 56 or 64 bytes (I don't know how long your strings are) per entry.
Possible solutions:
(1) You might do (a little) better by converting your entries from strings to ints or floats as appropriate.
(2) You'd do much better by either using Python's array type (not the same as list!) or by using numpy: then your ints or floats would only take 4 or 8 bytes each.
Since Python 2.6, you can get basic information about object sizes with the sys.getsizeof function. Note that if you apply it to a list (or other container) then the returned size doesn't include the size of the contained list objects; only of the structure used to hold those objects. Here are some values on my machine.
>>> import sys
>>> sys.getsizeof("")
40
>>> sys.getsizeof(5.0)
24
>>> sys.getsizeof(5)
24
>>> sys.getsizeof([])
72
>>> sys.getsizeof(range(10)) # 72 + 8 bytes for each pointer
152
MemoryError exception:
Raised when an operation runs out of memory but the situation may still be rescued (by deleting some objects). The associated value is a string indicating what kind of (internal) operation ran out of memory. Note that because of the underlying memory management architecture (C's malloc() function), the interpreter may not always be able to completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a run-away program was the cause.
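As a minimal sketch of the "rescuable" case the docs describe (the allocation size here is deliberately absurd):
try:
    huge = [0] * (10**12)  # this allocation will almost certainly fail
except MemoryError:
    huge = None  # recover by giving up on the object and carrying on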
It seems that, at least in your case, reading the entire file into memory is not a doable option.
Replace this:
_strlf = _strlf[:-1]
with this:
_strlf = [float(val) for val in _strlf[:-1]]
You are making a big array of strings. I can guarantee that the string "123.00123214213" takes a lot less memory when you convert it to floating point.
You might want to include some handling for null values.
You can also go to numpy's array type, but your problem may be too small to bother.
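If you do go the numpy route, a rough sketch for this particular file might look like this (assuming the file really is tab-separated numbers; loadtxt's default whitespace splitting handles the trailing tab):
import numpy as np

# 13496 * 13496 float64 values take about 1.4 GB, versus roughly
# 8.7 GB for the same grid held as a list of lists of strings.
data = np.loadtxt('distanceCMD.function.txt')
# data.shape should be (13496, 13496); data.nbytes gives the exact footprint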

Converting 2.5 byte comparisons to 3

I'm trying to convert a 2.5 program to 3.
Is there a way in python 3 to change a byte string, such as b'\x01\x02' to a python 2.5 style string, such as '\x01\x02', so that string and byte-by-byte comparisons work similarly to 2.5? I'm reading the string from a binary file.
I have a 2.5 program that reads bytes from a file, then compares or processes each byte or combination of bytes with specified constants. To run the program under 3, I'd like to avoid changing all my constants to bytes and byte strings ('\x01' to b'\x01'), then dealing with issues in 3 such as:
a = b'\x01'
b = b'\x02'
results in
(a+b)[0] != a
even though similar operations work in 2.5. I have to do (a+b)[0] == ord(a), while a+b == b'\x01\x02' works fine. (By the way, what do I do to (a+b)[0] so that it equals a?)
Unpacking structures is also an issue.
Am I missing something simple?
bytes is an immutable sequence of integers (each in the range 0 <= x < 256), therefore when you access (a+b)[0] you get back an integer, exactly the same one you'd get from a[0]. So when you compare the sequence a to the integer (a+b)[0], they're naturally different.
Using slice notation, however, you can get a sequence back:
>>> (a+b)[:1] == a # 1 == len(a) ;)
True
because slicing a bytes object returns another bytes object.
I would also advise running the 2to3 utility (it needs to be run with py2k) to convert some code automatically. It won't solve all your problems, but it'll help a lot.
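Putting the two workarounds side by side:
>>> a = b'\x01'
>>> b = b'\x02'
>>> (a + b)[0]  # indexing bytes gives an int in Python 3
1
>>> (a + b)[0] == ord(a)  # compare int with int
True
>>> (a + b)[:1] == a  # or slice, to stay in bytes
True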
