What is the max value of pygame.time.get_ticks()? - python

I'm wondering: if I use pygame.time.get_ticks(), can I eventually get an overflow?
I wrote a game whose score depends on how long the player survives before dying, so the score can become quite a large number, especially measured in milliseconds. What's the upper limit pygame can handle? I know Python integers are limited only by my computer's memory, but that seems inefficient even by Python's own standards.
What are the alternatives if I don't want this kind of memory leak? Or is it a problem at all?

This is 100000% not an issue: Python will never raise an OverflowError for int arithmetic.
get_ticks returns a Python int. Even if it were a plain 4-byte value, as on a 32-bit system, it could hold 2^32 distinct values, and 2^32 milliseconds is about 50 days, so you are not going to run into any issues whatsoever. And if a game somehow did run longer than that, Python grows its integers automatically; there is no fixed limit. Even a simple 8-byte value (2^64) can count milliseconds for about 584.5 million years.
See also How does Python manage int and long? and the OverflowError doc.
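As a quick sanity check (plain Python arithmetic, nothing pygame-specific), you can work out how long counters of various widths would last at one tick per millisecond:

MS_PER_DAY = 1000 * 60 * 60 * 24
MS_PER_YEAR = MS_PER_DAY * 365.25

print(2**32 / MS_PER_DAY)    # ~49.7 days before a 32-bit counter wraps
print(2**64 / MS_PER_YEAR)   # ~584.5 million years for a 64-bit counter

ticks = 2**64 + 1            # Python ints never overflow, they just grow
print(ticks)                 # 18446744073709551617, no OverflowError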

Related

Python's pow function : How to integrate pow function into assembly language

I am designing an operating system, written in NASM. The purpose is to run Fermat's primality test on a low-level system, in protected mode, without multitasking. I use DOS-level system calls through DPMI. The code is standard, and I do not think it is a good idea to blow up this question with long listings.
The question is about Python's pow(base, exponent, modulus) function. I will obviously be calculating very long numbers, on the order of 10^100000000 (example: 2**332200000 - 1). The code gets the input from the user or reads it from a file. It takes approx. 40 MB to cache an integer of this size in a file or in memory.
This means I just allocate 40 MB of memory in protected mode.
Fermat's little theorem works as you know:
if p is prime,
a is an integer, and gcd(a, p) = 1, then
a**(p-1) mod p = 1
In Python this is calculated like a charm, with no extra effort, but extraordinary integers like 2**332200000 - 1 are slow, so I decided to make my own operating-system shell that fires up when my computer boots. The reason is to get the most out of my modest computer system, without any system calls slowing down my calculations. I have the following questions:
Is there a website where I can observe and study the assembly code behind Python's power function?
Either way, can you give me a hint on how to do this effectively in assembly, short and fast?
The idea is very basic and brief:
A 4-byte integer will not work in assembly, so I decided to read the long hex integer from a file into allocated memory (40 MB). When I do calculations with this very long integer, for example multiplying by 2, I shift each 4-byte word into a second free area of memory; if a carry remains, it is added into the next 4-byte calculation, and so on. So it is possible to handle these long integers in memory. Everything is designed and ready, but the trick to making it work in assembly is still in the research phase. Can you help or point me in the right direction?
Again: how do I calculate with very, very long numbers in assembly, and how do I build a Python-like power function that takes an exponent and a modulus? What would that look like in code form?
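For reference, the core algorithm behind Python's three-argument pow() is binary ("square-and-multiply") modular exponentiation. CPython's actual implementation (in Objects/longobject.c) is more elaborate, but this minimal Python sketch shows the structure you would be translating to assembly:

def pow_mod(base, exponent, modulus):
    # Square-and-multiply: walk the exponent bit by bit, reducing
    # modulo `modulus` at every step so intermediate values never
    # grow beyond modulus**2.
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # low bit of exponent set?
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next bit
        exponent >>= 1
    return result

# Fermat check with witness a=2: composite 561 (a Carmichael number)
# still passes, which is why the test is only probabilistic.
print(pow_mod(2, 560, 561))                 # 1
assert pow_mod(2, 560, 561) == pow(2, 560, 561)

The key point for an assembly port is that you never exponentiate directly: every intermediate value stays below modulus**2, so your 40 MB buffers only ever need to hold numbers about twice the length of the modulus.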

Why does my Python freeze when I do an overflow calculation?

I'm a MATLAB user trying to understand Python so sorry if this is obvious.
If I say
print(9**9)
I get:
387420489
Great.
If I say print(9**9**9)
Python just sits there indefinitely and freezes (I use Spyder version 4). Ctrl-C doesn't stop it.
Why does it not just immediately return Inf? Is this expected behavior?
When doing integer calculations, Python is not limited to machine-specific types such as int32, so a boundary like 2147483647 does not mean much to it. Instead, it uses a "big integer" representation which can, in principle, express any large number, provided there is enough memory for it. Faced with a computation such as 9**9**9, Python tries to perform it exactly, producing the exact result, however big it may be. For this particular calculation that simply takes a lot of time (and memory; presumably Python keeps allocating more internally as needed).
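You can see why without computing the number. A quick sketch using logarithms to count the digits of 9**9**9 (note that ** is right-associative, so this is 9**(9**9)):

import math

exponent = 9**9                        # 387420489
digits = int(exponent * math.log10(9)) + 1
print(f"{digits:,}")                   # 369,693,100 digits

# A float calculation, by contrast, fails fast, because floats
# ARE fixed-size: 9.0 ** 9 ** 9 raises OverflowError immediately.

A roughly 370-million-digit integer is why the computation grinds on: Python has to materialize every one of those digits exactly.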
The number 9**9**9 is simply very big to calculate. You can wait until it returns a result, but that can take a very long time.
Why does my Python freeze when I do an overflow calculation?
Because no overflow occurred and Python hasn't given up. Python will extend the precision until either the calculation succeeds or the machine runs out of memory.

At what point am I using too much memory on a Mac?

I've tried really hard to figure out why my Python process is using 8 GB of memory. I've even used gc.get_objects() and measured the size of each object, and only one of them was larger than 10 MB. Still, all of the objects, about 100,000 of them, added up to 5.5 GB. On the other hand, my computer is working fine, and the program is running at a reasonable speed. So is the fact that I'm using so much memory cause for concern?
As @bnaecker said, this doesn't have a simple (i.e., yes/no) answer. It's only a problem if the combined RSS (resident set size) of all running processes exceeds the available memory, causing excessive demand paging.
You didn't say how you calculated the size of each object. Hopefully it was with sys.getsizeof(), which should accurately include the overhead associated with each object. If you used some other method (such as calling the __sizeof__() method directly), your answer will be far lower than the correct value. However, even sys.getsizeof() won't account for space wasted by memory alignment. For example, consider this experiment (using Python 3.6 on macOS):
In [25]: x='x'*8193
In [26]: sys.getsizeof(x)
Out[26]: 8242
In [28]: 8242/4
Out[28]: 2060.5
Notice that last value: it implies the object is using 2060 and a half words of memory, which can't be right, since every allocation consumes a whole number of words. In fact, it looks to me like sys.getsizeof() does not correctly account for the word alignment and padding of either the underlying object or the data structure that describes it, which means the value is smaller than the amount of memory the object actually uses. Multiplied across 100,000 objects, that could represent a substantial amount of memory.
Also, many memory allocators round large allocations up to a page size (typically a multiple of 4 KiB), producing "wasted" space that is probably not included in the sys.getsizeof() return value.
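If you want to cross-check your per-object tally against what the OS actually charges the process, here is a rough sketch using the standard resource module (note the platform quirk: ru_maxrss is reported in bytes on macOS but in kilobytes on Linux):

import gc
import sys
import resource

# Sum sys.getsizeof() over every object the GC tracks -- an
# underestimate, for the alignment/padding reasons above.
tracked = sum(sys.getsizeof(obj) for obj in gc.get_objects())

# Peak resident set size as the kernel sees it.
usage = resource.getrusage(resource.RUSAGE_SELF)
rss = usage.ru_maxrss                  # bytes on macOS, KB on Linux

print(f"getsizeof total: {tracked / 2**20:.1f} MiB")
print(f"peak RSS:        {rss / 2**20:.1f} MiB (macOS units)")

The gap between the two numbers is roughly the alignment, padding, and allocator overhead that sys.getsizeof() can't see.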

Numpy octuple precision floats and 128 bit ints. Why and how?

This is mostly a question out of curiosity. I noticed that the numpy test suite contains tests for 128 bit integers, and the numerictypes module refers to int128, float256 (octuple precision?), and other types that don't seem to map to numpy dtypes on my machine.
My machine is 64-bit, yet I can use quadruple-precision 128-bit floats (but not really). I suppose that if it's possible to emulate quadruple floats in software, one could in theory also emulate octuple floats and 128-bit ints. On the other hand, until just now I had never heard of either 128-bit ints or octuple-precision floats. Why does numpy's numerictypes module refer to 128-bit ints and 256-bit floats if there are no corresponding dtypes, and how can I use those?
This is a very interesting question, and there are probably reasons related to Python, to computing in general, and/or to hardware. Without trying to give a full answer, here is the direction I would look in...
First, note that the types are defined by the language and can differ from your hardware architecture. For example, you could have doubles even on an 8-bit processor; of course, any arithmetic then involves multiple CPU instructions, making the computation much slower. Still, if your application requires it, it might be worth it, or even required (better late than wrong, especially if, say, you are running a simulation of bridge stability...). So where is 128-bit precision required? Here's the Wikipedia article on it...
One more interesting detail: when we say a computer is 64-bit, that does not fully describe the hardware. There are many pieces that can each be (and at times have been) a different width: the computational registers in the CPU, the memory-addressing scheme and memory registers, and the various buses, most importantly the bus from the CPU to memory.
- The ALU (arithmetic and logic unit) has the registers that do the calculations. Your machine's are 64-bit (I'm not sure whether that also means it could do two 32-bit calculations in a similar time). This is clearly the most relevant quantity for this discussion. A long time ago, you could go out and buy a co-processor to speed up higher-precision calculations...
- The registers that hold memory addresses limit how much memory the computer can see (directly); that is why computers with 32-bit memory registers can only see 2^32 bytes (approx. 4 GB). Notice that for 16 bits this becomes about 65 KB, which is very low. The OS can find ways around this limit, but not for a single program, so no program on a 32-bit computer can normally have more than 4 GB of memory.
- Notice that those limits are in bytes, not bits, because when addressing and loading from memory we load bytes. In fact, loading one byte (8 bits) or eight bytes (64 bits, the bus width on your computer) takes the same time: I ask for an address, and then get all the bits at once through the bus.
In a given architecture, these quantities need not all be the same number of bits.
Python itself is amazingly flexible and can handle integers much bigger than the internal CPU representation (e.g. 64-bit); NumPy can store such values in arrays of dtype=object. This dynamic type stores the number as an array of digits and can extend its memory block, which is why you can have an integer with 500 digits. Such a dynamic type is called a bignum. In older Python versions it was the long type; in Python 3.0+ there is only one integer type, int, which behaves like the old long and supports an almost arbitrary number of digits (i.e., it is a bignum).
If you specify a data type (int32, for example), then you specify the bit length and the bit format, i.e. which bits in memory stand for what. Example:
import numpy as np

dt = np.dtype(np.int32)       # 32-bit integer
dt = np.dtype(np.complex128)  # 128-bit complex float (two 64-bit parts)
Look in: https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html
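To see which of these types your particular build actually provides, you can probe for them at runtime; a small sketch (the sized names below only exist on platforms whose C compiler supports those widths):

import numpy as np

# Sized aliases exist only where the platform provides them. Note
# that np.float128 on typical x86-64 Linux/macOS builds is really
# 80-bit x87 extended precision padded to 16 bytes, not a true
# quadruple- (let alone octuple-) precision IEEE float.
for name in ("float96", "float128", "float256", "int128", "complex256"):
    print(name, "->", getattr(np, name, "not available on this build"))

# np.longdouble is the portable spelling of "whatever extended
# precision the C compiler offers on this platform":
print(np.finfo(np.longdouble))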

Storing and replaying binary network data with python

I have a Python application which sends 556 bytes of data across the network at a rate of 50 Hz. The binary data is generated using struct.pack() which returns a string, which is subsequently written to a UDP socket.
As well as transmitting this data, I would like to save this data to file as space-efficiently as possible, including a timestamp for each message, so that I can replay the data at a later time. What would be the best way of doing this using Python?
I have mulled over using a logging object, but have not yet found out whether Python can read in log files so that I can replay the data. Also, I don't know whether the logging object can handle binary data.
Any tips would be much appreciated! Although Wireshark would be an option, I'd rather store the data using my application so that I can automatically start new data files each time I run the program.
Python's logging system is intended to process human-readable strings, and it's intended to be easy to enable or disable depending on whether it's you (the developer) or someone else running your program. Don't use it for something that your application always needs to output.
The simplest way to store the data is to just write the same 556-byte string that you send over the socket out to a file. If you want to have timestamps, you could precede each 556-byte message with the time of sending, converted to an integer, and packed into 4 or 8 bytes using struct.pack(). The exact method would depend on your specific requirements, e.g. how precise you need the time to be, and whether you need absolute time or just relative to some reference point.
One possibility for a compact timestamp for replay purposes: take the time as a floating-point number of seconds since the epoch with time.time(); multiply by 50, since you said you're sending 50 times a second (the resulting unit, one fiftieth of a second, is sometimes called a "jiffy"); truncate to int; subtract the similar int count of jiffies since the epoch that you measured at the start of your program; and struct.pack the result into an unsigned int with however many bytes you need for the intended duration. For example, with 2 bytes per timestamp you could represent runs of about 1300 seconds (roughly 21 minutes), but for longer runs you'd need 4 bytes (3 bytes is just too unwieldy IMHO;-).
Not all operating systems return decent precision from time.time(), so you may need more devious means on such unfortunately limited OSs (that's VERY OS-dependent, of course). What OSs do you need to support...?
Anyway: for even more compactness, use a somewhat higher multiplier than 50 (say 10000) for more accuracy, and store, each time, the difference with respect to the previous timestamp. Since that difference should stay close to a jiffy (if I understand your spec correctly), it should be about 200 of these ten-thousandths of a second, so it fits in a single unsigned byte (and puts no limit on the duration of the runs you're storing for future replay). This depends even more on accurate returns from time.time(), of course.
If your 556-byte binary data is highly compressible, it will be worth your while to use gzip to store the stream of timestamp-then-data in compressed form; this is best assessed empirically on your actual data, though.
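A minimal sketch of the record format described above, assuming a 2-byte jiffy delta per record (log_path and messages are hypothetical names, not from your code):

import struct
import time

MSG_SIZE = 556
HDR = struct.Struct("<H")              # 2-byte unsigned jiffy delta

def record(log_path, messages, rate_hz=50):
    # Each record is a 2-byte delta (in 1/rate_hz units) since the
    # previous message, followed by the raw 556-byte payload.
    prev = int(time.time() * rate_hz)
    with open(log_path, "wb") as f:
        for payload in messages:       # payload: the struct.pack()ed bytes
            now = int(time.time() * rate_hz)
            f.write(HDR.pack(now - prev) + payload)
            prev = now

def replay(log_path, rate_hz=50):
    # Yield payloads back, sleeping each stored delta to reproduce
    # the original timing.
    with open(log_path, "rb") as f:
        while True:
            hdr = f.read(HDR.size)
            if len(hdr) < HDR.size:
                break
            (delta,) = HDR.unpack(hdr)
            time.sleep(delta / rate_hz)
            yield f.read(MSG_SIZE)

If the payloads do compress well, swapping open() for gzip.open(log_path, "wb") is the one-line change.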
