This is the Python script that I'm trying to run:
import random

n = 50000000000  # 50 billion
b = [0] * n
for x in range(0, n):
    b[x] = random.randint(1, 899999)
... But the output I'm getting is:
E:\python\> python sort.py
Traceback (most recent call last):
File "E:\python\sort.py", line 8, in <module>
b = [0]*n
MemoryError
So, what do I do now?
The problem is the size of the list you are generating (which is 50 billion, not 5).
An int object instance takes 24 bytes (sys.getsizeof(int(899999)), the upper limit of your random numbers), so that list would take 50,000,000,000 * 24 bytes, which is about 1.09 TB.
In other words, to create such a list you would need at least 1118 GB of RAM in your computer.
I don't know what your use case is, but you should consider a different approach to what you are trying to solve (maybe define a generator, or just don't store your numbers in memory and instead directly use the numbers in the for loop).
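For instance, a generator lets you consume the numbers one at a time without ever building the list. A minimal sketch (the loop will still take an extremely long time for 50 billion iterations, but it no longer needs terabytes of RAM):
import random

def random_numbers(count, low=1, high=899999):
    # Yield the numbers on demand instead of storing them in a list
    for _ in range(count):
        yield random.randint(low, high)

total = 0
for value in random_numbers(50_000_000_000):
    total += value  # replace with whatever per-number work you actually need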
Since other people have already answered your question, here's a quick tip for dealing with big numbers: you can use "_" to separate the digits of your numbers as you wish:
n = 50_000_000_000
is the same as
n = 50000000000
but the former is much easier on the eyes. (Underscores in numeric literals require Python 3.6+.)
One other possibility is to increase your computer's virtual memory. It helped me in my code: I had a maximum of 3000 MB of virtual memory, and when I increased it to 5000 MB the memory error was gone.
I have a large NumPy array nodes = np.arange(100_000_000) and I need to rearrange this array by:
Record and then remove the middle value in the array
Split the array into the left half and right half
Repeat Steps 1-2 for each half
Stop when all values are exhausted
So, for a smaller input example nodes = np.arange(10), the output would be:
[5 2 8 1 4 7 9 0 3 6]
This was accomplished by naively doing:
import numpy as np

def split(node, out):
    mid = len(node) // 2
    out.append(node[mid])
    return node[:mid], node[mid+1:]

def reorder(a):
    nodes = [a.tolist()]
    out = []
    while nodes:
        tmp = []
        for node in nodes:
            for n in split(node, out):
                if n:
                    tmp.append(n)
        nodes = tmp
    return np.array(out)

if __name__ == "__main__":
    nodes = np.arange(10)
    print(reorder(nodes))
However, this is way too slow for nodes = np.arange(100_000_000) and so I am looking for a much faster solution.
You can vectorize your function with Numpy by working on groups of slices.
Here is an implementation:
import numpy as np

# Similar to [e for tmp in zip(a, b) for e in tmp],
# but on NumPy arrays and much faster
def interleave(a, b):
    assert len(a) == len(b)
    return np.column_stack((a, b)).reshape(len(a) * 2)

# n is the length of the input range (len(a) in your example)
def fast_reorder(n):
    if n == 0:
        return np.empty(0, dtype=np.int32)

    startSlices = np.array([0], dtype=np.int32)
    endSlices = np.array([n], dtype=np.int32)
    allMidSlices = np.empty(n, dtype=np.int32)  # Similar to "out" in your implementation
    midInsertCount = 0  # Actual size of allMidSlices

    # Generate middle values as long as there are valid slices to split
    while midInsertCount < n:
        # Generate the new mid/left/right slices
        midSlices = (endSlices + startSlices) // 2

        # Computing the next slices is not needed for the last step
        if midInsertCount + len(midSlices) < n:
            # Generate the next slices (possibly including invalid ones)
            newStartSlices = interleave(startSlices, midSlices + 1)
            newEndSlices = interleave(midSlices, endSlices)

            # Discard invalid slices
            isValidSlices = newStartSlices < newEndSlices
            startSlices = newStartSlices[isValidSlices]
            endSlices = newEndSlices[isValidSlices]

        # Fast appending
        allMidSlices[midInsertCount:midInsertCount+len(midSlices)] = midSlices
        midInsertCount += len(midSlices)

    return allMidSlices[0:midInsertCount]
On my machine, this is 89 times faster than your scalar implementation with the input np.arange(100_000_000), dropping from 2min35 to 1.75s. It also consumes far less memory (roughly 3-4 times less). Note that if you want even faster code, then you probably need to use a native language like C or C++.
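As a quick sanity check (assuming both functions above are defined in the same module), the vectorized version reproduces the naive output:
import numpy as np

print(fast_reorder(10))        # [5 2 8 1 4 7 9 0 3 6]
print(reorder(np.arange(10)))  # naive version, same output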
Edit:
The question has been updated to use a much smaller input array, so I leave the text below for historical reasons. The original size was likely a typo, but we get accustomed to computers handling insanely large numbers, and when memory is involved they become a real problem.
There is already a numpy based solution submitted by someone else that I think fits the bill.
Your code requires an insane amount of RAM just to hold 100 billion 64-bit integers. Do you have 800 GB of RAM? You then convert the NumPy array to a list, which will be substantially larger than the array (each packed 64-bit int in the NumPy array becomes a much less memory-efficient Python int object, and the list also holds a pointer to each of those objects). Then you make a lot of slices of the list, which do not duplicate the data but do duplicate the pointers to it and use even more RAM. You also append all the result values to a list one value at a time. Lists are generally fast for appending, but at this extreme size it will not only be slow, the way lists grow is also likely to be extremely wasteful of RAM (I believe they over-allocate when they reach a certain fullness, so you end up reserving more RAM than you need and doing many allocations and likely copies).
What kind of machine are you running this on? There are ways to improve your code, but unless you're running it on a supercomputer I doubt you will ever finish that calculation. I only (only?) have 32 GB of RAM, and I'm not going to even try to create a 100-billion-element int64 NumPy array, as I don't want to use up SSD write life on a mass of virtual memory.
As for improving your code: stick to NumPy arrays and don't convert to a Python list, as that greatly increases the RAM you need. Preallocate a NumPy array to put the answer in. Then you need a new algorithm. Anything recursive, or recursive-like (i.e. a loop splitting the input), requires tracking a lot of state; your nodes list is going to be extraordinarily large and again use a lot of RAM. You could use a sentinel value such as len(a) to mark values already removed from the array and scan through the entire array each time to figure out what to do next, but that trades RAM for a tremendous amount of searching through a gigantic array. I feel like there is an algorithm that cuts numbers from each end, places them in the output, and only tracks the beginning and end, but I haven't figured it out, at least not yet.
I also think there is a simpler algorithm where you just track the number of splits you've done instead of keeping a giant list of slices in memory. Take the middle of the left half, then the middle of the right half, then count up one; when you take the middle of the left half's left half, you know you have to jump to the right half next, so you jump over to the original right half's left half, and so on. Based on the depth into the halves and the length of the input, you should be able to jump around without scanning or tracking all of those slices, though I haven't been able to dedicate much time to thinking this through; see the rough sketch below for the related idea of tracking index ranges instead of sub-lists.
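Here is a rough sketch of that direction, under the assumption that the immediate goal is just to avoid materialising Python sub-lists: walk the (start, end) index ranges breadth-first and write each midpoint straight into a preallocated array. It is still a Python-level loop, so it won't beat the vectorized answer, but memory stays proportional to n and it reproduces [5 2 8 1 4 7 9 0 3 6] for the small example:
from collections import deque
import numpy as np

def reorder_bfs(n):
    # Breadth-first walk over half-open (start, end) index ranges,
    # recording each midpoint into a preallocated output array.
    out = np.empty(n, dtype=np.int64)
    i = 0
    queue = deque([(0, n)])
    while queue:
        start, end = queue.popleft()
        mid = (start + end) // 2
        out[i] = mid
        i += 1
        if start < mid:
            queue.append((start, mid))
        if mid + 1 < end:
            queue.append((mid + 1, end))
    return out

print(reorder_bfs(10))  # [5 2 8 1 4 7 9 0 3 6]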
With a problem of this nature, if you really need to push the limits, you should consider using C/C++ so you can be as efficient as possible with RAM usage, and because you're doing an insane number of tiny operations, which doesn't map well to Python performance.
I came across this question where 8 queens should be placed on a chessboard such that none can kill each other. This is how I tried to solve it:
import itertools

def allAlive(position):
    qPosition=[]
    for i in range(8):
        qPosition.append(position[2*i:(2*i)+2])

    hDel=list(qPosition)    #Horizontal
    for i in range(8):
        a=hDel[0]
        del hDel[0]
        l=len(hDel)
        for j in range(l):
            if a[:1]==hDel[j][:1]:
                return False

    vDel=list(qPosition)    #Vertical
    for i in range(8):
        a=vDel[0]
        l=len(vDel)
        for j in range(l):
            if a[1:2]==vDel[j][1:2]:
                return False

    cDel=list(qPosition)    #Cross
    for i in range(8):
        a=cDel[0]
        l=len(cDel)
        for j in range(l):
            if abs(ord(a[:1])-ord(cDel[j][:1]))==1 and abs(int(a[1:2])-int(cDel[j][1:2]))==1:
                return False

    return True

chessPositions=['A1','A2','A3','A4','A5','A6','A7','A8','B1','B2','B3','B4','B5','B6','B7','B8','C1','C2','C3','C4','C5','C6','C7','C8','D1','D2','D3','D4','D5','D6','D7','D8','E1','E2','E3','E4','E5','E6','E7','E8','F1','F2','F3','F4','F5','F6','F7','F8','G1','G2','G3','G4','G5','G6','G7','G8','H1','H2','H3','H4','H5','H6','H7','H8']
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
for i in qPositions:
    if allAlive(i)==True:
        print(i)
Traceback (most recent call last):
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
MemoryError
I'm still a newbie. How can I overcome this error? Or is there a better way to solve this problem?
What you are trying to do is impossible ;)!
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
means that you will get a list of length 64 choose 8 = 4426165368, since len(chessPositions) = 64, which you cannot store in memory. Why not? Combining what I stated in the comments and what @augray said in his answer, the result of the above operation would be a list which would take
(64 choose 8) * 2 * 8 bytes ~ 66GB
of RAM, since it will have 64 choose 8 elements, each element will have 8 substrings like 'A1', and each such substring consists of 2 characters. One character takes 1 byte.
You have to find another way. I am not answering that part because that is your job. The n-queens problem is a classic constraint-satisfaction problem usually solved with backtracking. I suggest you google 'n queens problem python', look for an answer, and then try to understand the code and the technique behind it.
I did the searching for you; take a look at this video. As suggested by @Jean-François Fabre, use backtracking. Your job is now to watch the video once, twice, ... as many times as it takes until you understand the solution to the problem. Then open up your favourite editor (mine is Vi :D) and code it up!
This is one case where it's important to understand the "science" (or more accurately, math) part of computer science as much as it is important to understand the nuts and bolts of programming.
From the documentation for itertools.combinations, we see that the number of items returned is n! / r! / (n-r)!, where n is the length of the input collection (in your case the number of chess positions, 64) and r is the length of the subsequences you want returned (in your case 8). As @campovski has pointed out, this results in 4,426,165,368 combinations. Each returned subsequence will consist of 8*2 characters, each of which is one byte (not to mention the overhead of the other data structures needed to hold these and calculate the answer). In total, just counting the memory consumption of the resulting subsequences gives 4,426,165,368 * 2 * 8 = 70,818,645,888 bytes. Dividing this by 1024^3 gives the number of gigabytes held by these subsequences: about 66 GB.
I'm assuming you don't have that much memory :-). Calculating the answer to this question will require a well-thought-out algorithm, not just brute force. I recommend doing some research on the problem; Wikipedia looks like a good place to start.
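If you want to verify those numbers yourself, math.comb (Python 3.8+) makes the check a one-liner:
import math

print(math.comb(64, 8))                    # 4426165368 subsequences
print(math.comb(64, 8) * 8 * 2 / 1024**3)  # ~66 GB just for the characters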
As the other answers stated, you can't fit every combination in memory, and you shouldn't use brute force because it will be slow. However, if you do want to use brute force, you could constrain the problem by eliminating duplicate rows and columns and only checking the diagonals:
from itertools import permutations
#All possible letters
letters = ['a','b','c','d','e','f','g','h']
#All possible numbers
numbers = [str(i) for i in range(1,len(letters)+1)]
#All possible permutations given rows != each other and columns != each other
r = [zip(letters, p) for p in permutations(numbers,8)]
#Formatted for your function
points = [''.join([''.join(z) for z in b]) for b in r]
Also as a note, this line of code attempts to first find all of the combinations, then feed your function, which is a waste of memory.
qPositions=[''.join(p) for p in itertools.combinations(chessPositions,8)]
If you decide you do want to use a brute-force method, it is possible. Just change how you consume itertools.combinations: instead of building the full list up front, feed your check function one candidate at a time, as in the sketch below.
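A minimal sketch of that idea, assuming allAlive and chessPositions are defined as in the question (it avoids the MemoryError, but will still take an extremely long time to run through all 4.4 billion candidates):
from itertools import combinations

for p in combinations(chessPositions, 8):
    position = ''.join(p)  # one candidate at a time, no giant list
    if allAlive(position):
        print(position)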
I am trying to find the intersecting subset between two pretty big CSV files of phone numbers (one has 600k rows, and the other has 300 million). I am currently using pandas to open both files, converting the needed columns into 1-D NumPy arrays, and then using NumPy's intersect1d to get the intersection. Is there a better way of doing this, either with Python or any other method? Thanks for any help.
import pandas as pd
import numpy as np
df_dnc = pd.read_csv('dncTest.csv', names = ['phone'])
df_test = pd.read_csv('phoneTest.csv', names = ['phone'])
dnc_phone = df_dnc['phone']
test_phone = df_test['phone']
np.intersect1d(dnc_phone, test_phone)
I will give you a general solution with some Python pseudocode. What you are trying to solve here is a classical problem from the book "Programming Pearls" by Jon Bentley.
It is solved very efficiently with just a simple bit array, hence my comment asking how long the phone numbers are (how many digits they have).
Let's say the phone number is at most 10 digits long; then the max phone number you can have is 9 999 999 999 (spaces are used for better readability). Here we can use 1 bit per number to identify whether the number is in the set or not (bit set or not set, respectively), thus we are going to use 9 999 999 999 bits to identify each number, i.e.:
bits[0] identifies the number 0 000 000 000
bits[193] identifies the number 0 000 000 193
having the number 659 234 4567 would be addressed by bits[6592344567]
Doing so, we'd need to pre-allocate 9 999 999 999 bits, initially set to 0, which is 9 999 999 999 / 8 / 1024 / 1024 ≈ 1192 MB, or around 1.2 GB of memory.
As for holding the intersection of numbers at the end: at most 600k ints will be stored, and 64 bits * 600k is only around 4.6 MB (actually Python ints are not stored that efficiently and might use much more, and if these are strings you'll probably end up with even higher memory requirements), so the bit array dominates the memory use.
Parsing a phone number string from the CSV file (line by line or with a buffered file reader), converting it to a number, and then doing a constant-time memory lookup will IMO be faster than dealing with strings and merging them. Unfortunately, I don't have these phone number files to test with, but I would be interested to hear your findings.
from bitstring import BitArray

max_number = 9999999999
found_phone_numbers = BitArray(length=max_number + 1)

# replace this function with opening the file and retrieving
# the next phone number found in it
def number_from_file_iterator(dummy_data):
    for number in dummy_data:
        yield number

def calculate_intersect():
    # should open file1 and get the generator of numbers from it;
    # we use dummy data here
    for number in number_from_file_iterator([1, 25, 77, 224322323, 8292, 1232422]):
        found_phone_numbers[number] = True

    # open the second file and check if the number is there
    for number in number_from_file_iterator([4, 24, 224322323, 1232422, max_number]):
        if found_phone_numbers[number]:
            yield number

number_intersection = set(calculate_intersect())
print(number_intersection)
I used BitArray from the bitstring pip package, and it needed around 2 seconds to initialize the entire bit string. Afterwards, scanning the files uses constant memory. At the end I used a set to store the items.
Note 1: This algorithm can be modified to use just the list. In that case, in the second loop, as soon as a number's bit matches, that bit must be reset so that duplicates do not match again.
Note 2: Storing into the set/list happens lazily, because we use a generator in the second for loop. Runtime complexity is linear, i.e. O(N).
Read the 600k phone numbers into a set.
Input the larger file row by row, checking each row against the set.
Write matches to an output file immediately.
That way you don't have to load all the data in memory at once.
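A minimal sketch of that approach, reusing the file names from the pandas snippet above and assuming one phone number per line with no header (matches.csv is a hypothetical output file name):
# Load the small file into a set, then stream the big file against it.
with open('dncTest.csv') as small_file:
    dnc = {line.strip() for line in small_file}

with open('phoneTest.csv') as big_file, open('matches.csv', 'w') as out:
    for line in big_file:
        number = line.strip()
        if number in dnc:
            out.write(number + '\n')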
I'm trying to find a number's largest prime factor. Here is my code, but I don't know whether it works correctly or not because I keep getting a MemoryError:
lst = []
for x in range(3,round(600851475143**0.5)+1):
    for y in range(2,x+1):
        if x%y==0:
            for z in range(2,x+1):
                if x%z!=0:
                    lst.append(x)
print(max(lst))
Here is the traceback:
>>>
Traceback (most recent call last):
File "C:--------", line 19, in <module>
lst.append(x)
MemoryError
>>>
After 20-30 seconds of processing, I get this error. How can I avoid it?
Edit: Actually, I guessed it might be a MemoryError, so as you can see I put the square root of the number, 600851475143**0.5, into the first range function, and used round to avoid a float. But I still get a MemoryError.
if x%y==0:
    for z in range(2,x+1):
        if x%z!=0:
            lst.append(x)
I'm guessing what you're trying to do here is append x to lst if x is prime. But what you're actually doing is appending x to lst for every number less than x that isn't a factor of x. For example, 6 will be appended twice, because both 4 and 5 are not factors of 6.
If you want to append x only when no z is a factor of x, then try:
if x%y==0:
    if all(x%z != 0 for z in range(2, x)):
        lst.append(x)
From the python documentation:
exception MemoryError
Raised when an operation runs out of memory but the situation may still be rescued (by deleting some objects). The associated value is a string indicating what kind of (internal) operation ran out of memory. Note that because of the underlying memory management architecture (C’s malloc() function), the interpreter may not always be able to completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a run-away program was the cause.
You're putting too many elements into lst.
There are more efficient prime factorization algorithms than the one you wrote; they can be looked up on Wikipedia.
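For instance, here is a minimal sketch of plain trial division, which needs no list at all and finishes almost instantly for this input:
def largest_prime_factor(n):
    # Divide out each small factor as it is found; what remains at the
    # end is the largest prime factor.
    factor = 2
    while factor * factor <= n:
        if n % factor == 0:
            n //= factor
        else:
            factor += 1
    return n

print(largest_prime_factor(600851475143))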
Attempting to solve this problem:
For a positive number n, define S(n) as the sum of the integers x, for which 1 < x < n and x^3 ≡ 1 mod n.
When n=91, there are 8 possible values for x, namely : 9, 16, 22, 29, 53, 74, 79, 81.
Thus, S(91)=9+16+22+29+53+74+79+81=363.
Find S(13082761331670030).
Of course, my code works for S(91), but when attempting to find S(13082761331670030) I get two different errors.
Here is my code:
def modcube(n):
    results = []
    for k in range(1,n):
        if k**3%n==1:
            results.append(k)
    return results
This produces "OverflowError: range() result has too many items". When I try using xrange instead of range, I get an error stating "Python int too large to convert to C long". I have also tried several other things, without success.
Can anyone point me in the right direction, without telling me exactly how to solve it?
No spoilers please. I've been at it for two days; my next option is to try implementing this in Java, since I'm new to Python.
I think you need to understand two concepts here:
1. Integer representation in C and in Python
The implementation of Python you use is called CPython, because it is written in the C language. In C, long integers are (usually) 32 bits long, which means they can hold integers between -2147483648 and 2147483647. In Python, when an integer exceeds this range, it is converted to an arbitrary-precision integer, whose size is limited only by the memory of your computer. However, operations on those arbitrary-precision integers (called long integers in Python 2) are orders of magnitude slower than operations on 32-bit integers.
2. The difference between range and xrange:
range produces a list. If you have range(10), it stores the list [0, 1, ..., 9] entirely in memory. This is why storing a list of 13082761331670030 items in memory is too much: assuming each number takes 64 bits, it would need about 93 PB of RAM to store the entire list!
xrange produces an iterator: it returns each number one by one. This allows you to perform operations on each number of the sequence without needing to store the entire list in memory. But again, performing calculations on 13082761331670030 different numbers could take more time than you think... The other thing about xrange is that it doesn't work with Python long integers; it is limited (for speed reasons) to integers that fit in a C long. This is why your program doesn't work using xrange.
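A small Python 2 illustration of the difference (the exact byte counts are CPython- and platform-specific):
import sys

eager = range(10000000)   # builds the full 10-million-element list up front
lazy = xrange(10000000)   # a tiny object that yields numbers on demand

print(sys.getsizeof(eager))  # roughly 80 MB for the list structure alone
print(sys.getsizeof(lazy))   # a few dozen bytes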
The bottom line: Project Euler problems are (more or less) ordered by degree of difficulty. You should begin with the lower-numbered problems first.
You wanted hints, not a solution.
Hints:
Consider that the prime factorization of 13082761331670030 is the product of the following primes: 2 x 3 x 5 x 7 x 11 x 13 x 17 x 19 x 23 x 29 x 31 x 37 x 41 x 43
Chinese remainder theorem
Just because x^3 ≡ 1 mod n does not mean that there are not other values other than 3 that satisfy this condition. Specifically, prime1 ** (prime2 - 2) % prime2
My Python solution is 86 milliseconds...