Creating an array of objects in Python for pygame

I create the objects:
class Disk:
    def __init__(self, number, colour, position, size):
        self.size = size
        self.colour = colour
        self.number = number
        self.position = position

    def Render(self, screen):
        pygame.draw.rect(screen, self.colour, (self.position, self.size))
I am trying to create an array of this object using user input (for right now I am just hard-coding my own number). Colours is a separate array that I've created (it works).
def drawDisk(screen, colours):
    num = 5
    for i in range(num):
        disk[i] = Disk(i, colours[i*num], (0+(i*15), 500-(i*50)), (400-(i*30), 50))
        disk[i].Render(screen)
My program works except when I try to create an array of disks and use those disks instead of hard-coding each individual disk.

You haven't defined disk. You are trying to simultaneously create the list and the items in it and iterate over it, but haven't actually told Python what disk is supposed to be. Try:
def drawDisk(screen, colours):
    disk = [Disk(i, colours[i], (0+(i*15), 500-(i*50)), (400-(i*30), 50))
            for i in range(len(colours))]  # create and fill disk
    # The list comprehension is equivalent to:
    # disk = []
    # for i in range(len(colours)):
    #     disk.append(Disk(i, colours[i], ...))
    for d in disk:
        d.Render(screen)  # use the items in disk
    return disk  # for use elsewhere
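For context, a minimal sketch of how drawDisk might be driven from a pygame main loop; the 800x600 window and the five colour values are illustrative assumptions, not part of the question:

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))  # illustrative window size
colours = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 0), (0, 255, 255)]

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    disks = drawDisk(screen, colours)  # draws the stack and returns the Disk list
    pygame.display.flip()
pygame.quit()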

Related

How to scale multiple images with a for loop?

I'm trying to use a for loop to iterate over a list of my sprite's image attributes. I want to give each one the same scale.
def __init__(self):
    pygame.sprite.Sprite.__init__(self)
    self.image = pygame.image.load("Migue/m_normal.png")
    self.quieto = pygame.image.load("Migue/m_normal.png")
    self.andando = pygame.image.load("Migue/m_andando_normal.png")
    self.image = pygame.transform.scale(self.image, sizenorm)
    states = [self.quieto, self.andando]
    for i in states:
        i = pygame.transform.scale(i, sizenorm)
This won't work, but I can achieve the result using this:
self.quieto = pygame.transform.scale(self.quieto, sizenorm)
self.andando = pygame.transform.scale(self.andando, sizenorm)
The problem is that I am going to add a lot more states, and using that for loop would be shorter. However, it doesn't work like the lower example does, and I don't know what's wrong with the loop.
The loop fails because i is just a local name: reassigning it rebinds that name to the new scaled Surface without touching the attributes. Instead, you can create a list of the scaled objects and assign the elements of the list back to the original attributes:
states = [self.quieto, self.andando]
states = [pygame.transform.scale(i, sizenorm) for i in states]
(self.quieto, self.andando) = states
This can even be written in a single line:
(self.quieto, self.andando) = [pygame.transform.scale(i, sizenorm) for i in [self.quieto, self.andando]]
Alternatively, you can simply put the images in a list:
filenames = ["Migue/m_normal.png", "Migue/m_andando_normal.png"]
self.states = [pygame.transform.scale(pygame.image.load(n), sizenorm) for n in filenames]
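If many more states are coming, a further variant (a sketch, assuming all the attributes are pygame Surfaces) is to loop over attribute names and rebind each attribute with getattr/setattr; unlike the loop variable in the question, setattr changes the attribute itself:

state_names = ["quieto", "andando"]  # would grow as new states are added
for name in state_names:
    scaled = pygame.transform.scale(getattr(self, name), sizenorm)
    setattr(self, name, scaled)  # rebinds the attribute, not a local name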

Python Serialization with Append Support

I am working with a lot of objects that have some attributes as well as NumPy arrays (images, masks, etc.). I want to dump them to disk during program execution to save memory, and I want to append more data when it is available (during the same execution) without loading the dumped object back into memory.
The problem is that appending data to a serialized/pickled file cannot be done without first loading it into memory. How can I save/update these objects during program execution without loading the whole object? Any idea is welcome.
The below is pseudocode.
class StoredObject():
    def __init__(self, centroid, _image, _color, _bbox, _type, _mask):
        self.centroids = [centroid]
        self.bboxes = [_bbox]
        self.track_color = random_color()
        self.color = _color
        self.images = [_image]
        self.type = _type
        self.last_appear = time.time()
        self.masks = [_mask]
store = []

def track_objects(obj, obj_image, obj_mask):
    if obj already belongs to store:
        find where it is stored earlier
        and add obj_image and obj_mask to its obj_image list
        and obj_mask list respectively
    else:
        add obj(obj_image, obj_mask) to store
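One detail worth noting: pickle can write several objects back-to-back into the same file, so records can be appended without reading the old ones back. A minimal sketch of that pattern (append_record, load_records and store_path are illustrative names, not from the question):

import pickle

store_path = "objects.pkl"  # illustrative path

def append_record(record):
    # Append mode leaves the existing pickled records untouched on disk.
    with open(store_path, "ab") as f:
        pickle.dump(record, f)

def load_records():
    # Read the pickled records back one at a time until the stream ends.
    records = []
    with open(store_path, "rb") as f:
        while True:
            try:
                records.append(pickle.load(f))
            except EOFError:
                break
    return records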

Ordering data from returned pool.apply_async

I am currently writing a steganography program. I currently have the majority of the things I want working. However, I want to rebuild my message using multiple processes, which obviously means the bits returned from the processes need to be ordered. So currently I have:
OK, I'm home now, so I will put some actual code up.
from multiprocessing import Pool

def message_unhide(data):
    inp = cv.LoadImage(data[0])  # data[0] is the path to the image
    steg = LSBSteg(inp)
    bin = steg.unhideBin()
    return bin

# code in the main program underneath
count = 0
f = open(files[2], "wb")  # files[2] = name of the file to rebuild
fat = open("fat.txt", 'w+')
inp = cv.LoadImage(files[0][count])  # files[0] = directory path of the images
steg = LSBSteg(inp)
bin = steg.unhideBin()
fat.write(bin)
fat.close()
fat = open("fat.txt", 'rb')
num_files = fat.read()  # number of images the message is hidden across
fat.close()
count += 1
pool = Pool(5)
binary = []
''' Just something I was testing
for x in range(int(num_files)):
    binary.append(0)
print(binary)
'''
while count <= int(num_files):
    data = [files[0][count], count]
    #f.write(pool.apply(message_unhide, args=(data, )))
    #binary[count - 1] = [pool.apply_async(message_unhide, (data, ))]
    # again, just a few other ways I was trying to overcome this
    binary = [pool.apply_async(message_unhide, (data, ))]
    count += 1
pool.close()
pool.join()
bits = [b.get() for b in binary]
print(binary)
#for b in bits:
#    f.write(b)
f.close()
This method just overwrites binary:
binary = [pool.apply_async(message_unhide, (data, ))]
This method fills the entire binary, but I lose the .get():
binary[count - 1] = [pool.apply_async(message_unhide, (data, ))]
Sorry for the sloppy coding; I am certainly no expert.
Your main issue has to do with overwriting binary in the loop. You only have one item in the list because you're throwing away the previous list and recreating it each time. Instead, you should use append to modify the existing list:
binary.append(pool.apply_async(message_unhide, (data, )))
But you might have a much nicer time if you use pool.map instead of rolling your own version. It expects an iterable yielding a single argument to pass to the function on each iteration, and it returns a list of the return values. The map call blocks until all the values are ready, so you don't need any other synchronization logic.
Here's an implementation using a generator expression to build the data argument items on the fly. You could simplify things and just pass files[0] to map if you rewrote message_unhide to accept the filename as its argument directly, without indexing a list (you never use the index, it seems):
# no loop this time
binary = pool.map(message_unhide, ([file, i] for i, file in enumerate(files[0])))
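For instance, a sketch of that simplification, reusing LSBSteg and cv from the question and assuming files[0] is the list of image paths:

def message_unhide(path):
    steg = LSBSteg(cv.LoadImage(path))  # the path is now passed directly
    return steg.unhideBin()

# map preserves input order, so the recovered bits come back already ordered
binary = pool.map(message_unhide, files[0])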

How to loop through one element of a zip() function twice - Python

So here's my dilemma... I'm writing a script that reads all the .png files from a folder and then converts them to a number of different dimensions, which I have specified in a list. Everything works as it should, except it quits after handling one image.
Here is my code:
sizeFormats = ["1024x1024", "114x114", "40x40", "58x58", "60x60", "640x1136", "640x960"]

def resizeImages():
    widthList = []
    heightList = []
    resizedHeight = 0
    resizedWidth = 0
    # targetPath is the path to the folder that contains the images
    folderToResizeContents = os.listdir(targetPath)
    # Split the dimensions into 2 separate lists for width and height
    # (e.g. 640x960 adds 640 to widthList and 960 to heightList)
    for index in sizeFormats:
        widthList.append(index.split("x")[0])
        heightList.append(index.split("x")[1])
    # For every image in the folder, apply the dimensions from the populated lists and save
    for image, w, h in zip(folderToResizeContents, widthList, heightList):
        resizedWidth = int(w)
        resizedHeight = int(h)
        sourceFilePath = os.path.join(targetPath, image)
        imageFileToConvert = Image.open(sourceFilePath)
        outputFile = imageFileToConvert.resize((resizedWidth, resizedHeight), Image.ANTIALIAS)
        outputFile.save(sourceFilePath)
The following will be returned if the target folder contains 2 images called image1.png and image2.png (for the sake of visualization I'll add the dimensions that get applied to each image after an underscore):
image1_1024x1024.png,
..............,
image1_640x960.png (all 7 different dimensions for image1 are returned fine)
It stops there, when I need it to apply the same transformations to image2. I know this is because widthList and heightList are only 7 elements long, so the loop exits before image2 gets its turn. Is there any way I can loop through widthList and heightList for every image in targetPath?
Why not keep it simple:
for image in folderToResizeContents:
    for fmt in sizeFormats:
        (w, h) = fmt.split('x')
N.B. You are overwriting the files produced, as you are not changing the name of the output path.
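For example, a minimal sketch of that fix, using the image1_1024x1024.png naming convention from the question (the int conversion is needed because resize expects integers):

import os
from PIL import Image

for image in folderToResizeContents:
    for fmt in sizeFormats:
        w, h = (int(n) for n in fmt.split('x'))
        sourceFilePath = os.path.join(targetPath, image)
        resized = Image.open(sourceFilePath).resize((w, h), Image.ANTIALIAS)
        root, ext = os.path.splitext(image)
        # e.g. image1.png -> image1_1024x1024.png
        resized.save(os.path.join(targetPath, "%s_%s%s" % (root, fmt, ext)))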
Nest your for loops and you can apply all 7 dimensions to each image:
for image in folderToResizeContents:
    for w, h in zip(widthList, heightList):
The first for loop ensures it happens for each image, and the second for loop ensures that each image is resized to each size.
You need to re-iterate through sizeFormats for every file. zip doesn't do this unless you get even trickier with cyclic iterators for the widths and heights (a sketch of that trick appears after the code below).
Sometimes tools such as zip make for longer, more complicated code when a couple of nested for loops work fine. I think it's more straightforward than splitting the formats into multiple lists and then zipping them back together again:
sizeFormats = ["1024x1024", "114x114", "40x40", "58x58", "60x60", "640x1136", "640x960"]
sizeTuples = [(int(w), int(h)) for w, h in map(lambda wh: wh.split('x'), sizeFormats)]

def resizeImages():
    # For every image in the folder, apply each of the dimensions and save
    for image in os.listdir(targetPath):
        for resizedWidth, resizedHeight in sizeTuples:
            sourceFilePath = os.path.join(targetPath, image)
            imageFileToConvert = Image.open(sourceFilePath)
            outputFile = imageFileToConvert.resize((resizedWidth, resizedHeight), Image.ANTIALIAS)
            outputFile.save(sourceFilePath)
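For completeness, a sketch of the cyclic-iterator trick mentioned above (it reuses widthList and heightList from the question and overwrites the source files, like the original code); the nested loops remain the clearer choice:

import os
from itertools import chain, cycle, repeat

# Each image name is repeated once per size; the size lists cycle forever,
# and zip stops when the image names run out.
names = chain.from_iterable(repeat(img, len(sizeFormats)) for img in os.listdir(targetPath))
for image, w, h in zip(names, cycle(widthList), cycle(heightList)):
    path = os.path.join(targetPath, image)
    Image.open(path).resize((int(w), int(h)), Image.ANTIALIAS).save(path)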

Size-Incremental Numpy Array in Python

I just came across the need for an incremental NumPy array in Python, and since I haven't found anything I implemented one. I'm just wondering if my way is the best way, or whether you can come up with other ideas.
So, the problem is that I have a 2D array (the program handles nD arrays) whose size is not known in advance, and a variable amount of data has to be concatenated to the array in one direction (let's say I have to call np.vstack a lot of times). Every time I concatenate data, I need to take the array, sort it along axis 0 and do other stuff, so I cannot construct a long list of arrays and then np.vstack the list all at once.
Since memory allocation is expensive, I turned to incremental arrays, where I increase the size of the array by a quantity bigger than the size I need (I use 50% increments), so that I minimize the number of allocations.
I coded this up and you can see it in the following code:
import numpy as np

class ExpandingArray:

    __DEFAULT_ALLOC_INIT_DIM = 10  # default initial dimension for every axis if nothing is given by the user
    __DEFAULT_MAX_INCREMENT = 10   # default value to limit the increment of memory allocation

    __MAX_INCREMENT = []  # max increment
    __ALLOC_DIMS = []     # dimensions of the allocated np.array
    __DIMS = []           # dimensions of the view with data on the allocated np.array (__DIMS <= __ALLOC_DIMS)
    __ARRAY = []          # allocated array

    def __init__(self, initData, allocInitDim=None, dtype=np.float64, maxIncrement=None):
        self.__DIMS = np.array(initData.shape)
        self.__MAX_INCREMENT = maxIncrement
        if self.__MAX_INCREMENT is None:
            self.__MAX_INCREMENT = self.__DEFAULT_MAX_INCREMENT
        # Compute the allocation dimensions based on the user's input
        if allocInitDim is None:
            allocInitDim = self.__DIMS.copy()
        while np.any(allocInitDim < self.__DIMS) or np.any(allocInitDim == 0):
            for i in range(len(self.__DIMS)):
                if allocInitDim[i] == 0:
                    allocInitDim[i] = self.__DEFAULT_ALLOC_INIT_DIM
                if allocInitDim[i] < self.__DIMS[i]:
                    allocInitDim[i] += min(allocInitDim[i]/2, self.__MAX_INCREMENT)
        # Allocate memory
        self.__ALLOC_DIMS = allocInitDim
        self.__ARRAY = np.zeros(self.__ALLOC_DIMS, dtype=dtype)
        # Set initData
        sliceIdxs = [slice(self.__DIMS[i]) for i in range(len(self.__DIMS))]
        self.__ARRAY[sliceIdxs] = initData

    def shape(self):
        return tuple(self.__DIMS)

    def getAllocArray(self):
        return self.__ARRAY

    def getDataArray(self):
        """Get the view of the array with data"""
        sliceIdxs = [slice(self.__DIMS[i]) for i in range(len(self.__DIMS))]
        return self.__ARRAY[sliceIdxs]

    def concatenate(self, X, axis=0):
        if axis > len(self.__DIMS):
            print "Error: axis number exceeds the number of dimensions"
            return
        # Check dimensions for the remaining axes
        for i in range(len(self.__DIMS)):
            if i != axis:
                if X.shape[i] != self.shape()[i]:
                    print "Error: dimensions of the input array are not consistent on axis %d" % i
                    return
        # Check whether the allocated memory is enough
        needAlloc = False
        while self.__ALLOC_DIMS[axis] < self.__DIMS[axis] + X.shape[axis]:
            needAlloc = True
            # Increase __ALLOC_DIMS
            self.__ALLOC_DIMS[axis] += min(self.__ALLOC_DIMS[axis]/2, self.__MAX_INCREMENT)
        # Reallocate memory and copy the old data
        if needAlloc:
            # Allocate
            newArray = np.zeros(self.__ALLOC_DIMS)
            # Copy
            sliceIdxs = [slice(self.__DIMS[i]) for i in range(len(self.__DIMS))]
            newArray[sliceIdxs] = self.__ARRAY[sliceIdxs]
            self.__ARRAY = newArray
        # Concatenate the new data
        sliceIdxs = []
        for i in range(len(self.__DIMS)):
            if i != axis:
                sliceIdxs.append(slice(self.__DIMS[i]))
            else:
                sliceIdxs.append(slice(self.__DIMS[i], self.__DIMS[i] + X.shape[i]))
        self.__ARRAY[sliceIdxs] = X
        self.__DIMS[axis] += X.shape[axis]
The code shows considerably better performance than vstack/hstack over several randomly sized concatenations.
What I'm wondering is: is this the best way? Is there anything in NumPy that already does this?
Further, it would be nice to be able to overload the slice-assignment operator of np.array, so that as soon as the user assigns anything outside the actual dimensions, an ExpandingArray.concatenate() is performed. How would I do such overloading?
Testing code: I also post here some code I used to compare vstack and my method. I add random chunks of data of maximum length 100.
import time

N = 10000

def performEA(N):
    EA = ExpandingArray(np.zeros((0, 2)), maxIncrement=1000)
    for i in range(N):
        nNew = np.random.random_integers(low=1, high=100, size=1)
        X = np.random.rand(nNew, 2)
        EA.concatenate(X, axis=0)
        # Perform operations on EA.getDataArray()
    return EA

def performVStack(N):
    A = np.zeros((0, 2))
    for i in range(N):
        nNew = np.random.random_integers(low=1, high=100, size=1)
        X = np.random.rand(nNew, 2)
        A = np.vstack((A, X))
        # Perform operations on A
    return A

start_EA = time.clock()
EA = performEA(N)
stop_EA = time.clock()
start_VS = time.clock()
VS = performVStack(N)
stop_VS = time.clock()
print "Elapsed Time EA: %.2f" % (stop_EA - start_EA)
print "Elapsed Time VS: %.2f" % (stop_VS - start_VS)
I think the most common design pattern for these things is to just use a list for the small arrays. Sure, you could do things like dynamic resizing (if you want to do crazy things, you can try to use the resize array method too). I think a typical method is to always double the size when you really don't know how large things will be. Of course, if you know how large the array will grow, just allocating the full thing up front is simplest:
def performVStack_fromlist(N):
    l = []
    for i in range(N):
        nNew = np.random.random_integers(low=1, high=100, size=1)
        X = np.random.rand(nNew, 2)
        l.append(X)
    return np.vstack(l)
I am sure there are some use cases where an expanding array could be useful (for example when the appended arrays are all very small), but this loop seems better handled with the above pattern. The optimization is mostly about how often you need to copy everything around, and with a list like this everything (other than the list itself) is copied exactly once, so it is normally much faster.
When I faced a similar problem, I used ndarray.resize() (http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.resize.html#numpy.ndarray.resize). Most of the time, it will avoid reallocation+copying altogether. I can't guarantee it would prove to be faster (it probably would), but it's so much simpler.
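A minimal sketch of that approach (append_rows is an illustrative helper, not from the answer; refcheck=False is required whenever other references to the array may exist, so use it with care):

import numpy as np

def append_rows(a, X):
    old = a.shape[0]
    # Grows the buffer in place when the memory layout allows it.
    a.resize((old + X.shape[0], a.shape[1]), refcheck=False)
    a[old:] = X
    return a

A = np.zeros((0, 2))
A = append_rows(A, np.random.rand(10, 2))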
As for your second question, I think overriding slice assignment for extending purposes is not a good idea. That operator is meant for assigning to existing items/slices. If you want to change that, it's not immediately clear how you'd want it to behave in some cases, e.g.:
a = MyExtendableArray(np.arange(100))
a[200] = 6 # resize to 200? pad [100:200] with what?
a[90:110] = 7 # assign to existing items AND automagically-allocated items?
a[::-1][200] = 6 # ...
My suggestion is that slice-assignment and data appending should remain separate.
