Creating class objects in a function and retaining them - python

I'm building a game in Python with Pygame. I have created a class that acts as a button. When clicked it cycles through various states (it'll do more later.)
What I need is a grid of them whose size I can adjust dynamically. I have a function that iterates, assigns coordinates to the class objects, and builds the grid (the images spawn but end up being static), but I'm not sure it's creating the objects correctly. Even if it is, they get dropped from memory when the function ends since they're local.
I've looked around a lot, and people say to use dictionaries to store dynamically created objects, but I need the class objects to still be able to use their methods. Is there a way to create a group of class objects with dynamic names in a loop and retain them? I could manually make a huge grid of them, but I'd really rather not have to explicitly create and assign coordinates to 100 or more objects.

The dictionary (or list, or whatever) is the answer. I don't understand why you would think that storing instances in a container would mean you can't access their methods. You can, in exactly the same way as before.
For example, if your class has a method "my_method" you can call it on an object of that class retrieved from a dictionary:
my_dict['my_instance'].my_method()
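A minimal sketch of that idea for the button grid (the Button class, its state-cycling, and the cell size below are placeholders, not the asker's actual code):

class Button:
    # toy stand-in for the question's button class
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.state = 0
    def handle_click(self):
        # cycle through states, as described in the question
        self.state = (self.state + 1) % 3

def make_grid(rows, cols, cell_size=32):
    # build the grid inside a function and return the container,
    # so nothing is lost when the function's local scope goes away
    return {(r, c): Button(c * cell_size, r * cell_size)
            for r in range(rows) for c in range(cols)}

buttons = make_grid(10, 10)
buttons[(3, 7)].handle_click()   # methods work exactly as before
print(buttons[(3, 7)].state)     # -> 1

Because the dictionary is returned (or kept as an attribute of some game object), the instances survive after the function ends, and you can loop over buttons.values() every frame to draw them or dispatch clicks.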

Related

Any way to prevent modifications to content of a ndarray subclass?

I am creating various classes for computational geometry that all subclass numpy.ndarray. The DataCloud class, which is typical of these classes, has Python properties (for example, convex_hull, delaunay_triangulation) that would be time consuming and wasteful to calculate more than once. I want to do the calculations once and only once, and just in time, because for a given instance I might not need a given property at all. It is easy enough to set this up by setting self.__convex_hull = None in the constructor and, if/when the convex_hull property is called, doing the required calculation, setting self.__convex_hull, and returning the calculated value.
The problem is that once any of those complicated properties is invoked, any changes to the contents made, external to my subclass, by the various numpy (as opposed to DataCloud subclass) methods will invalidate all the calculated properties, and I won't know about it. For example, suppose external code simply does this to the instance: datacloud[3,8] = 5. So is there any way to either (1) make the ndarray base class read-only once any of those properties is calculated or (2) have ndarray set some indicator that there has been a change to its contents (which for my purposes makes it dirty), so that then invoking any of the complex properties will require recalculation?
Looks like the answer is to make the instance read-only once a cached property has been computed, by calling setflags on the array itself:
datacloud.setflags(write=False)
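A minimal sketch of how that fits the caching pattern described in the question (the subclass here is heavily simplified, and _compute_hull is a hypothetical stand-in for the real computation):

import numpy as np

class DataCloud(np.ndarray):
    @property
    def convex_hull(self):
        # compute once, then freeze the buffer so later external writes
        # raise an error instead of silently invalidating the cached value
        if getattr(self, '_convex_hull', None) is None:
            self._convex_hull = self._compute_hull()
            self.setflags(write=False)
        return self._convex_hull

    def _compute_hull(self):
        # placeholder for the real, expensive computation
        return self.min(axis=0), self.max(axis=0)

cloud = np.arange(12, dtype=float).reshape(4, 3).view(DataCloud)
hull = cloud.convex_hull
# cloud[3, 2] = 5.0  # would now raise "assignment destination is read-only"

Option (2) from the question, a dirty flag, is harder because you would have to intercept __setitem__ and every other mutating path yourself, so the read-only flag is the simpler route.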

Pros and cons of an object with big dataframe vs a list of lots of class objects in Python

I'm writing a Python program to perform certain processing on data for a couple of million (N) users. The outcome of the process is several 1D arrays for each user (the process is applied to each user separately). I need set and get functions for each user's output data.
I have two options for implementing this:
1. Create a class with attributes of size N by column_size, so one object that contains big arrays
2. Create a class for each user and store instances of this class in a list, so a list of N objects
My question is: what are the pros and cons of each approach in terms of speed and memory consumption?
The question is rather broad, so I will not go beyond generalities.
If you intend to process user by user, then it makes sense to have one object per user.
On the other hand, if you mainly process all users at the same time, attribute by attribute, then it makes sense to have one class per attribute, each object containing that attribute for all users. That way, if memory becomes scarce, you can save everything to disk and keep only one user (resp. one attribute) in memory.
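A rough sketch of the two layouts (class and attribute names are illustrative only):

import numpy as np

N_USERS, N_COLS = 1_000_000, 4

# Option 1: one object holding one big array per attribute --
# compact memory, fast vectorised whole-population operations.
class AllUsers:
    def __init__(self, n_users, n_cols):
        self.scores = np.zeros((n_users, n_cols))
    def get(self, user_id):
        return self.scores[user_id]
    def set(self, user_id, values):
        self.scores[user_id] = values

# Option 2: one small object per user -- clearer per-user logic, but
# millions of Python objects add memory overhead and slow full passes.
class User:
    __slots__ = ("scores",)   # trims per-instance overhead somewhat
    def __init__(self, n_cols):
        self.scores = np.zeros(n_cols)

all_users = AllUsers(N_USERS, N_COLS)                 # option 1
# users = [User(N_COLS) for _ in range(N_USERS)]      # option 2, much heavier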

Providing an integer as external representation of an object

I have a family of classes that are composited upon one another. The innermost leaf of this composition is a Coordinate class that defines the position of its parent Variant class.
I would like to iterate over a list of Coordinates and check whether any of them is equal to a separate integer coordinate. I have looked at Python's dunder (__) methods and cannot find an exact match for what I wish to do, although __repr__ comes close (but only provides a string form).
Is there another way to implement this?
Sounds like __eq__ might be better suited to what you want to do. Either that, or just create a util function to do the comparison.
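A minimal sketch of the __eq__ route (the Coordinate class below is a guess at the asker's structure, reduced to a single integer position):

class Coordinate:
    def __init__(self, position):
        self.position = position

    def __eq__(self, other):
        # allow comparison against plain integers as well as other Coordinates
        if isinstance(other, Coordinate):
            return self.position == other.position
        if isinstance(other, int):
            return self.position == other
        return NotImplemented

coords = [Coordinate(3), Coordinate(7), Coordinate(11)]
print(any(c == 7 for c in coords))   # -> True

Note that defining __eq__ makes instances unhashable unless you also define __hash__, which matters if Coordinates ever end up in sets or as dictionary keys.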

Python: Separate class methods from the class in an own package

I wrote a class with many different parameters; depending on the parameter, the class takes different actions.
I can easily split my methods into different cases, each set of methods belonging to certain parameters.
This resulted in a huge .py file implementing all methods in one class. For better readability, is it possible to write some of the methods in a separate file and load it (similar to a package) into the class so that they are treated as methods of that class?
To give more details, my class is a decision tree. One parameter, for example, is the pruning method used to shrink the tree. As I use different pruning methods, this takes up a lot of lines in my class; I need a set of methods for each pruning parameter. It would be nice to simply load the methods for pruning from another file into the class and thereby shrink the size of my decision tree .py file.
For better readability, I have a few suggestions:
1. Collapse all function definitions, which can easily be done with one click in most popular text editors.
2. Place related methods next to each other.
3. Give proper names to categorise and differentiate methods, for example:
def search_breadth_first(): pass
def search_depth_first(): pass
Regarding splitting a huge class along object-oriented lines, my rule of thumb is to consider reusability. If functions can be reused by other classes, they should be put in separate files and made independent (static) of other classes.
If the methods are tied to this class and nowhere else, it is better just to keep them inside the class itself. Think of it this way: to review the code, you need to refer to the class properties anyway. Logically, it doesn't quite make sense to split files just for the sake of splitting.
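That said, if some method families really are separable, one common way to do what the question asks is a mixin class that lives in its own module and is pulled in by inheritance. A minimal sketch (class and method names are illustrative; the mixin is shown inline to keep the example self-contained, but in practice it would sit in its own file, e.g. a hypothetical pruning.py, and be imported):

class PruningMixin:
    # one family of methods, kept out of the main class body
    def prune_reduced_error(self):
        self.pruned = True

class DecisionTree(PruningMixin):
    def __init__(self, pruning="reduced_error"):
        self.pruning = pruning
        self.pruned = False

tree = DecisionTree()
tree.prune_reduced_error()   # the mixin method behaves like any other method
print(tree.pruned)           # -> True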

python program structure and use of global variables

I have a completed program that does the following:
1) Reads formatted data (a sequence of numbers and associated labels) from the serial port in real time.
2) Does minor manipulations to the data.
3) Plots the data in real time in a GUI I wrote using PyQt.
4) Updates data stats in the GUI.
5) Allows post-analysis of the data after collection is stopped.
There are two dialogs (separate classes) that are called from within the main window in order to select certain preferences in plotting and statistics.
My question is the following: right now my data is read in and stored in several global variables that are appended to as data comes in, roughly 20 times per second: a 2D list for the numerical values and 1D lists for the various associated text values. Would it be better to create a class in which to store the data and its various attributes, and then use instances of this data class to drive everything else, like the plotting of the data and the statistics associated with it?
I have a hunch that the answer is yes, but I need a bit of guidance on how to make this happen if it is the best way forward. For instance, would every single datum be a new instance of the data class? Would I then pass them one by one or as a list of instances to the other classes and to methods? How should the passing most elegantly be done?
If I'm not being specific enough, please let me know what other information would help me get a good answer.
A reasonably good rule of thumb is that if what you are doing needs more than 20 lines of code, it is worth considering an object-oriented design rather than global variables, and if you get to 100 lines you should already be using classes. The purists will probably say never use globals, but IMHO for a simple linear script that is probably overkill.
Be warned that you will probably get a lot of answers expressing horror that you are not already doing so.
There are some really good (and some free) books that introduce object-oriented programming in Python; a quick Google search should provide the help you need.
Comments added to the answer, preserved here:
So at 741 lines, I'll take that as a yes to OOP:) So specifically on the data class. Is it correct to create a new instance of the data class 20x per second as data strings come in, or is it more appropriate to append to some data list of an existing instance of the class? Or is there no clear preference either way? – TimoB
I would append/extend your existing instance. – seth
I think I see the light now. I can instantiate the data class when the "start data" button is pressed, and append to that instance in the subsequent thread that does the serial reading. THANKS! – TimoB
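A minimal sketch of that arrangement (class, attribute, and method names below are illustrative, not from the asker's program): a single data object created when collection starts, extended by the serial-reading thread, and read by the plotting and statistics code.

import threading

class DataStore:
    def __init__(self):
        self.values = []     # 2D: one row of numbers per incoming sample
        self.labels = []     # 1D: the associated text values
        self._lock = threading.Lock()

    def append(self, numbers, label):
        # called ~20 times per second from the serial-reading thread
        with self._lock:
            self.values.append(numbers)
            self.labels.append(label)

    def latest(self):
        with self._lock:
            return self.values[-1], self.labels[-1]

store = DataStore()                    # created when "start data" is pressed
store.append([1.2, 3.4, 5.6], "ok")    # each sample extends the same instance
print(store.latest())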
