'object' is not subscriptable in 2d python list - python

Message='SodokuGame' object is not subscriptable
Source=C:\Users\PC\Desktop\python\Soduku\Soduku\Soduku.py
StackTrace:
  File "C:\Users\PC\Desktop\python\Soduku\Soduku\Soduku.py", line 31, in fillArray
    if currentArray[x][y].value == 0:
  File "C:\Users\PC\Desktop\python\Soduku\Soduku\Soduku.py", line 110, in __init__
    game.fillArray(game.matrix, 0, 0, game.newPool)
  File "C:\Users\PC\Desktop\python\Soduku\Soduku\Soduku.py", line 113, in
    Run()
I am working on my own project and ran into an issue. To begin, here is my Cell class. My goal is to test the data in a Cell and run code depending on the result, but at runtime I run into the error above.
class Cell:
    value = 0
    def __getitem__(self):
        return self.value
    def __setitem__(newVal, self):
        self.value = newVal
This is how I defined my list and tried to fill it:
class SodokuGame:
    matrix = []
    for i in range(9):
        arr = []
        for j in range(9):
            arr.append(Cell())
        matrix.append(arr)

    def fillArray(currentArray, x, y, pool, self):
        if currentArray[x][y].value == 0:
            print("fillArray loop")  # unimportant context code from here on
            poolNum = randint(0, pool.length)
            if testNumber(pool[poolNum]):
                currentArray[x][y] = pool.pop(pool[poolNum])
                print(poolNum)
My first assumption was that the array was being filled incorrectly, causing the if statement to fail, but that is not the issue. I believe the issue is in the line
if currentArray[x][y].value == 0:
Somehow, even though I instantiated all of the cells at every (x, y), it still gives me an error as if I were comparing a SodokuGame object to 0.
How it is called originally:
class Run:
    def __init__(self):
        print("Run")
        game = SodokuGame()
        game.printGrid()
        game.fillArray(game.matrix, 0, 0, game.newPool)
        game.printGrid()
Run()
Note: I don't think it's relevant to the question, but this function's intention is to check whether the current cell is empty (= 0) and, if so, attempt to fill it and recursively run the function again, moving over one cell at a time until the structure is full.
I've tried implementing methods in the Cell class to work around this, including adding a __getitem__ function, a plain getInfo function, and even an isZero boolean function, but all of these result in the same error. This is not for homework.

Welcome, Justin. There are a few problems here, but the first one is that you aren't starting your instance methods' parameter lists with self. Python still passes the instance as the first argument, so your first parameter is receiving the SodokuGame instance, which is why you're getting the error "'SodokuGame' object is not subscriptable". It's not subscripting the matrix you're passing in; it's subscripting the instantiated SodokuGame object itself.
This is what the SodokuGame class's fillArray method should look like
class SodokuGame:
    def fillArray(self, currentArray, x, y, pool):
        if currentArray[x][y].value == 0:
            ...  # do stuff
You'll notice I put self at the front of the parameter list, which you'll always need to do. Python doesn't care what you call it, because you can technically name it whatever you want (you shouldn't, but you can); whatever parameter comes first is the one that receives the instance. So self always has to be the first parameter of an instance method.
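To see why the wrong parameter order produces exactly this error, here is a minimal sketch (Grid and fill are hypothetical names, not from your code) of what happens when self is left off the front:

class Grid:
    def fill(currentArray, x, y):  # no leading self: the Grid instance lands in currentArray
        return currentArray[x][y]  # so this line subscripts the Grid instance itself

g = Grid()
# g.fill([[1, 2], [3, 4]], 0, 0)  # TypeError: 'Grid' object is not subscriptable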
After this you'll run into problems with Cell. You're implementing __getitem__, but Cell doesn't have an array to subscript. If you really want to subscript it, but always return the same value for some reason, you need to implement the method correctly (same goes for __setitem__):
class Cell:
    value = 0
    def __getitem__(self, item):
        return self.value
    def __setitem__(self, item, value):
        self.value = value
if you don't actually want to subscript Cell, i.e. you don't want to do
c = Cell()
c[247321] = 2
# 247321 can literally be anything; 'turkey', 12, 32.1, etc.
# that is what item is in the __getitem__ and __setitem__ methods, and
# you're not using that argument in the implementations
you should probably not utilize __getitem__, but rather do something like:
class Cell:
    def get_value(self):
        return self.value
    def set_value(self, value):
        self.value = value
but your approach of accessing the attribute directly with .value also works.
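For completeness, a quick usage sketch of the accessor version above (note that this Cell no longer initializes value, so set_value has to be called before get_value):

c = Cell()
c.set_value(5)
print(c.get_value())  # 5
print(c.value)        # 5; direct attribute access works too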


TypeError: __init__() missing 1 required positional argument: 'lists'

I created a class, something like below -
class child:
    def __init__(self, lists):
        self.myList = lists
    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
and when I create an object like below -
obj=child()
it gives the error -
TypeError: __init__() missing 1 required positional argument: 'lists'
If I create the object like below, then it works well -
obj = child([44, 22, 55])
or If I create the class like below -
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean
and then I create the object like below -
obj=child()
then it also works well. However, I need to build it in the way I explained at the very beginning. Can you please help me understand what is going on here?
In the first example, the __init__ method expects two parameters:
self is automatically filled in by Python.
lists is a parameter which you must supply. __init__ will try to assign this value to a new attribute called self.myList, and it won't know what value to use if you don't give it one.
In the second example, you have not written an __init__ method. This means that Python creates its own default __init__ function which will not require any parameters. However, the find_mean method now requires you to give it a parameter instead.
When you say you want to create it in the way you explained at the beginning, this is actually impossible: the class requires a value, and you are not giving it one.
Therefore, it is hard for me to tell what you really want to do. However, one option might be that you want to create the class earlier, and then add a list to it later on. In this case, the code would look like this:
import numpy as np

class Child:
    def __init__(self, lists=None):
        self.myList = lists

    def find_mean(self):
        if self.myList is None:
            return np.nan
        mean = np.mean(self.myList)
        return mean
This code allows you to create the object earlier, and add a list to it later. If you try to call find_mean without giving it a list, it will simply return nan:
child = Child()
print(child.find_mean()) # Returns `nan`
child.myList = [1, 2, 3]
print(child.find_mean()) # Returns `2`
The code you have at the top of your question defines a class called child with one attribute, myList, which is assigned from the lists parameter at instance creation in the __init__ method. This means that you must supply a list when creating an instance of child.
class child:
    def __init__(self, lists):
        self.myList = lists
    def find_mean(self):
        mean = np.mean(self.myList)
        return mean

# works because a list is provided
obj = child([44, 22, 55])

# does not work because no list is given
obj = child()  # TypeError
If you create the class like in your second example, __init__ is no longer being explicitly specified, and as such, the object has no attributes that must be assigned at instance creation:
class child:
    def find_mean(self, myList):
        mean = np.mean(myList)
        return mean

# does not work because `child()` does not take any arguments
obj = child([44, 22, 55])  # TypeError

# works because no list is needed
obj = child()
The only way to both have the myList attribute, and not need to specify it at creation would be to assign a default value to it:
class child:
    def __init__(self, lists=None):
        self.myList = lists
    def find_mean(self):
        mean = np.mean(self.myList)
        return mean
# now this will work
obj = child()
# as will this
obj = child([24, 35, 27])

Python @property same id as field returned by it

I made this simple snippet to understand properties better:
class foo:
    def __init__(self, val):
        self.val = val

    @property
    def val(self):
        return self._val

    @val.setter
    def val(self, value):
        if value < 0:
            raise ValueError("Value has to be larger than 0")
        else:
            self._val = value
f1 = foo(2)
print(id(f1._val) == id(f1.val)) #print true
Right now I understand less than before I wrote that code. How is it that both f1._val and f1.val are the same object? Is there some magic going on behind the scenes? Also, why is type(f1.val) int when val is a function?
Another thing: I thought about changing both val functions (one decorated with @property, the second with @val.setter) to use self.val instead of self._val. When trying to instantiate the object I got:
RecursionError: maximum recursion depth exceeded in comparison
Which is quite obvious, as I return a reference to the function (not sure if "pointer" is the proper term in this context) with no stopping condition. But then how did my first snippet work fine, even though self._val and self.val are the same object?
In your example, val is a descriptor as described in 3.3.2.2. Implementing Descriptors. With a descriptor, access to an attribute is replaced with calls to its __get__, __set__ and __delete__ methods. The property function is a shortcut for creating descriptors. Since "property" and "descriptor" amount to much the same thing here, I don't know why two different names are used.
With
@property
def val(self):
    return self._val
val is bound to an object whose __get__ executes the body of the function. It returns the instance attribute _val.
With
@val.setter
def val(self, value):
    if value < 0:
        raise ValueError("Value has to be larger than 0")
    else:
        self._val = value
The object bound to val is updated so that its __set__ executes the body of the function, setting _val.
When Python evaluates f1.val, it finds the object referenced by "val", but before returning it, it checks whether that object has a __get__. If so, instead of returning the object itself, it calls __get__ and uses its return value. In your case, self._val is returned, so naturally the two are equal.
If you change your property getter to return "val" instead of "_val", Python still follows the same rule: "val" is a descriptor, so its __get__ is called, which tries to get "val" again, but it's a descriptor... and on and on.
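A minimal sketch of that runaway lookup, reduced to just the getter (Foo is a hypothetical class, same idea as the code in the question):

class Foo:
    @property
    def val(self):
        return self.val  # looks up the property again, which calls this getter again...

f = Foo()
# f.val  # RecursionError: maximum recursion depth exceeded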
How is it, that both f1._val and f1.val are the same object?
Because you end up storing the same object under both names in your code. You take your value, in this case 2, and it gets assigned to both self.val and self._val.
What you did is the equivalent of
a = 2
b = 2
a == b
True
Also, why type(f1.val) is int when it is a function?
When you use the @property decorator, the method is no longer called explicitly; accessing f1.val runs the getter and gives you its return value, which here is an int. You can think of it almost like an attribute.
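A quick check of this, assuming the foo class from the question (with the @ decorators):

f1 = foo(2)
print(type(f1.val))   # <class 'int'>: the getter ran and returned self._val
print(type(foo.val))  # <class 'property'>: on the class itself you get the descriptor object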

Non-iterable Object After __init__ Array in Python Class

Say I would like to create a Python class that behaves as an array of another class. While __init__ is being called, it recognizes itself as an array (iterable); however, when I access it again through some other method, or even by index, the object is non-iterable. I wonder which part I got wrong, or perhaps there are DOs and DON'Ts for Python classes that I'm missing?
Last but not least, this is an attempt to simplify one object type to another (trying to cast from one class to another). Perhaps the code below will give a better clarification.
The example is below:
Say I have an object FOO
FOO.name = "john"
FOO.records[0].a = 1
FOO.records[0].b = 2
FOO.records[1].a = 4
FOO.records[1].b = 5
And I create a python class
class BAR:
    def __init__(self, record):
        self.a = int(record.a)
        self.b = int(record.b)
and another class which is meant to store BAR objects as an array
class BARS:
    def __init__(self, bars):
        self = numpy.array([])  # regardless of the array type (Python native or NumPy) it does not work
        for item in bars:
            self = numpy.append(self, BAR(item))
So what I would expect this code to do is that if I call
A = BARS(FOO.records)
I would get an iterable A. But this does not work, even though inside BARS's __init__, self is seen as an iterable object.
If one should not expect a Python class to behave in this manner, I hope you could at least point out what the logical and Pythonic alternative would be.
Perhaps answering my own question, after the hint from the comment above, would be good.
It turns out that assigning to self inside the class like that is a DON'T (silly me, trying to take a shortcut).
To make the class iterable, one needs an __iter__ method (together with __next__), and __getitem__ for index access (maybe some other methods as well, but let's stick to these three for now).
So, the code above should look like this
class BARS:
    def __init__(self, records):
        self.records = []  # Use a list for simplicity
        for record in records:
            self.records.append(BAR(record))

    def __iter__(self):
        self.n = 0
        return self

    def __next__(self):
        if self.n < len(self.records):
            result = self.records[self.n]
            self.n += 1
            return result
        else:
            raise StopIteration

    def __getitem__(self, key):
        return self.records[key]
Eventually, this yields an object that is both iterable and accessible by index.
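A short usage sketch of the class above, with Record as a hypothetical stand-in for the items in FOO.records:

class Record:  # stand-in for the record objects BAR expects
    def __init__(self, a, b):
        self.a, self.b = a, b

A = BARS([Record(1, 2), Record(4, 5)])
for bar in A:       # driven by __iter__/__next__
    print(bar.a, bar.b)
print(A[1].b)       # 5, via __getitem__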

How to set up a class with all the methods of, and that functions like, a built-in such as float, but holds onto extra data?

I am working with 2 data sets on the order of ~ 100,000 values. These 2 data sets are simply lists. Each item in the list is a small class.
class Datum(object):
    def __init__(self, value, dtype, source, index1=None, index2=None):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
For each datum in one list, there is a matching datum in the other list that has the same dtype, source, index1, and index2, which I use to sort the two data sets such that they align. I then do various work with the matching data points' values, which are always floats.
Currently, if I want to determine the relative values of the floats in one data set, I do something like this.
minimum = min([x.value for x in data])
for datum in data:
    datum.value -= minimum
However, it would be nice to have my custom class inherit from float, and be able to act like this.
minimum = min(data)
data = [x - minimum for x in data]
I tried the following.
class Datum(float):
    def __new__(cls, value, dtype, source, index1=None, index2=None):
        new = float.__new__(cls, value)
        new.dtype = dtype
        new.source = source
        new.index1 = index1
        new.index2 = index2
        return new
However, doing
data = [x - minimum for x in data]
removes all of the extra attributes (dtype, source, index1, index2).
How should I set up a class that functions like a float, but holds onto the extra data that I instantiate it with?
UPDATE: I do many types of mathematical operations beyond subtraction, so rewriting all of the methods that work with a float would be very troublesome, and frankly I'm not sure I could rewrite them properly.
I suggest subclassing float and using a couple decorators to "capture" the float output from any method (except for __new__ of course) and returning a Datum object instead of a float object.
First we write the method decorator (which really isn't being used as a decorator below, it's just a function that modifies the output of another function, AKA a wrapper function):
def mydecorator(f, cls):
    # f is the method being modified, cls is its class (in this case, Datum)
    def func_wrapper(*args, **kwargs):
        # *args and **kwargs are all the arguments that were passed to f
        newvalue = f(*args, **kwargs)
        # newvalue now contains the output float would normally produce
        ## Now get the cls instance provided as part of args (we need one
        ## if we're going to reattach instance information later):
        try:
            self = args[0]
            ## Now check to make sure the new value is an instance of some numerical
            ## type, but NOT a bool or a cls type (which might lead to recursion).
            ## Including ints so things like modulo and round will work right
            if (isinstance(newvalue, float) or isinstance(newvalue, int)) and not isinstance(newvalue, bool) and type(newvalue) != cls:
                ## If newvalue is a float or int, we make a new cls instance using the
                ## newvalue for value and the previous self instance information (args[0])
                ## for the other fields
                return cls(newvalue, self.dtype, self.source, self.index1, self.index2)
        # IndexError raised if no args provided, AttributeError raised if self isn't a cls instance
        except (IndexError, AttributeError):
            pass
        ## If newvalue isn't numerical, or we don't have a self, just return what
        ## float would normally return
        return newvalue
    # the function has now been modified and we return the modified version
    # to be used instead of the original version, f
    return func_wrapper
The first decorator only applies to the single method it is attached to. But we want it to decorate all (actually, almost all) the methods inherited from float (well, those that appear in float's __dict__, anyway). This second decorator will apply our first decorator to all of the methods in the float subclass except for those listed as exceptions (see this answer):
def for_all_methods_in_float(decorator, *exceptions):
    def decorate(cls):
        for attr in float.__dict__:
            if callable(getattr(float, attr)) and attr not in exceptions:
                setattr(cls, attr, decorator(getattr(float, attr), cls))
        return cls
    return decorate
Now we write the subclass much the same as you had before, but decorated, and excluding __new__ from decoration (I guess we could also exclude __init__ but __init__ doesn't return anything, anyway):
@for_all_methods_in_float(mydecorator, '__new__')
class Datum(float):
    def __new__(klass, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        return super(Datum, klass).__new__(klass, value)

    def __init__(self, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2
        super(Datum, self).__init__()
Here are our testing procedures; iteration seems to work correctly:
d1 = Datum(1.5)
d2 = Datum(3.2)
d3 = d1+d2
assert d3.source == 'source'
L=[d1,d2,d3]
d4=max(L)
assert d4.source == 'source'
L = [i for i in L]
assert L[0].source == 'source'
assert type(L[0]) == Datum
minimum = min(L)
assert [x - minimum for x in L][0].source == 'source'
Notes:
I am using Python 3. Not certain if that will make a difference for you.
This approach effectively overrides EVERY method of float other than the exceptions, even the ones for which the result isn't modified. There may be side effects to this (subclassing a built-in and then overriding all of its methods), e.g. a performance hit or something; I really don't know.
This will also decorate nested classes.
This same approach could also be implemented using a metaclass.
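On that last note, here is a rough, untested sketch of how the same wrapping could be driven from a metaclass instead of a class decorator. It reuses the mydecorator function defined above; FloatWrappingMeta and MetaDatum are hypothetical names for this sketch only:

class FloatWrappingMeta(type):
    # wrap every callable in float.__dict__ (except listed exceptions)
    # with mydecorator, just like for_all_methods_in_float does
    exceptions = ('__new__',)

    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        for attr in float.__dict__:
            if callable(getattr(float, attr)) and attr not in mcls.exceptions:
                setattr(cls, attr, mydecorator(getattr(float, attr), cls))
        return cls

class MetaDatum(float, metaclass=FloatWrappingMeta):
    def __new__(klass, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        return super(MetaDatum, klass).__new__(klass, value)

    def __init__(self, value, dtype="dtype", source="source", index1="index1", index2="index2"):
        self.value = value
        self.dtype = dtype
        self.source = source
        self.index1 = index1
        self.index2 = index2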
The problem is that when you do:
x - minimum
in terms of types you are doing either:
datum - float, or datum - integer
Either way, your datum class doesn't define how to do either of those, so Python looks at the parent classes of the arguments if it can. Since datum is a kind of float, it can fall back on float's subtraction, and the calculation ends up being
float - float
which will obviously result in a plain float; Python has no way of knowing how to construct your datum object unless you tell it.
To solve this you either need to implement the mathematical operators so that Python knows how to do datum - float, or come up with a different design.
Assuming that dtype, source, index1 and index2 need to stay the same after a calculation, then as an example your class needs:
def __sub__(self, other):
    return datum(self.value - other, self.dtype, self.source, self.index1, self.index2)
this should work - not tested
and this will now allow you to do this
d = datum(23.0, dtype="float", source="me", index1=1)
e = d - 16
print e.value, e.dtype, e.source, e.index1, e.index2
which should result in :
7.0 float me 1 None
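If more operations than subtraction are needed, the same pattern simply repeats for the other operator methods. A rough sketch in the same untested spirit, again assuming the plain datum class with a .value attribute:

def __add__(self, other):
    return datum(self.value + other, self.dtype, self.source, self.index1, self.index2)

def __rsub__(self, other):
    # called for `16 - d`, i.e. when the datum is on the right-hand side
    return datum(other - self.value, self.dtype, self.source, self.index1, self.index2)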

Why does assigning to self not work, and how to work around the issue?

I have a class (list of dicts) and I want it to sort itself:
class Table(list):
    …
    def sort(self, in_col_name):
        self = Table(sorted(self, key=lambda x: x[in_col_name]))
but it doesn't work at all. Why? How can I avoid it, other than by sorting it externally, like:
new_table = Table(sorted(old_table, key=lambda x: x['col_name']))
Isn't it possible to manipulate the object itself? It's more meaningful to have:
class Table(list):
    pass
than:
class Table(object):
    l = []
    …
    def sort(self, in_col_name):
        self.l = sorted(self.l, key=lambda x: x[in_col_name])
which, I think, works.
And in general, isn't there any way in Python in which an object is able to change itself (not just one of its instance variables)?
You can't re-assign to self from within a method and expect it to change external references to the object.
self is just an argument that is passed to your function. It's a name that points to the instance the method was called on. "Assigning to self" is equivalent to:
def fn(a):
    a = 2

a = 1
fn(a)
# a is still equal to 1
Assigning to self changes what the self name points to (from one Table instance to a new Table instance here). But that's it. It just rebinds the name (in the scope of your method); it does not affect the underlying object, nor other names (references) that point to it.
Just sort in place using list.sort:
def sort(self, in_col_name):
    super(Table, self).sort(key=lambda x: x[in_col_name])
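For example, a quick sketch assuming the Table subclass of list from the question:

t = Table([{'col_name': 2}, {'col_name': 1}])
t.sort('col_name')
print(t)  # [{'col_name': 1}, {'col_name': 2}]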
Python passes arguments by assignment: a parameter is just a new name bound to the object you passed in, so rebinding a parameter never has any effect outside the function. self is just the name you chose for one of the parameters.
I was intrigued by this question because I had never thought about this. I looked for the list.sort code to see how it's done there, but apparently it's in C. I think I see what you're getting at: what if there is no super method to invoke? Then you can do something like this:
class Table(list):
    def pop_n(self, n):
        for _ in range(n):
            self.pop()

>>> a = Table(range(10))
>>> a.pop_n(3)
>>> print a
[0, 1, 2, 3, 4, 5, 6]
You can call self's methods, do index assignments to self and whatever else is implemented in its class (or that you implement yourself).
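One common way to use that "index assignment" route is slice assignment, which replaces the list's contents in place so that external references see the change. A small sketch (sort_by is a hypothetical name chosen so it doesn't shadow list.sort):

class Table(list):
    def sort_by(self, in_col_name):
        # slice assignment mutates this object instead of rebinding the name
        self[:] = sorted(self, key=lambda x: x[in_col_name])

t = Table([{'a': 3}, {'a': 1}])
alias = t
t.sort_by('a')
print(alias)  # [{'a': 1}, {'a': 3}] -- the same object was changed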
