In writing a tkinter root window as a class, I'm using the following code:
class RootWin(Tk):
    def __init__(self, ...args go here...):
        super().__init__()
Although the code is correct, and works, I am uncomfortable writing code that I don't fully understand, and despite the many explanations I have come across, none have clarified this for me.
I understand that the line class RootWin(Tk): indicates that I am creating a class called RootWin that inherits from Tk. In the next line, self refers to the instance of this class I will create later in my code, and the args specify the parameters I want to pass to this specific instance. That much is very clear.
Then, the explanations I've come across indicate that super().__init__() runs the init method of Tk (the parent class).
But why is it necessary to run the init method of the Tk class? If class RootWin(Tk) already indicates that my new RootWin class inherits from Tk, then why would anything more be required?
Perhaps the best way to pose this question is to ask it in three explicit parts, and request three answers, with apologies, if that's asking a lot. I really want to understand this!
Question 1: What is accomplished by the line
class RootWin(Tk):
Question 2: What is accomplished by the line
def __init__(self,...args go here...):
Question 3: what is accomplished by the following line that has not already been accomplished by the two previous lines?
super().__init__()
Any advice appreciated.
But why is it necessary to run the init method of the Tk class?
Your own class has some initialization it performs, correct? There is code in your __init__ that must run for your class to be useful. This is where, for example, you would create other widgets for your app, variables, etc.
The tkinter base classes are the same way. They have code in their own __init__ method that must be run for the class to be useful. This code doesn't run automatically if you create your own __init__. Therefore, you must call it so that the widget is properly initialized.
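For example, here is a minimal sketch (the title_text parameter and the Label widget are just illustrative, not part of the original question): Tk's __init__ is what actually creates the underlying window, and everything else your __init__ does depends on that having happened.

from tkinter import Tk, Label

class RootWin(Tk):
    def __init__(self, title_text):
        super().__init__()        # run Tk's own initialization first
        self.title(title_text)    # the underlying window now exists
        self.label = Label(self, text="hello")
        self.label.pack()

root = RootWin("demo")
root.mainloop()

If the super().__init__() line is removed, the Label call fails, because the Tk application the widget should attach to was never set up.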
Question 1: What is accomplished by the line class RootWin(Tk):
Answer: it begins the definition of a new class named RootWin that inherits from the class Tk.
Question 2: What is accomplished by the line def __init__(self,...args go here...)
Answer: it defines a method that is automatically called by Python when you create an instance of your custom class. It also defines the arguments that your method may require.
When you do foo = RootWin(), Python will automatically call RootWin.__init__ and pass in the instance (self) as the first argument. The rest are to be supplied by the caller.
Question 3: what is accomplished by the following line that has not already been accomplished by the two previous lines? super().__init__()
Answer: First, it has not been accomplished by the two previous lines. Because of the two previous lines, python will not automatically call the __init__ method of the base class. That responsibility becomes yours when you define a custom __init__. When you call super().__init__() you are explicitly requesting that the __init__ method of the base class be called.
The advantage to requiring you to explicitly call it is that you now have a choice of when or if to call it. While you almost always should, you might choose to do some custom initialization either before or after the base class has been initialized.
Note that none of this is unique to tkinter. This is how all python objects work.
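Here is a plain-Python sketch of that general rule, with made-up classes, showing that a subclass __init__ completely replaces the base __init__ unless you call it:

class Base:
    def __init__(self):
        self.ready = True

class WithoutSuper(Base):
    def __init__(self):
        pass                    # Base.__init__ never runs

class WithSuper(Base):
    def __init__(self):
        super().__init__()      # Base.__init__ runs, so self.ready exists

print(hasattr(WithoutSuper(), "ready"))  # False
print(hasattr(WithSuper(), "ready"))     # True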
I have a base class that cannot be instantiated (BaseFruit in my simplified example) and a few derived classes (for instance Apple in my example) that should all share the same method (printfuture). However, there are many possible variants, and I would like to let the user choose which variant should be used (for instance sadfuture and saddestfuture).
As my printfuture method is common to all my derived classes, I thought that it would be appropriate to catch the user argument for the variant with the __new__ method of my base class and assign the method to the base class itself, as written in the example below:
# my custom methods
def sadfuture(self):
    """A sad future"""
    print 'It looks {}!'.format(self.aspect)
    print 'Mmm yummy !'

def saddestfuture(self):
    """A saddest future"""
    print 'It looks {}'.format(self.aspect)
    print 'Garbage !'

# my base class
class BaseFruit(object):
    """Fruit base class"""
    def __new__(cls, *args, **kwargs):
        setattr(cls, 'printfuture', kwargs['usermethod'])
        return object.__new__(cls)

# My class
class Apple(BaseFruit):
    """An apple class"""
    def __init__(self, aspect, usermethod=sadfuture):
        self.aspect = aspect

if __name__ == '__main__':
    goodapple = Apple(aspect='delicious', usermethod=sadfuture)
    goodapple.printfuture() # ==> ok
    badapple = Apple(aspect='rotten', usermethod=saddestfuture)
    goodapple.printfuture() # ==> not ok anymore
    badapple.printfuture() # ==> ok
Which prints:
It looks delicious!
Mmm yummy !
It looks delicious
Garbage !
It looks rotten
Garbage !
instead of the expected behavior:
It looks delicious!
Mmm yummy !
It looks delicious!
Mmm yummy !
It looks rotten
Garbage !
I do understand that I have overwritten my base class and my first object has changed its behavior. So, my main question is: how can I achieve the expected behavior while keeping my custom methods out of the base class?
Comments on best practices and on proper designs for such problems are also welcome.
The "expected" behavior is truly what is actually printed. So, the behavior is not what "you were expecting", which is a different thing. Let's se why:
What you are doing is creating a new method on the instantiated class (in this case, Apple) each time you mak ea new instance of Apple. The line setattr(cls, 'printfuture', kwargs['usermethod']) does exactly that, each time you create a new instance of BaseFruit or any subclass of it. (By the way, this line could be simply cls.printfuture = kwargs['usermethod'], there is no need for setattr if the attribute name is hardcoded).
So, when you create your second instance of Apple, the callbadapple = Apple(aspect='rotten', usermethod=saddestfuture) just make saddestfuture the printfuture for the Apple class to be saddestfuture, not just the method for badapple, but for any instance of Apple.
Fixing that has no need for a metaclass - you can use the code in __new__ itself to create a "pseudomethod", attached to the instance instead - as you intend. But you have to do that on the instance, after it is created, when you have a reference to the instance, not before instantiation, whenyou just have a reference to the class itself.
Since there is no actual code you need to run on before instatianting the class, you may as well bind the method-like function in __init__, and leave customizing __new__ just for when it is really needed. And while at that, it won't hurt to use super instead of hardcoding the superclass's call:
...
# my base class
class BaseFruit(object):
    """Fruit base class"""
    def __init__(self, *args, **kwargs):
        printfuture = kwargs.pop('usermethod')
        super(BaseFruit, self).__init__(*args, **kwargs)
        # Wrap the call to the passed method in a function
        # that captures "self" from here, since Python does not
        # insert "self" in calls to functions
        # attached to instances.
        self.printfuture = lambda: printfuture(self)

# My class
class Apple(BaseFruit):
    """An apple class"""
    def __init__(self, aspect, usermethod=sadfuture):
        super(Apple, self).__init__(usermethod=usermethod)
        self.aspect = aspect
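With this version each instance keeps its own variant, so the test calls from the question should now produce the expected output (a quick sketch, reusing the sadfuture and saddestfuture functions defined above):

goodapple = Apple(aspect='delicious', usermethod=sadfuture)
badapple = Apple(aspect='rotten', usermethod=saddestfuture)
goodapple.printfuture()  # It looks delicious! / Mmm yummy !
badapple.printfuture()   # It looks rotten / Garbage !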
And as for metaclasses, this has no need for them - on the contrary, you have to customize each instance as it is created. We usually make use of metaclasses when we have to customize the class itself. Your original code is doing that (customizing the class itself), but the code for it runs when each instance is created, which made for the behavior you were not expecting. If the code to create the printfuture method were in the metaclass's __new__ method instead, which is not the same as being in a superclass, that would happen just once, when each subclass is declared (and all instances of that subclass would share the same printfuture method).
Now, once you grasp how this works, please just move to Python 3 to continue learning this stuff. Python 2 will reach complete end of life in 2 years from now, and will be useless in any project. Having to maintain legacy code in Python 2 is one thing; learning or starting new projects is another - you should only use Python 3 for that.
I think that the problem is coming from the base class.
When you use BaseFruit as the base class for the Apple class, Python assigns any value that exists on the Apple class and the BaseFruit class directly to the class itself, not to the individual instances. Since both 'apples' are based on the same base class, they share the values that come from this class.
When you set saddestfuture as the function to be executed, you set it kind of 'globally' for all objects based on the BaseFruit class.
You need a line
self.userm = usermethod
in the __init__ of the Apple class. Then you pass this self.userm to the base class as a kwarg and not the string "usermethod".
I don't know exactly the syntax for this operation, as I have not worked with Python's inheritance rules for a long time and I admit I have forgotten it. Maybe somebody else can propose code in a comment, or you can work it out yourself :-).
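For what it's worth, a rough sketch of that idea (binding the chosen function to the instance in __init__; the names follow the question's code, and this is just one possible way to write it):

class BaseFruit(object):
    """Fruit base class: bind the chosen variant to each instance."""
    def __init__(self, usermethod):
        # keep the function on the instance, and wrap it so that
        # calling self.printfuture() passes self along
        self.userm = usermethod
        self.printfuture = lambda: self.userm(self)

class Apple(BaseFruit):
    """An apple class"""
    def __init__(self, aspect, usermethod=sadfuture):
        BaseFruit.__init__(self, usermethod)
        self.aspect = aspect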
I downloaded a program to test on a laptop that only has Python 2.4.4 on it, and it keeps telling me there is a syntax error at the parentheses of class main(). I have no experience with classes, so I am looking for a quick fix for this problem. How are classes different in Python 2?
class main():
    def __init__(self):
        response = self.valid_input("New game or Load game?", ["load", "new"])
        if response == "load":
The syntax error always points at the ( part.
In Python 2, there are two styles of classes, old and new, and they are different and not totally compatible with each other. In order to get a new-style class (think classic OO class), it must explicitly inherit from object. Omitting the object inheritance is valid syntax, but the class concept is not the same. So use:
class main(object): and know that it is not the same as class main:
In Python 3, the object inheritance is implicit, so:
class main: is the same as class main(object): and is a new-style class.
You should code with new style classes, as that is the future of Python and the only class style available in 3. See here for more detailed information.
Python class inherits object
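Applied to the snippet in the question, a version that should parse on Python 2.4 might look like this (valid_input is assumed to be defined elsewhere in the asker's class, and the body of the if is just a placeholder):

class main(object):  # or simply "class main:" on Python 2.4
    def __init__(self):
        response = self.valid_input("New game or Load game?", ["load", "new"])
        if response == "load":
            pass  # load-game logic goes here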
I don't have a Python 2.4 interpreter to test this, but it seems that in Python 2.4 you either don't use parentheses (class main:) or you must specify at least one class to inherit from (class main(object):).
https://docs.python.org/release/2.4.4/ref/class.html
I'm enhancing an existing class that does some calculations in the __init__ function to determine the instance state. Is it OK to call __init__() from __setstate__() in order to reuse those calculations?
To summarize reactions from Kroltan and jonsrharpe:
Technically it is OK
Technically it will work and if you do it properly, it can be considered OK.
Practically it is tricky, avoid that
If you edit the code in the future and touch __init__, then it is easy (even for you) to forget about its use in __setstate__, and then you end up in a difficult-to-debug situation (asking yourself where it comes from).
class Calculator():
    def __init__(self):
        pass  # some calculation stuff here

    def __setstate__(self, state):
        self.__init__()
The calculation stuff is better isolated into a separate shared method:
class Calculator():
    def __init__(self):
        self._shared_calculation()

    def __setstate__(self, state):
        self._shared_calculation()

    def _shared_calculation(self):
        pass  # some calculation stuff here
This way you will notice the shared use when you later change the calculation.
Note: use of "_" as prefix for the shared method is arbitrary, you do not have to do that.
It's usually preferable to write a method called __getnewargs__ instead. That way, the pickling mechanism will recreate the object with those arguments for you automatically.
Another approach is to customize the constructor (__init__) in a subclass. Ideally it is better to have one constructor and adapt it to your needs in the subclass:
class Person:
    def __init__(self, name, job=None, pay=0):
        self.name = name
        self.job = job
        self.pay = pay

class Manager(Person):
    def __init__(self, name, pay):
        Person.__init__(self, name, 'title', pay)  # Run constructor with 'title'
Calling constructors this way turns out to be a very common coding pattern in Python. By itself, Python uses inheritance to look for and call only one __init__ method at construction time: the lowest one in the class tree.
If you need higher __init__ methods to be run at construction time, you must call them manually, usually through the superclass name, as shown in the code above. This way you augment the superclass constructor and adjust its logic in the subclass to your liking.
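In Python 3 the same call is usually written with super(), which avoids hardcoding the parent class name; a quick sketch using the Manager example above:

class Manager(Person):
    def __init__(self, name, pay):
        super().__init__(name, 'title', pay)  # same effect, no hardcoded Person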
As suggested by Jan, it is tricky, and you will end up in a difficult debugging situation if you call it in the same class.
I'm teaching myself Python and my most recent lesson was that Python is not Java, and so I've just spent a while turning all my Class methods into functions.
I now realise that I don't need to use class methods for what I would have done with static methods in Java, but now I'm not sure when I would use them. All the advice I can find about Python class methods is along the lines that newbies like me should steer clear of them, and the standard documentation is at its most opaque when discussing them.
Does anyone have a good example of using a Class method in Python or at least can someone tell me when Class methods can be sensibly used?
Class methods are for when you need to have methods that aren't specific to any particular instance, but still involve the class in some way. The most interesting thing about them is that they can be overridden by subclasses, something that's simply not possible in Java's static methods or Python's module-level functions.
If you have a class MyClass, and a module-level function that operates on MyClass (factory, dependency injection stub, etc), make it a classmethod. Then it'll be available to subclasses.
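A quick sketch of that idea (the class names here are made up for illustration): the factory is written once on the base class, and a subclass gets a working version for free because cls is whichever class the call was made on.

class Shape:
    def __init__(self, name):
        self.name = name

    @classmethod
    def from_dict(cls, data):
        # cls is Shape when called as Shape.from_dict(...),
        # and Circle when called as Circle.from_dict(...)
        return cls(data["name"])

class Circle(Shape):
    pass

print(type(Shape.from_dict({"name": "s"})))   # <class '__main__.Shape'>
print(type(Circle.from_dict({"name": "c"})))  # <class '__main__.Circle'>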
Factory methods (alternative constructors) are indeed a classic example of class methods.
Basically, class methods are suitable anytime you would like to have a method which naturally fits into the namespace of the class, but is not associated with a particular instance of the class.
As an example, in the excellent unipath module:
Current directory
Path.cwd()
Return the actual current directory; e.g., Path("/tmp/my_temp_dir"). This is a class method.
.chdir()
Make self the current directory.
As the current directory is process wide, the cwd method has no particular instance with which it should be associated. However, changing the cwd to the directory of a given Path instance should indeed be an instance method.
Hmmm... as Path.cwd() does indeed return a Path instance, I guess it could be considered to be a factory method...
Think about it this way: normal methods are useful to hide the details of dispatch: you can type myobj.foo() without worrying about whether the foo() method is implemented by the myobj object's class or one of its parent classes. Class methods are exactly analogous to this, but with the class object instead: they let you call MyClass.foo() without having to worry about whether foo() is implemented specially by MyClass because it needed its own specialized version, or whether it is letting its parent class handle the call.
Class methods are essential when you are doing set-up or computation that precedes the creation of an actual instance, because until the instance exists you obviously cannot use the instance as the dispatch point for your method calls. A good example can be viewed in the SQLAlchemy source code; take a look at the dbapi() class method at the following link:
https://github.com/zzzeek/sqlalchemy/blob/ab6946769742602e40fb9ed9dde5f642885d1906/lib/sqlalchemy/dialects/mssql/pymssql.py#L47
You can see that the dbapi() method, which a database backend uses to import the vendor-specific database library it needs on demand, is a class method because it needs to run before instances of a particular database connection start getting created, but it cannot be a simple function or static method, because they want it to be able to call other supporting methods that might similarly need to be written more specifically in subclasses than in their parent class. And if you dispatch to a plain function or static method, then you "forget" and lose the knowledge about which class is doing the initializing.
I recently wanted a very light-weight logging class that would output varying amounts of output depending on the logging level that could be programmatically set. But I didn't want to instantiate the class every time I wanted to output a debugging message or error or warning. But I also wanted to encapsulate the functioning of this logging facility and make it reusable without the declaration of any globals.
So I used class variables and the @classmethod decorator to achieve this.
With my simple Logging class, I could do the following:
Logger._level = Logger.DEBUG
Then, in my code, if I wanted to spit out a bunch of debugging information, I simply had to code
Logger.debug( "this is some annoying message I only want to see while debugging" )
Errors could be output with
Logger.error( "Wow, something really awful happened." )
In the "production" environment, I can specify
Logger._level = Logger.ERROR
and now, only the error message will be output. The debug message will not be printed.
Here's my class:
class Logger :
    ''' Handles logging of debugging and error messages. '''
    DEBUG = 5
    INFO = 4
    WARN = 3
    ERROR = 2
    FATAL = 1
    _level = DEBUG

    def __init__( self ) :
        Logger._level = Logger.DEBUG

    @classmethod
    def isLevel( cls, level ) :
        return cls._level >= level

    @classmethod
    def debug( cls, message ) :
        if cls.isLevel( Logger.DEBUG ) :
            print "DEBUG: " + message

    @classmethod
    def info( cls, message ) :
        if cls.isLevel( Logger.INFO ) :
            print "INFO : " + message

    @classmethod
    def warn( cls, message ) :
        if cls.isLevel( Logger.WARN ) :
            print "WARN : " + message

    @classmethod
    def error( cls, message ) :
        if cls.isLevel( Logger.ERROR ) :
            print "ERROR: " + message

    @classmethod
    def fatal( cls, message ) :
        if cls.isLevel( Logger.FATAL ) :
            print "FATAL: " + message
And some code that tests it just a bit:
def logAll() :
    Logger.debug( "This is a Debug message." )
    Logger.info ( "This is a Info message." )
    Logger.warn ( "This is a Warn message." )
    Logger.error( "This is a Error message." )
    Logger.fatal( "This is a Fatal message." )

if __name__ == '__main__' :
    print "Should see all DEBUG and higher"
    Logger._level = Logger.DEBUG
    logAll()
    print "Should see all ERROR and higher"
    Logger._level = Logger.ERROR
    logAll()
Alternative constructors are the classic example.
It allows you to write generic class methods that you can use with any compatible class.
For example:
@classmethod
def get_name(cls):
    print cls.name

class C:
    name = "tester"

C.get_name = get_name

# call it:
C.get_name()
If you don't use @classmethod you can do it with the self keyword, but it needs an instance of the class:
def get_name(self):
    print self.name

class C:
    name = "tester"

C.get_name = get_name

# call it:
C().get_name()  # <- note that it's an instance of class C
When a user logs in on my website, a User() object is instantiated from the username and password.
If I need a user object without the user being there to log in (e.g. an admin user might want to delete another user's account, so I need to instantiate that user and call its delete method):
I have class methods to grab the user object.
class User():
    # lots of code
    # ...
    # more code

    @classmethod
    def get_by_username(cls, username):
        return cls.query(cls.username == username).get()

    @classmethod
    def get_by_auth_id(cls, auth_id):
        return cls.query(cls.auth_id == auth_id).get()
I think the clearest answer is AmanKow's. It boils down to how you want to organize your code. You can write everything as module-level functions, which are wrapped in the namespace of the module, i.e.
module.py (file 1)
---------
def f1() : pass
def f2() : pass
def f3() : pass
usage.py (file 2)
--------
from module import *
f1()
f2()
f3()
def f4():pass
def f5():pass
usage1.py (file 3)
-------------------
from usage import f4,f5
f4()
f5()
The above procedural code is not well organized; as you can see, after only 3 modules it gets confusing, and it is hard to tell what each function does. You can use long descriptive names for functions (like in Java), but your code still becomes unmanageable very quickly.
The object-oriented way is to break your code down into manageable blocks, i.e. classes and objects, and functions can be associated with object instances or with classes.
With class functions you gain another level of division in your code compared with module-level functions.
So you can group related functions within a class to make them more specific to the task that you assigned to that class. For example, you can create a file utility class:
class FileUtil():
    @staticmethod
    def copy(source, dest): pass
    @staticmethod
    def move(source, dest): pass
    @staticmethod
    def copyDir(source, dest): pass
    @staticmethod
    def moveDir(source, dest): pass

# usage
FileUtil.copy("1.txt", "2.txt")
FileUtil.moveDir("dir1", "dir2")
This way is more flexible and more maintainable; you group functions together and it's more obvious what each function does. You also prevent name conflicts; for example, the function copy may exist in another imported module (for example, a network copy) that you use in your code, so when you use the full name FileUtil.copy() you remove the problem and both copy functions can be used side by side.
Honestly? I've never found a use for staticmethod or classmethod. I've yet to see an operation that can't be done using a global function or an instance method.
It would be different if python used private and protected members more like Java does. In Java, I need a static method to be able to access an instance's private members to do stuff. In Python, that's rarely necessary.
Usually, I see people using staticmethods and classmethods when all they really need to do is use python's module-level namespaces better.
I used to work with PHP, and recently I was asking myself, what's going on with this classmethod? The Python manual is very technical and very short in words, so it won't help with understanding that feature. I kept googling and googling and I found the answer -> http://code.anjanesh.net/2007/12/python-classmethods.html.
If you are too lazy to click it, my explanation is shorter and below. :)
In PHP (maybe not all of you know PHP, but this language is so straightforward that everybody should understand what I'm talking about) we have static variables like this:
class A
{
    static protected $inner_var = null;

    static public function echoInnerVar()
    {
        echo self::$inner_var."\n";
    }

    static public function setInnerVar($v)
    {
        self::$inner_var = $v;
    }
}

class B extends A
{
}

A::setInnerVar(10);
B::setInnerVar(20);

A::echoInnerVar();
B::echoInnerVar();
The output will be 20 in both cases.
However, in Python we can add the @classmethod decorator, and thus it is possible to get the output 10 and 20 respectively. Example:
class A(object):
    inner_var = 0

    @classmethod
    def setInnerVar(cls, value):
        cls.inner_var = value

    @classmethod
    def echoInnerVar(cls):
        print cls.inner_var

class B(A):
    pass

A.setInnerVar(10)
B.setInnerVar(20)

A.echoInnerVar()
B.echoInnerVar()
Smart, ain't it?
Class methods provide a "semantic sugar" (don't know if this term is widely used) - or "semantic convenience".
Example: you've got a set of classes representing objects. You might want to have the class method all() or find() so you can write User.all() or User.find(firstname='Guido'). That could be done using module-level functions, of course...
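A sketch of what such a finder might look like (the User class and its in-memory storage here are made up purely for illustration):

class User:
    _registry = []  # hypothetical in-memory storage

    def __init__(self, firstname):
        self.firstname = firstname
        User._registry.append(self)

    @classmethod
    def all(cls):
        return list(cls._registry)

    @classmethod
    def find(cls, firstname):
        return [u for u in cls._registry if u.firstname == firstname]

User("Guido")
print(len(User.all()))                             # 1
print(User.find(firstname="Guido")[0].firstname)   # Guido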
If you are not a "programmer by training", this should help:
I think I have understood the technical explanations above and elsewhere on the net, but I was always left with the question "Nice, but why do I need it? What is a practical use case?". And now life gave me a good example that clarified it all:
I am using it to control a global shared variable that is shared among instances of a class instantiated by the multi-threading module. In human language, I am running multiple agents that create examples for deep learning IN PARALLEL (imagine multiple players playing an ATARI game at the same time, each saving the results of their games to one common repository (the SHARED VARIABLE)).
I instantiate the players/agents with the following code (in Main/Execution Code):
a3c_workers = [A3C_Worker(self.master_model, self.optimizer, i, self.env_name, self.model_dir) for i in range(multiprocessing.cpu_count())]
It creates as many players as there are processor cores on my computer.
A3C_Worker is a class that defines the agent.
a3c_workers is a list of the instances of that class (i.e. each instance is one player/agent).
Now I want to know how many games have been played across all players/agents, so within the A3C_Worker definition I define a variable to be shared across all instances:
class A3C_Worker(threading.Thread):
    global_shared_total_episodes_across_all_workers = 0
Now, as the workers finish their games, they increase that count by 1 for each game finished.
At the end of my example generation I was closing the instances, but the shared variable kept the total number of games played, so when I re-ran the program my initial total number of episodes was the previous total. But I needed that count to start fresh for each run individually.
To fix that I specified:
class A3C_Worker(threading.Thread):
    @classmethod
    def reset(cls):
        A3C_Worker.global_shared_total_episodes_across_all_workers = 0
Then in the execution code I just call:
A3C_Worker.reset()
Note that it is a call to the CLASS overall, not to any INSTANCE of it individually. Thus it will set my counter to 0 for every new agent I initiate from now on.
Using the usual method definition def play(self): would require us to reset that counter for each instance individually, which would be more computationally demanding and harder to track.
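Putting the pieces together, a stripped-down sketch of this pattern (the actual training logic is omitted, and the lock is an addition to keep the shared increment safe) might look like this:

import threading

class A3C_Worker(threading.Thread):
    # shared by every instance of the class
    global_shared_total_episodes_across_all_workers = 0
    _lock = threading.Lock()

    def run(self):
        # each worker bumps the shared counter when it finishes a game
        with A3C_Worker._lock:
            A3C_Worker.global_shared_total_episodes_across_all_workers += 1

    @classmethod
    def reset(cls):
        cls.global_shared_total_episodes_across_all_workers = 0

A3C_Worker.reset()  # start every run with a clean counter
workers = [A3C_Worker() for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(A3C_Worker.global_shared_total_episodes_across_all_workers)  # 4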
What just hit me, coming from Ruby, is that a so-called class method and a so-called instance method is just a function with semantic meaning applied to its first parameter, which is silently passed when the function is called as a method of an object (i.e. obj.meth()).
Normally that object must be an instance, but the @classmethod decorator changes the rules to pass a class. You can call a class method on an instance (it's just a function) - the first argument will be its class.
Because it's just a function, it can only be declared once in any given scope (i.e. class definition). It follows therefore, as a surprise to a Rubyist, that you can't have a class method and an instance method with the same name.
Consider this:
class Foo():
    def foo(x):
        print(x)
You can call foo on an instance
Foo().foo()
<__main__.Foo instance at 0x7f4dd3e3bc20>
But not on a class:
Foo.foo()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method foo() must be called with Foo instance as first argument (got nothing instead)
Now add @classmethod:
class Foo():
    @classmethod
    def foo(x):
        print(x)
Calling on an instance now passes its class:
Foo().foo()
__main__.Foo
as does calling on a class:
Foo.foo()
__main__.Foo
It's only convention that dictates that we use self for that first argument on an instance method and cls on a class method. I used neither here to illustrate that it's just an argument. In Ruby, self is a keyword.
Contrast with Ruby:
class Foo
  def foo()
    puts "instance method #{self}"
  end

  def self.foo()
    puts "class method #{self}"
  end
end
Foo.foo()
class method Foo
Foo.new.foo()
instance method #<Foo:0x000000020fe018>
The Python class method is just a decorated function, and you can use the same techniques to create your own decorators. A decorated method wraps the real method (in the case of @classmethod it passes the additional class argument). The underlying method is still there, hidden but still accessible.
footnote: I wrote this after a name clash between a class and instance method piqued my curiosity. I am far from a Python expert and would like comments if any of this is wrong.
This is an interesting topic. My take on it is that the Python classmethod operates like a singleton rather than a factory (which returns a produced instance of a class). The reason it is a singleton is that there is a common object that is produced (the dictionary), but only once for the class, and it is shared by all instances.
To illustrate this, here is an example. Note that all instances have a reference to the single dictionary. This is not the Factory pattern as I understand it. This is probably very unique to Python.
class M():
    @classmethod
    def m(cls, arg):
        print "arg was", getattr(cls, "arg", None),
        cls.arg = arg
        print "arg is", cls.arg

M.m(1)   # prints arg was None arg is 1
M.m(2)   # prints arg was 1 arg is 2
m1 = M()
m2 = M()
m1.m(3)  # prints arg was 2 arg is 3
m2.m(4)  # prints arg was 3 arg is 4  << this breaks the factory pattern theory.
M.m(5)   # prints arg was 4 arg is 5
I was asking myself the same question a few times. And even though the guys here tried hard to explain it, IMHO the best (and simplest) answer I have found is the description of the class method in the Python documentation.
There is also a reference to the static method. And in case someone already knows instance methods (which I assume), this answer might be the final piece to put it all together...
Further and deeper elaboration on this topic can be found also in the documentation:
The standard type hierarchy (scroll down to Instance methods section)
@classmethod can be useful for easily instantiating objects of that class from outside resources. Consider the following:
import settings

class SomeClass:
    @classmethod
    def from_settings(cls):
        return cls(settings=settings)

    def __init__(self, settings=None):
        if settings is not None:
            self.x = settings['x']
            self.y = settings['y']
Then in another file:
from some_package import SomeClass
inst = SomeClass.from_settings()
Accessing inst.x will give the same value as settings['x'].
A class defines a set of instances, of course. And the methods of a class work on the individual instances. The class methods (and variables) are a place to hang other information that is related to the set of instances overall.
For example, if your class defines a set of students, you might want class variables or methods which define things like the set of grades the students can be members of.
You can also use class methods to define tools for working on the entire set. For example, Student.all_of_em() might return all the known students. Obviously, if your set of instances has more structure than just a set, you can provide class methods that know about that structure, e.g. Student.all_of_em(grade='juniors').
Techniques like this tend to lead to storing the members of the set of instances in data structures that are rooted in class variables. You need to take care to avoid frustrating garbage collection then.
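A minimal sketch of that idea (the Student class and its storage are made up for illustration; weak references are one way to keep the class-level registry from blocking garbage collection):

import weakref

class Student:
    _all = weakref.WeakSet()  # class-level registry; weak refs let
                              # unused instances be garbage collected

    def __init__(self, name, grade):
        self.name = name
        self.grade = grade
        Student._all.add(self)

    @classmethod
    def all_of_em(cls, grade=None):
        students = list(cls._all)
        if grade is not None:
            students = [s for s in students if s.grade == grade]
        return students

s1 = Student("Ann", "juniors")
s2 = Student("Bob", "seniors")
print(len(Student.all_of_em()))                    # 2
print(Student.all_of_em(grade='juniors')[0].name)  # Ann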
Class and object concepts are very useful for organizing things. It's true that all the operations that can be done by a method can also be done using a static function.
Just think of a scenario: building a student database system to maintain student details.
You need to have details about students, teachers and staff. You need to build functions to calculate fees, salary, marks, etc. Fees and marks are only applicable to students; salary is only applicable to staff and teachers. So if you create separate classes for every type of person, the code will be organized.