python modules, call main class in modules - python

This is my main module; I can call another module's function from it:
#main.py
from module_1 import module_1

class APP:
    def option(self):
        print("[1]:.....\n[2]:.......")
        option = int(input(":"))
        if (option == 1):
            module_1().do_something_1() #I called module_1

APP().option()
Let's say the user chose the first option; the program will then call the function in module_1:
#module_1.py
class module_1:
    def do_something_1(self):
        #.........................
        again = input("Do you wanna start again?")
        if (again == "Y"):
            #what I'm trying to do is here, I want to return the option function again in main.py
        else:
            #................
If the user wants to restart the program and answers the question with Y, how can the program get back to main.py (APP().option())?
[UPDATE_1]
I actually thought a circular import would work for me here, but I got an error.
main.py:
#main.py
from module_1 import module_1

class APP:
    def option(self):
        print("[1]:.....\n[2]:.......")
        option = int(input(":"))
        if (option == 1):
            module_1().do_something_1() #I called module_1

APP().option()
module_1.py:
from main import APP

class module_1:
    def do_something_1(self):
        print(".........")
        again = input("Do you wanna start again?")
        if again=="Y":
            return APP().option()
        else:
            #.......
I get this error:
.............................
from main import APP
ImportError: cannot import name 'APP' from partially initialized module 'main' (most likely due to a circular import)
.............................
[UPDATE_2] (after Grismar's answer)
main.py:
from File import *

class Options:
    def __init__(self):
        #...................
    def ask(self):
        try:
            option=int(input("==>"))
        except Exception as error:
            #......................................
            return Options().ask()
        if option==1:
            File().delete_all_files_extension()
        elif option==2:
            #.........
        elif option==3:
            #.........
        elif option==4:
            #..........
        elif option==5:
            #.........
        elif option==6:
            sys.exit()

Options().ask()
Let's say the user chose the first option; the program will then call the function in the File class.
File.py:
import sys
import os
import time

class File:
    def __init__(self):
        self.deleted_files_number = 0
        self.deleted_files_extension_number = 0
    def delete_all_files(self):
        try:
            path = input("[+]Path:")
        except Exception as error:
            print("ERROR:%s"%error)
            #.....................
            #I want to return the option function again in main.py
        try:
            file = open("log[delete_all_files].txt","w")
            if (os.path.exists(path)==True):
                for r,d,f in os.walk(path):
                    for file in f:
                        time.sleep(0.001)
                        print("[*]%s" %os.path.join(r,file))
                        os.remove(os.path.join(r,file))
                        open("log[delete_all_files].txt","a+").write(str(os.path.join(r,file)+"\n"))
                        self.deleted_files_number +=1
            print("[*]Deleted %s file" %self.deleted_files_number)
            again=input("[+]Do you wanna start again?[Y/N]").upper()
            if (again=="Y"):
                #here it is, the program has to go back to main.py so it can show what can be selected again.
            else:
                exit()
I mean, the program should return if an error or something unexpected occurs within the module, not only after the module has finished its work.

In your example, in main.py, you call APP.option(). APP.option() calls module_1().do_something_1(), where module_1 is a class imported from module_1.
You want module_1().do_something_1 to then call APP.option() again (under certain conditions).
A few remarks:
you would do well to follow Python naming conventions and name APP App.
there appears to be little or no utility in wrapping the module module_1's functionality in a class called module_1; all this gets you is two completely different entities sharing the same name. If you want to capture the functionality in a class, you should probably name it something that reminds developers using it of its function.
you're calling APP().option(), which means you're instantiating a new APP object, with a new menu, on every call
most importantly, since APP.option() can call module_1().do_something_1 indefinitely and module_1().do_something_1 can call APP.option(), you run the risk of creating an application that ends up exhausting the recursion depth (a tiny sketch of that failure mode follows these remarks).
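To make that last point concrete, here is a minimal, self-contained sketch (hypothetical names, no input() calls) of how the back-and-forth call chain eventually blows the stack:
import sys

def option():                  # stands in for APP.option()
    do_something()             # as if the user keeps picking option 1

def do_something():            # stands in for module_1().do_something_1()
    option()                   # as if the user keeps answering "Y"

sys.setrecursionlimit(100)     # shrink the limit so the failure shows up quickly
try:
    option()
except RecursionError as e:
    print("Eventually:", e)    # maximum recursion depth exceeded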
Object-oriented software design is all about picking the correct scope for your classes, giving them clear jobs that encapsulate behaviour and data that clearly goes together; to create software that's easier to maintain and extend.
The main function of your App class appears to be to present the user with options and execute matching functionality. The main function of module_1 appears to be to 'do something' and then return to the main loop. You want to present the user with the options again or allow them to exit the app, after 'doing something'.
From a design perspective, it seems to make more sense to have the App present the user with the 'start again' question - after all, it has nothing to do with whatever was done in module_1. If it does depend on what happens there, it makes sense to have the do_something_1() method return a result and base the decision to continue on the returned value.
You could even base that return value on the question you ask the user, although I don't like that design decision:
if again=="Y":
return True
else:
# ... do something else ...
return False
And in main.py:
def option(self):
    again = True
    while again:
        print("[1]:.....\n[2]:.......")
        option = int(input(":"))
        if (option == 1):
            again = module_1().do_something_1()
Another solution could be to create the module_1, passing the App it belongs to as an owner and keeping a reference to it in the object. Something like:
def __init__(self, owner):
    self.owner = owner

# ...
if again=="Y":
    return self.owner.option()
That solves the object creation issue, but is still circular of course.
You can have modules importing parts of each other and the correct way to do that would be by adding them to a package. However, you would still have to find a way to get around circular creation of the classes - if Class1 creates an instance of Class2, but Class2 creates an instance of Class1, you can see where that's going.
What's wrong with something like this:
main.py
from module_1 import Class_1

class App:
    def menu(self):
        run_menu = True
        while run_menu:
            print("[1]:.....\n[2]:.......")
            option = int(input(":"))
            if (option == 1):
                Class_1().do_something_1()
                print(".........")
            run_menu = input("Do you wanna start again?") == "Y"

App().menu()
module_1.py
class Class_1:
    def do_something_1(self):
        print('doing something')
        return
The problem you're having is more a problem of design, not so much of the language. If you have an example of functionality where you feel the circular import is a requirement, you should probably post a question with that example specifically - but you're likely to be marked a duplicate of one of many questions asking about circular imports.
The short answer is: there's almost always a better solution.

Related

Registering classes to factory with classes in different files

I have a factory as shown in the following code:
class ClassFactory:
    registry = {}

    @classmethod
    def register(cls, name):
        def inner_wrapper(wrapped_class):
            if name in cls.registry:
                print(f'Class {name} already exists. Will replace it')
            cls.registry[name] = wrapped_class
            return wrapped_class
        return inner_wrapper

    @classmethod
    def create_type(cls, name):
        exec_class = cls.registry[name]
        type = exec_class()
        return type

@ClassFactory.register('Class 1')
class M1():
    def __init__(self):
        print("Starting Class 1")

@ClassFactory.register('Class 2')
class M2():
    def __init__(self):
        print("Starting Class 2")
This works fine and when I do
if __name__ == '__main__':
    print(ClassFactory.registry.keys())
    foo = ClassFactory.create_type("Class 2")
I get the expected result of
dict_keys(['Class 1', 'Class 2'])
Starting Class 2
Now the problem is that I want to isolate classes M1 and M2 to their own files m1.py and m2.py, and in the future add other classes using their own files in a plugin manner.
However, simply placing it in their own file
m2.py
from test_ import ClassFactory

@ClassFactory.register('Class 2')
class M2():
    def __init__(self):
        print("Starting Class 2")
gives the result dict_keys(['Class 1']) since it never gets to register the class.
So my question is: How can I ensure that the class is registered when placed in a file different from the factory, without making changes to the factory file whenever I want to add a new class? How to self register in this way? Also, is this decorator way a good way to do this kind of thing, or are there better practices?
Thanks
How can I ensure that the class is registered when placed in a file different from the factory, without making changes to the factory file whenever I want to add a new class?
I'm playing around with a similar problem, and I've found a possible solution. It seems too much of a 'hack' though, so set your critical thinking levels to 'high' when reading my suggestion below :)
As you've mentioned in one of your comments above, the trick is to force the loading of the individual *.py files that contain individual class definitions.
Applying this to your example, this would involve:
Keeping all class implementations in a specific folder, e.g., structuring the files as follows:
.
├─ factory.py      # file with the ClassFactory class
└─ classes/
   ├─ __init__.py
   ├─ m1.py        # file with M1 class
   └─ m2.py        # file with M2 class
Adding the following statement to the end of your factory.py file, which will take care of loading and registering each individual class:
from classes import *
Add a piece of code like the snippet below to your __init__.py within the classes/ folder, so that it dynamically loads all classes [1]:
from inspect import isclass
from pkgutil import iter_modules
from pathlib import Path
from importlib import import_module

# iterate through the modules in the current package
package_dir = Path(__file__).resolve().parent
for (_, module_name, _) in iter_modules([package_dir]):
    # import the module and iterate through its attributes
    module = import_module(f"{__name__}.{module_name}")
    for attribute_name in dir(module):
        attribute = getattr(module, attribute_name)
        if isclass(attribute):
            # Add the class to this package's variables
            globals()[attribute_name] = attribute
If I then run your test code, I get the desired result:
# test.py
from factory import ClassFactory

if __name__ == "__main__":
    print(ClassFactory.registry.keys())
    foo = ClassFactory.create_type("Class 2")
$ python test.py
dict_keys(['Class 1', 'Class 2'])
Starting Class 2
Also, is this decorator way a good way to do this kind of thing, or are there better practices?
Unfortunately, I'm not experienced enough to answer this question. However, when searching for answers to this problem, I've come across the following sources that may be helpful to you:
[2] : this presents a method for registering class existence based on Python Metaclasses. As far as I understand, it relies on the registering of subclasses, so I don't know how well it applies to your case. I did not follow this approach, as I've noticed that the new edition of the book suggests the use of another technique (see bullet below).
[3], item 49 : this is the 'current' suggestion for subclass registering, which relies on the definition of the __init_subclass__() function in a base class.
If I had to apply the __init_subclass__() approach to your case, I'd do the following:
Add a Registrable base class to your factory.py (and slightly re-factor ClassFactory), like this:
class Registrable:
    def __init_subclass__(cls, name:str):
        ClassFactory.register(name, cls)

class ClassFactory:
    registry = {}

    @classmethod
    def register(cls, name:str, sub_class:Registrable):
        if name in cls.registry:
            print(f'Class {name} already exists. Will replace it')
        cls.registry[name] = sub_class

    @classmethod
    def create_type(cls, name):
        exec_class = cls.registry[name]
        type = exec_class()
        return type

from classes import *
Slightly modify your concrete classes to inherit from the Registrable base class, e.g.:
from factory import Registrable

class M2(Registrable, name='Class 2'):
    def __init__(self):
        print("Starting Class 2")

How to pass input argument from one python file to another

To simplify the problem, let's say I have 4 files, and right now I arrange them like the following.
main.py (where I start the program)
import sys

global important_arg
important_arg = sys.argv[1]

if __name__ == "__main__":
    import worker_a
    import worker_b

    if important_arg == "0":
        worker = worker_a.worker()
        worker.run()
    elif important_arg == "1":
        worker = worker_b.worker()
        worker.run()
worker_a/worker_b.py
import main
import helper

class worker:
    def run(self):
        a = helper.dosth()
        b = helper.dosth_2()
        # ....blah blah blah
helper.py (where the static functions needed by both worker_a and worker_b live)
import main

important_arg = main.important_arg  # This does not work, I know; the problem is how to make it work.

def dosth():
    ...
    # I have a tiny part that needs important_arg
    if important_arg == "0":
        print "This is worker A."
    elif important_arg == "1":
        print "This is worker B."
    ...

def dosth_2():
    ...
For sure, in this pattern my helper.py can no longer retrieve important_arg from main.py.
If I force it to run, no surprise,
The error will be
'module' object has no attribute 'important_arg'
How should I redesign the pattern, or is there any way to pass that arg from main.py to helper.py?
Besides, my last method is to convert the whole helper.py into a 'class'. But this is tedious, as I need to add back tons of 'self.', so unless I find that passing the variable is impossible, I am unlikely to use this method right now.
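The thread does not include an answer to this one; as a hedged illustration only, a minimal sketch of one common approach is to pass the value down explicitly instead of importing it back out of main (the file names mirror the question, everything else is assumed):
# helper.py -- sketch: accept the value as a parameter instead of importing main
def dosth(important_arg):
    if important_arg == "0":
        print("This is worker A.")
    elif important_arg == "1":
        print("This is worker B.")

# main.py -- sketch: read sys.argv once and hand the value down the call chain
import sys
import helper

if __name__ == "__main__":
    helper.dosth(sys.argv[1])
In the full program the worker classes would take important_arg in their constructors and forward it to helper, which removes the helper-to-main import entirely.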

Best practices for importing rarely used package in Python

My Python package depends on an external library for a few of its functions. This is a non-Python package and can be difficult to install, so I'd like users to still be able to use my package but have it fail when using any functions that depend on this non-Python package.
What is the standard practice for this? I could only import the non-Python package inside the methods that use it, but I really hate doing this.
My current setup:
myInterface.py
myPackage/
--classA.py
--classB.py
The interfaces script myInterface.py imports classA and classB and classB imports the non-Python package. If the import fails I print a warning. If myMethod is called and the package isn't installed there will be some error downstream but I do not catch it anywhere, nor do I warn the user.
classB is imported every time the interface script is called so I can't have anything fail there, which is why I included the pass. Like I said above, I could import inside the method and have it fail there, but I really like keeping all of my imports in one place.
From classB.py
try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    pass

class ClassB():
    ...
    def myMethod():
        swp = someWeirdPackage()
        ...
If you are only importing one external library, I would go for something along these lines:
try:
    import weirdModule
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    if not available:
        raise ImportError('weirdModule not available')
    ...
The conditional and error checking is only needed if you want to give more descriptive errors. If not, you can omit it and let Python throw the corresponding error when trying to call a non-imported module, as you do in your current setup.
If multiple functions do use weirdModule, you can wrap the checking into a function:
def require_weird_module():
if not available:
raise ImportError('weirdModule not available')
def f1():
require_weird_module()
...
def f2():
require_weird_module()
...
On the other hand, if you have multiple libraries to be imported by different functions, you can load them dynamically. Although it doesn't look pretty, Python caches them and there is nothing wrong with it. I would use importlib:
import importlib

def func_requiring_weirdmodule():
    weirdModule = importlib.import_module('weirdModule')
Again, if multiple of your functions import complicated external modules you can wrap them into:
def import_external(name):
    return importlib.import_module(name)

def f1():
    weird1 = import_external('weirdModule1')

def f2():
    weird2 = import_external('weirdModule2')
And last, you could create a handler to prevent importing the same module twice, something along the lines of:
class Importer(object):
    __loaded__ = {}

    @staticmethod
    def import_external(name):
        if name in Importer.__loaded__:
            return Importer.__loaded__[name]
        mod = importlib.import_module(name)
        Importer.__loaded__[name] = mod
        return mod

def f1():
    weird = Importer.import_external('weird1')

def f2():
    weird = Importer.import_external('weird1')
Although I'm pretty sure that importlib does caching behind the scenes, so you don't really need manual caching.
In short, although it does look ugly, there is nothing wrong with importing modules dynamically in Python. In fact, a lot of libraries rely on this. On the other hand, if it is just a special case of 3 methods accessing 1 external function, do use your approach, or my first one in case you want to add custom exception handling.
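A small sketch to back up the caching claim; it uses only the standard library, with json standing in for the 'weird' module:
import importlib
import sys

# importlib.import_module consults sys.modules first, so repeated dynamic
# imports return the very same module object rather than re-importing it.
first = importlib.import_module("json")
second = importlib.import_module("json")

print(first is second)        # True: both names point at one cached module object
print("json" in sys.modules)  # True: the cache importlib relies on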
I'm not really sure that there's any best practice in this situation, but I would redefine the function if it's not supported:
def warn_import():
    print("Cannot import someWeirdPackage")

try:
    import someWeirdPackage
    external_func = someWeirdPackage
except ImportError:
    external_func = warn_import

class ClassB():
    def myMethod(self):
        swp = external_func()

b = ClassB()
b.myMethod()
You can create two separate classes for the two cases. The first will be used when the package exists; the second will be used when the package does not exist.
class ClassB1():
    def myMethod(self):
        print("someWeirdPackage exists")
        # do something

class ClassB2(ClassB1):
    def myMethod(self):
        print("someWeirdPackage does not exist")
        # do something or raise Exception

try:
    import someWeirdPackage
    class ClassB(ClassB1):
        pass
except ImportError:
    class ClassB(ClassB2):
        pass
You can also use the approach given below to overcome the problem that you're facing.
class UnAvailableName(object):
    def __init__(self, name):
        self.target = name
    def __getattr__(self, attr):
        raise ImportError("{} is not available.".format(attr))

try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    someWeirdPackage = UnAvailableName("someWeirdPackage")

class ClassB():
    def myMethod(self):
        swp = someWeirdPackage.hello()

a = ClassB()
a.myMethod()

Is it possible to create a dynamic localized scope in Python?

I have a scenario where I'm dynamically running functions at run-time and need to keep track of a "localized" scope. In the example below, "startScope" and "endScope" would actually be creating levels of "nesting" (in reality, the stuff contained in this localized scope isn't print statements...it's function calls that send data elsewhere and the nesting is tracked there. startScope / endScope just set control flags that are used to start / end the current nesting depth).
This all works fine for tracking the nested data, however, exceptions are another matter. Ideally, an exception would result in "falling out" of the current localized scope and not end the entire function (myFunction in the example below).
def startScope():
    # Increment our control object's (not included in this example) nesting depth
    control.incrementNestingDepth()

def endScope():
    # Decrement our control object's (not included in this example) nesting depth
    control.decrementNestingDepth()

def myFunction():
    print "A"
    print "B"
    startScope()
    print "C"
    raise Exception
    print "D"
    print "This print statement and the previous one won't get printed"
    endScope()
    print "E"

def main():
    try:
        myFunction()
    except:
        print "Error!"
Running this would (theoretically) output the following:
>>> main()
A
B
C
Error!
E
>>>
I'm quite certain this isn't possible as I've written it above - I just wanted to paint a picture of the sort of end-result I'm trying to achieve.
Is something like this possible in Python?
Edit: A more relevant (albeit lengthy) example of how this is actually being used:
class Log(object):
    """
    Log class
    """
    def __init__(self):
        # DataModel is defined elsewhere and contains a bunch of data structures / handles nested data / etc...
        self.model = DataModel()
    def Warning(self, text):
        self.model.put("warning", text)
    def ToDo(self, text):
        self.model.put("todo", text)
    def Info(self, text):
        self.model.put("info", text)
    def StartAdvanced(self):
        self.model.put("startadvanced")
    def EndAdvanced(self):
        self.model.put("endadvanced")
    def AddDataPoint(self, data):
        self.model.put("data", data)
    def StartTest(self):
        self.model.put("starttest")
    def EndTest(self):
        self.model.put("endtest")
    def Error(self, text):
        self.model.put("error", text)
#myScript.py
from Logger import Log

def test_alpha():
    """
    Crazy contrived example
    In this example, there are 2 levels of nesting...everything up to StartAdvanced(),
    and after EndAdvanced() is included in the top level...everything between the two is
    contained in a separate level.
    """
    Log.Warning("Better be careful here!")
    Log.AddDataPoint(fancyMath()[0])
    data = getSerialData()
    if data:
        Log.Info("Got data, let's continue with an advanced test...")
        Log.StartAdvanced()
        # NOTE: If something breaks in one of the following methods, then GOTO (***)
        operateOnData(data)
        doSomethingCrazy(data)
        Log.ToDo("Fill in some more stuff here later...")
        Log.AddDataPoint(data)
        Log.EndAdvanced()
        # (***) Ideally, we would resume here if an exception is raised in the above localized scope
    Log.Info("All done! Log some data and wrap everything up!")
    Log.AddDataPoint({"data": "blah"})
    # Done
#framework.py
import inspect
from Logger import Log

class Framework(object):
    def __init__(self):
        print "Framework init!"
        self.tests = []
    def loadTests(self, file):
        """
        Simplifying this for the sake of clarity
        """
        for test in file:
            self.tests.append(test)
    def runTests(self):
        """
        Simplifying this for the sake of clarity
        """
        # test_alpha() as well as any other user tests will be run here
        for test in self.tests:
            Log.StartTest()
            try:
                test()
            except Exception, e:
                Log.Error(str(e))
            Log.EndTest()
# End
You can achieve a similar effect with a context manager using a with statement. Here I use the contextlib.contextmanager decorator:
import contextlib

@contextlib.contextmanager
def swallower():
    try:
        yield
    except ZeroDivisionError:
        print("We stopped zero division error")

def foo():
    print("This error will be trapped")
    with swallower():
        print("Here comes error")
        1/0
        print("This will never be reached")
    print("Merrily on our way")
    with swallower():
        print("This error will propagate")
        nonexistentName
        print("This won't be reached")
>>> foo()
This error will be trapped
Here comes error
We stopped zero division error
Merrily on our way
This error will propagate
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
foo()
File "<pyshell#3>", line 10, in foo
nonexistentName
NameError: global name 'nonexistentName' is not defined
It cannot be done with an ordinary function call as in your example. In your example, the function startScope returns before the rest of the body of myFunction executes, so startScope can't have any effect on it. To handle exceptions, you need some kind of explicit structure (either a with statement or a regular try/except) inside myFunction; there's no way to make a simple function call magically intercept exceptions that are raised in its caller.
You should read up on context managers as they seem to fit what you're trying to do. The __enter__ and __exit__ methods of the context manager would correspond to your startScope and endScope. Whether it will do exactly what you want depends on exactly what you want those "manager" functions to do, but you will probably have more luck doing it with a context manager than trying to do it with simple function calls.
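For illustration, here is a minimal class-based sketch of what such a manager could look like for the nesting use case above; the control object and the choice to swallow every exception are assumptions, not code from the question:
class Scope(object):
    """Sketch: pairs startScope/endScope and swallows errors raised inside the block."""
    def __init__(self, control):
        self.control = control
    def __enter__(self):
        self.control.incrementNestingDepth()    # plays the role of startScope()
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.control.decrementNestingDepth()    # plays the role of endScope()
        # Returning True suppresses the exception, so execution resumes after
        # the with-block instead of aborting the whole enclosing function.
        return exc_type is not None
With something like this, test_alpha() could wrap the advanced section in "with Scope(control):" and an exception raised inside it would drop execution back to the (***) point rather than ending the whole test.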

NameError: global name 'thing' is not defined

I don't think there is quite the need for the continued downvoting, I am just trying to learn here!
One.py
from two import *

ADooDah = Doodah()
something = Thing(ADooDah)
something.DoThis()
something.DoThat
something.DoAnother

if (something.has_done_stuff() == True)
    self.SomeFunction
Two.py
class Thing(var):
    def __init__(self, var)
        self.SomeVar = var
    def has_done_stuff(self):
        while True:
            id, newMessage = SomeVar.get_next_message()
            if id == 0:
                return true
            else:
                return false
I get...
Traceback (most recent call last):
File "C:\One.py", line 9, in <module>
has_done_stuff = thing.HasDoneStuff()
NameError: global name 'thing' is not defined
EDITS: The code was indeed peppered with errors. I was trying to show my situation rather than any real code. Rush typing causes foolish typing. Even I'm not that bad! Well, most of the time ;) .
I hope the edits make it all make more sense and that you fine people can stop focusing on the crazy syntax errors and explain a bit more about my scope problem (I assume it's a scope problem). I'm fairly new to Python/IronPython and am still learning the rules around implicit types and scoping!
I have solved my problem though. Thanks. It was fairly unrelated to the above as it turns out.
Something = Thing(ADooDah)
thing.DoThis()
Your thing is called Something.
Also, your class Thing has none of the methods you are calling/not calling (missing parens). This is pretty much nonsense code.
There are a few issues:
You claim Thing is defined in Two.py. If so, you need to import it thus:
from Two import Thing
or (not recommended):
from Two import *
Next, you need class, not Class.
Next, you need to define thing, which you haven't done. I will take a wild guess that you want thing to be a Thing object:
thing = Thing(ADooDah)
then there is the problem with the if inside HasDoneStuff that someone has referred to in a comment, and the fact that Thing is incomplete (also mentioned in another answer).
I give the following code. I don't know what it will be usable for, but it CAN run.
two.py
from time import time

class Thing():
    def __init__(self, var):
        self.SomeVar = enumerate(var)
    def HasDoneStuff(self):
        while True:
            id, newMessage = self.SomeVar.next()
            print newMessage
            print 'id==', id
            return id == 0
    def DoThis(self):
        print "DoThis' result"
    def DoThat(self):
        print 'DoingThat ;;;;;;;;;;;;;;;;;;;;;'
    def DoAnother(self):
        print 'DoAnother time', time()
    def SomeFunction(self):
        print 'Humpty Dumpty sat on a wall'
one.py
from two import *

def Doodah(ss):
    return ss.split()

ADooDah = Doodah('once upon a time')
Something = Thing(ADooDah)
Something.DoThis()
Something.DoThat()
Something.DoAnother()
print '\n==========================\n'

while True:
    try:
        if Something.HasDoneStuff():
            Something.SomeFunction()
        print '---------------'
    except StopIteration:
        print "That's all folks"
        break
