I created a LazyLoader class that downloads files from S3 to the local file system, but only if they have not been downloaded before. The class deletes the files it downloaded once it gets destroyed:
def __del__(self):
    """Remove downloaded files when this object is deleted."""
    for downloaded_file in self.loaded_data.values():
        os.remove(downloaded_file)
The tests pass, but after pytest reports that they passed, I get:
Exception ignored in: <bound method LazyLoader.__del__ of LazyLoader({})>
Traceback (most recent call last):
File "my_lazy_loader.py", line 47, in __del__
TypeError: 'NoneType' object is not callable
Line 47 is os.remove(downloaded_file), so os.remove apparently evaluates to None at that point. Why? How can I fix that?
If I move the code in __del__ to a method clean(), I don't have that problem.
https://docs.python.org/3/reference/datamodel.html#object.__del__
__del__() can be executed during interpreter shutdown. As a consequence, the global variables it needs to access (including other modules) may already have been deleted or set to None. Python guarantees that globals whose name begins with a single underscore are deleted from their module before other globals are deleted; if no other references to such globals exist, this may help in assuring that imported modules are still available at the time when the __del__() method is called.
If it's just unit tests, use the tearDown method.
If the problem occurs when running your application and you want to do a cleanup at the end of the program, consider using atexit.register.
You might also use the tempfile module for temporary files that are automatically removed when closed.
If your use case allows it, turn LazyLoader into a context manager (see the sketch below).
If none of the above applies, simply call clean() explicitly. That follows the "explicit is better than implicit" rule.
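For the context-manager option, here is a minimal sketch. It assumes loaded_data maps keys to local file paths and omits the actual S3 download logic:

import os

class LazyLoader:
    def __init__(self):
        self.loaded_data = {}

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.clean()

    def clean(self):
        """Remove downloaded files; safe to call more than once."""
        for downloaded_file in self.loaded_data.values():
            os.remove(downloaded_file)
        self.loaded_data.clear()

Used as with LazyLoader() as loader: ..., the files are removed when the block exits, even on exceptions, and no cleanup is left to __del__.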
I used pickle to serialize a Python object locally, and after deserialization on the remote host an error was reported: ModuleNotFoundError: No module named 'plugins'.
I want the remote host to not see my source file while executing my function
This is my file structure:
plugins/
    one/
        test1.py
main.py
The file located in the plugins directory contains:
class Plugin:
    def start(self):
        pass

o = Plugin()

import pickle
data = pickle.dumps(o)  # serialize the object; these bytes are what the remote host downloads
This is the code that is executed remotely:
import pickle
import requests

result = requests.get('http://127.0.0.1:8000/').content
o = pickle.loads(result)
o.start()
The remote host throws an exception:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-16-bc409313ddfa> in <module>
----> 1 o=pickle.loads(result)
ModuleNotFoundError: No module named 'plugins'
It might help if you would explain in more detail what you want to accomplish here. It seems to me as if you want to implement a protocol where arbitrary code can be executed on a remote host.
I think it is quite possible that the pickle module is not what you are looking for.
Quote from the documentation:
Note that functions (built-in and user-defined) are pickled by “fully qualified” name reference, not by value. This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function’s code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised.
So in order to make your example work as it is, the remote host must get the source of the 'plugins' module or at least the source of the 'Plugin' class.
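You can see what actually gets pickled. A quick check, assuming plugins.one.test1 is importable locally (as your layout suggests):

import pickle
import pickletools

from plugins.one.test1 import Plugin

data = pickle.dumps(Plugin())
pickletools.dis(data)  # disassemble the pickle opcodes

The opcode dump contains only a GLOBAL/STACK_GLOBAL reference to 'plugins.one.test1' and 'Plugin', not the class body, which is why the remote side needs the module to be importable.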
Update:
As you explained in your comment, your aim is to execute code remotely. This cannot be done with the pickle module.
You may want to have a look at RPyC, a library for remote procedure calls and distributed-computing:
https://en.wikipedia.org/wiki/RPyC
https://rpyc.readthedocs.io/en/latest/
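As a rough illustration of remote calls with RPyC's classic mode, assuming the remote host runs the rpyc_classic.py server shipped with RPyC on its default port and "remote-host" is a placeholder hostname:

import rpyc

conn = rpyc.classic.connect("remote-host")
remote_math = conn.modules.math   # proxy for the math module on the remote interpreter
print(remote_math.sqrt(16.0))     # 4.0, computed remotely
conn.close()

Note that classic mode gives the client full control over the remote interpreter, so only use it between trusted machines.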
I think dill could solve your problem. It extends pickle.
import dill
dill.dump(your_class_obj, local_file)  # dump to a file object opened in binary write mode
obj = dill.load(local_file)            # load it in the other place
I'm writing a module manager for a program that I am writing now, and I wanted to store the module names in a dictionary then reference them and call them from globals().
module_number = 5
module_names = ["", "quiz", "scores", "gender", "help", "exit"]
I would then expect that I can call
globals()[module_names[module_number]]()
which would call exit() and close the script. Instead, I get the following error:
Traceback (most recent call last):
  File "a2.py", line 103, in <module>
    start()
  File "a2.py", line 44, in start
    menu()
  File "a2.py", line 36, in menu
    call_module(choice)
  File "a2.py", line 50, in call_module
    globals()[converter[int(module_number)]]()
KeyError: 'exit'
exit isn't in globals() because it isn't a global, it's a builtin.
In Python, the "global" namespace is per-module, not system-wide. There's a special "builtins" module that holds the things that are truly system-wide, like the normal builtin functions and a few special things like exit.
You can access this module with import builtins.
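You can check this quickly in a normal (site-initialized) interpreter:

import builtins

print('exit' in globals())      # False -- not a module-level global
print('exit' in dir(builtins))  # True -- injected into builtins by the site module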
The way the interpreter accesses this module is a little funky. Global lookup works roughly like this:
def get_global(global_namespace, name):
    try:
        return global_namespace[name]
    except KeyError:
        pass
    try:
        builtins = global_namespace['__builtins__']
    except KeyError:
        raise NameError(name)
    if isinstance(builtins, types.ModuleType):
        builtins = builtins.__dict__
    try:
        return builtins[name]
    except KeyError:
        raise NameError(name)
There's special code in places like exec, and the internal code for building function objects, that makes sure that if you override the normal globals dictionary, __builtins__ gets copied over (unless you explicitly tell it not to). When the import system builds a module object out of your module source (or compiled .pyc), it calls exec, so every module's globals ends up with the right __builtins__.
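You can see the effect of that copying with a small experiment:

g = {}
exec("y = len('abc')", g)    # a bare globals dict still gets __builtins__ injected
print('__builtins__' in g)   # True
print(g['y'])                # 3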
Most things in the builtins module are there because they're compiled into it (as you'd expect from the name); for CPython, you can see the source in Python/bltinmodule.c.
But notice that exit isn't there. In fact, it's injected into the builtins module by the site module, which is imported (unless you disable it) as part of the normal startup sequence. You can see the code that does this in Lib/site.py and Lib/_sitebuiltins.py. And the documentation for the exit constant says it's added this way.
So, when you type exit in your code, or at the interactive prompt, it's found in globals()['__builtins__']['exit'] or globals()['__builtins__'].__dict__['exit'].
But if you want to access it manually, you're better off doing an import builtins and accessing it as builtins.exit.
Although really, you rarely want to access builtins.exit anyway; if you want to exit programmatically, call sys.exit, which is a normal function. builtins.exit is a special Quitter object that's made specifically for interactive use. (It has a repr that gives a helpful message if you forget the parentheses, and some extra code to make it play nice with IDLE.)
In fact, the docs on the exit constant explicitly say that it's:
… useful for the interactive interpreter shell and should not be used in programs.
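For a module manager like yours, a rough sketch that maps names directly to callables instead of going through globals(); the quiz and scores handlers are placeholders:

import sys

def quiz():
    print("running quiz")    # placeholder handler

def scores():
    print("showing scores")  # placeholder handler

module_table = {
    "quiz": quiz,
    "scores": scores,
    "exit": lambda: sys.exit(0),  # programmatic exit, no builtins lookup needed
}

choice = "exit"
module_table[choice]()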
First of all: Feel free to tell me that this is an antipattern!
In my code, I have some functions responsible for calling external API's. This is a prime candidate for mocking in the tests to make sure that the external API is not hit when tests are run.
The thing is, the way mocking works in Python (at least the way I have been taught), we mock a position in the imported module structure explicitly, e.g.
import mymodule

def test_api(mocker):
    mocker.patch('mymodule.mysubmodule.json_apis.my_api_wrapper_function')
    [...]
This will mock out the my_api_wrapper_function function for the test. However, what if refactoring moves the function or renames it, etc.? If the test is not updated, it will most likely pass, AND the external API is hit, because the new location of the function has not been mocked.
I see two solutions to this problem, but I am not sure how to implement either of them:
Mock stuff in a better way, so that I am sure not to have problems when refactoring
Create a decorator, which will wrap a function and raise an exception if the function is called in a test context (I suppose this depends on the test runner that is used? In my case, it is pytest)
First of all, the sentence
If the test is not updated, it will most likely pass, AND the external API is hit, because the new location of the function has not been mocked.
is wrong.
If you try to mock something that does not exist and you don't use the create=True argument, the patch fails!
>>> from mock import patch
>>> patch("doesnt.exist").start()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1396, in start
result = self.__enter__()
File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1252, in __enter__
self.target = self.getter()
File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1414, in <lambda>
getter = lambda: _importer(target)
File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1098, in _importer
thing = __import__(import_path)
ImportError: No module named doesnt
Moreover, if you use a good refactoring tool like the one integrated into PyCharm, it will fix the string path too when you move something.
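The same check on Python 3's unittest.mock, using a real module but a made-up attribute name:

from unittest.mock import patch

try:
    patch("json.no_such_function").start()  # json exists, the attribute does not
except AttributeError as exc:
    print(exc)  # <module 'json' ...> does not have the attribute 'no_such_function'

So a patch target that no longer exists after a refactor blows up loudly instead of silently letting the real API be called.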
I just started with Python a couple of days ago, coming from a C++ background. When I write a class, use it from a script, and afterwards update the interface of the class, I get some behaviour I find very unintuitive.
Once successfully compiled, the class seems to be unchangeable. Here is an example:
testModule.py:
class testClass:
    def __init__(self, _A):
        self.First = _A

    def Method(self, X, Y):
        print X
testScript.py:
import testModule

tm = testModule.testClass(10)
tm.Method(3, 4)
Execution gives me
3
Now I change the argument list of Method:
def Method(self, X):
I delete testModule.pyc, and in my script I call
tm.Method(3)
As a result, I get
TypeError: Method() takes exactly 3 arguments (2 given)
What am I doing wrong? Why does the script not use the updated version of the class? I use the Canopy editor but I saw this behaviour also with the python.exe interpreter.
And apologies, if something similar was asked before. I did not find a question related to this one.
Python loads the code objects into memory; the class statement is executed when a file is first imported, and a class object is created and stored in the module namespace. Subsequent imports re-use the already created objects.
The .pyc file is only used the next time the module is imported for the first time in that Python session. Replacing the file will not result in a module reload.
You can use the reload() function to force Python to replace an already-loaded module with fresh code from disk. Note that any and all other direct references to a class are not replaced; an instance of the testClass class (tm in your case) would still reference the old class object.
When developing code, it is often just easier to restart the Python interpreter and start afresh. That way you don't have to worry about hunting down all direct references and replacing those, for example.
testModule is already loaded in your interpreter. Deleting the pyc file won't change anything. You will need to do reload(testModule), or even better restart the interpreter.
Deleting the .pyc file will not make a difference in your case. When you import a module for the first time in an interpreter session, it is loaded completely, and deleting or modifying its files afterwards won't change anything.
Better restart the interpreter or use the built-in reload function.
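On Python 3 the function lives in importlib. A minimal sketch, noting that existing instances keep pointing at the old class object:

import importlib
import testModule

tm_old = testModule.testClass(10)

importlib.reload(testModule)        # re-executes testModule.py in place

tm_new = testModule.testClass(10)   # picks up the new Method signature
tm_new.Method(3)
# tm_old is still an instance of the old class object and keeps the old Method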
I have been running win32com to access AutoCAD for quite some time without issue. I learned that applying the makepy utility could create a more user friendly experience at the interactive prompt, so I ran it and added the "AutoCAD 2006 Type Library". Now, some common attributes that I used to access are no longer available. Consider the code:
import win32com.client

acad = win32com.client.Dispatch("AutoCAD.Application")
doc = acad.Documents.Open('mydoc.dwg')
ms = doc.ModelSpace
count = ms.Count  # counts all entities in the dwg
for i in range(count):
    item = ms.Item(i)
    if 'block' in item.ObjectName.lower():
        print item.Name
Traceback (most recent call last):
  File "<interactive input>", line 1, in <module>
  File "C:\Python27\lib\site-packages\win32com\client\__init__.py", line 462, in __getattr__
    raise AttributeError("'%s' object has no attribute '%s'" % (repr(self), attr))
AttributeError: '<win32com.gen_py.AutoCAD 2006 Type Library.IAcadEntity instance at 0x34894552>' object has no attribute 'Name'
Name was always an accessible attribute of a block until I ran makepy. I've tried reinstalling Python for Windows to see if that would reset it, but it didn't work.
Does makepy create another file upon implementation that I need to remove?
So many of my scripts depend upon the Name attribute. Any suggestions or help you could offer would be greatly appreciated.
The main reason for this attribute error is that your COM server has shifted from late binding (dynamic) to early binding (static).
In late binding, whenever a method is called, the object is queried for the method, and if the query succeeds, the call is made.
In early binding, the object model is determined in advance from type information supplied by the object; this is what makepy generates. Early binding is also case sensitive.
There are two ways to fix this issue:
Use the dynamic module to force your code to work in a late-bound oriented way. Example use:
win32com.client.dynamic.Dispatch() instead of win32com.client.Dispatch()
Use correctly capitalized attribute names for the early-bound way. Example use:
excel.Visible() instead of excel.VISIBLE() or excel.visible()
So try using dynamic.Dispatch or case-sensitive variable names.
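A minimal sketch of the late-bound route, assuming the COM server is registered as "AutoCAD.Application":

import win32com.client.dynamic

# dynamic.Dispatch always uses late binding and ignores any makepy-generated wrappers
acad = win32com.client.dynamic.Dispatch("AutoCAD.Application")
doc = acad.Documents.Open('mydoc.dwg')
print(doc.ModelSpace.Count)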
I encountered a similar problem when running win32com to access DELMIA. I found that DELMIA exposes a lot of COM interfaces as .tlb files. When I type:
from win32com.client.gencache import EnsureDispatch
EnsureDispatch('DELMIA.Application')
Python automatically generates some stuff in the win32com\gen_py directory, just as in the Word or Excel examples. The difference is that the generated code comes from only one of DELMIA's .tlb files. If I access a variable from that .tlb, it's fine:
docs = delmia.Documents
If I access a variable from another .tlb, I get a similar error:
pdoc = docs.Add('Process')
Python says:
... object has no attribute ...
So I deleted all the files in the win32com\gen_py directory (but kept the gen_py directory itself), and it works now.
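If you need to find that cache directory programmatically, win32com records where it writes the generated wrappers (a hedged note; the attribute is set when win32com is imported):

import win32com

# directory where makepy/EnsureDispatch writes generated wrappers; clearing its
# contents makes the affected servers fall back to late binding on the next run
print(win32com.__gen_path__)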