Import error relating to DLL functions - Python

I'm using Python 2.7.2.
I'm using a DLL to talk to external hardware, using the lines below:
main.py
comm_dll = ctypes.cdll.LoadLibrary("extcomm.dll")
ret_val = comm_dll.open(0)
ret_val is needed by all the other DLL functions, because several devices of the same type can be connected to the same PC.
comm_dll is needed in several modules that need access to the DLL's functions.
My question is: how do I make the other modules see the comm_dll and ret_val variables?
I tried importing them from main (from main import comm_dll, ret_val) and also declaring them global and then importing them.
No matter what I do, the other modules fail on the import statement.
I know I can pass these variables to every function that uses them, but that seems like a lot of overhead.
What is the pythonic way to do such an import?
Note: ret_val's type is ctypes.c_int
CODE
main.py
import ctypes
from drv_spi import *

def main():
    comm_dll = ctypes.cdll.LoadLibrary("extcomm.dll")
    comm_dll.open.argtypes = [ctypes.c_int]
    comm_dll.open.restype = ctypes.c_int
    comm_handle = comm_dll.open(0)
    drv_spi_init()

main()
drv_spi.py
import ctypes

def drv_spi_init():
    comm_dll.spi_config.argtypes = [ctypes.c_int, ctypes.c_int]
    comm_dll.spi_config.restype = ctypes.c_int
    ret_val = comm_dll.spi_config(comm_handle, 0x45)
I get an error: NameError: global name 'comm_dll' is not defined.
Using from main import comm_dll doesn't work either, because it causes main to run again...

It sounds like you should probably implement your hardware device as a class. Something like:
import ctypes

class MyHardwareDevice(object):
    comm_dll = ctypes.cdll.LoadLibrary("extcomm.dll")

    def __init__(self):
        pass  # or whatever initialization you need

    def connect(self, port_number):
        self.ret_val = self.comm_dll.open(port_number)
Then you can use the class like this:
device = MyHardwareDevice()
device.connect(0)
# your ret_val is now available as device.ret_val
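If you would rather not wrap the DLL in a class, another common approach is to give the DLL and its handle a module of their own and import that module everywhere; module-level globals are created once and shared by every importer. A minimal sketch along those lines (the comm module name and its init function are illustrative, not part of the original code):
# comm.py -- owns the DLL and the connection handle
import ctypes

comm_dll = ctypes.cdll.LoadLibrary("extcomm.dll")
comm_dll.open.argtypes = [ctypes.c_int]
comm_dll.open.restype = ctypes.c_int
comm_handle = None  # set once by init()

def init(port_number=0):
    global comm_handle
    comm_handle = comm_dll.open(port_number)
drv_spi.py would then do import comm and refer to comm.comm_dll and comm.comm_handle; because the attribute lookup happens at call time, it sees the handle set by comm.init() regardless of import order.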

Related

Python: what's the right way to initialize a variable (enum) imported before its initialization

I'm writing some client code that uses a backend service to get some of the types that it uses.
Some of the types will be represented using an Enum in the code. However, these enums are being imported into the file before they have been initialized.
Here's an example similar to my code:
types.py
from enum import Enum

EnumA = None
EnumB = None

def update_types():
    global EnumA
    global EnumB
    backend_types = get_backend_types(...)
    EnumA = Enum("A", backend_types["A"])
    EnumB = Enum("B", backend_types["B"])
client.py
from types import EnumA, EnumB, update_types

class Client:
    def setup(...):
        update_types()

    def some_method(self):
        # uses the enums here
        a = EnumA(1)
Client.setup() is called early on, but not early enough.
The problem is that when I get to execute some_method, EnumA is still None rather than initialized from the backend.
Adding a local import in some_method would solve that but I'd like to find a more generic solution.
You have two options:
import the module instead of the pieces:
import types
...
a = types.EnumA(1)
move the call to get_backend_types() and Enum creation to module level instead of hiding them in a function:
from enum import Enum
...
backend_types = get_backend_types(...)
EnumA = Enum("A", backend_types["A"])
EnumB = Enum("B", backend_types["B"])
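For completeness, here is a sketch of the first option applied to client.py. One caveat worth noting: a project module named types shadows the standard-library types module for code importing from that directory, so a more distinctive name (say, backend_types.py) may be safer:
import types  # the project's types.py from the question, not the stdlib module

class Client:
    def setup(self):
        types.update_types()

    def some_method(self):
        # the attribute is looked up at call time, so it sees the
        # rebound module-level EnumA instead of a stale None
        a = types.EnumA(1)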

Best practices for importing rarely used package in Python

My Python package depends on an external library for a few of its functions. This is a non-Python package and can be difficult to install, so I'd like users to still be able to use my package, but have it fail when using any functions that depend on the non-Python package.
What is the standard practice for this? I could import the non-Python package only inside the methods that use it, but I really hate doing that.
My current setup:
myInterface.py
myPackage/
--classA.py
--classB.py
The interface script myInterface.py imports classA and classB, and classB imports the non-Python package. If the import fails, I print a warning. If myMethod is called and the package isn't installed, there will be some error downstream, but I don't catch it anywhere, nor do I warn the user.
classB is imported every time the interface script is called, so I can't have anything fail there, which is why I included the pass. As I said above, I could import inside the method and have it fail there, but I really like keeping all of my imports in one place.
From classB.py:
try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    pass

class ClassB():
    ...
    def myMethod(self):
        swp = someWeirdPackage()
        ...
If you are only importing one external library, I would go for something along these lines:
try:
    import weirdModule
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    if not available:
        raise ImportError('weirdModule not available')
    ...
The conditional and error checking are only needed if you want to give more descriptive errors. If not, you can omit them and let Python throw the corresponding error when a function tries to use the module that was never imported, as in your current setup.
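For reference, without the flag the failure is a plain NameError at call time; a minimal illustration (weirdModule.something is a made-up attribute):
try:
    import weirdModule
except ImportError:
    pass

def func_requiring_weirdmodule():
    # raises NameError: name 'weirdModule' is not defined if the import failed
    return weirdModule.something()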
If multiple functions use weirdModule, you can wrap the check in a function:
def require_weird_module():
    if not available:
        raise ImportError('weirdModule not available')

def f1():
    require_weird_module()
    ...

def f2():
    require_weird_module()
    ...
On the other hand, if you have multiple libraries to be imported by different functions, you can load them dynamically. Although it doesn't look pretty, Python caches them and there is nothing wrong with it. I would use importlib:
import importlib

def func_requiring_weirdmodule():
    weirdModule = importlib.import_module('weirdModule')
Again, if several of your functions import complicated external modules, you can wrap the loading in a helper:
def import_external(name):
    return importlib.import_module(name)

def f1():
    weird1 = import_external('weirdModule1')

def f2():
    weird2 = import_external('weirdModule2')
And last, you could create a handler to prevent importing the same module twice, something along the lines of:
class Importer(object):
    __loaded__ = {}

    @staticmethod
    def import_external(name):
        if name in Importer.__loaded__:
            return Importer.__loaded__[name]
        mod = importlib.import_module(name)
        Importer.__loaded__[name] = mod
        return mod

def f1():
    weird = Importer.import_external('weird1')

def f2():
    weird = Importer.import_external('weird1')
Although I'm pretty sure importlib does its own caching behind the scenes, so you don't really need the manual caching.
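That is indeed the case: imports are cached in sys.modules, and importlib.import_module returns the cached object on repeated calls. A quick check with a standard-library module:
import importlib
import sys

m1 = importlib.import_module('json')
m2 = importlib.import_module('json')
assert m1 is m2                    # same object; the module body ran only once
assert sys.modules['json'] is m1   # repeated imports are served from this cache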
In short, although it does look ugly, there is nothing wrong with importing modules dynamically in Python; in fact, a lot of libraries rely on this. On the other hand, if it is just for a special case of three methods accessing one external function, use your approach, or my first one if you want to add custom exception handling.
I'm not really sure there's any best practice in this situation, but I would redefine the function if it's not supported:
def warn_import():
    print("Cannot import someWeirdPackage")

try:
    import someWeirdPackage
    external_func = someWeirdPackage
except ImportError:
    external_func = warn_import

class ClassB():
    def myMethod(self):
        swp = external_func()

b = ClassB()
b.myMethod()
You can create two separate classes for the two cases. The first will be used when the package exists; the second when it does not.
class ClassB1():
    def myMethod(self):
        print("someWeirdPackage exists")
        # do something

class ClassB2(ClassB1):
    def myMethod(self):
        print("someWeirdPackage does not exist")
        # do something or raise an exception

try:
    import someWeirdPackage

    class ClassB(ClassB1):
        pass
except ImportError:
    class ClassB(ClassB2):
        pass
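Usage is then identical whichever branch defined ClassB:
b = ClassB()
b.myMethod()  # prints which variant is in effect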
You can also use the approach given below to overcome the problem you're facing.
class UnAvailableName(object):
    def __init__(self, name):
        self.target = name

    def __getattr__(self, attr):
        raise ImportError("{} is not available.".format(attr))

try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    someWeirdPackage = UnAvailableName("someWeirdPackage")

class ClassB():
    def myMethod(self):
        swp = someWeirdPackage.hello()

a = ClassB()
a.myMethod()

What's the closest I can get to calling a Python function using a different Python version?

Say I have two files:
# spam.py
import library_Python3_only as l3

def spam(x, y):
    return l3.bar(x).baz(y)
and
# beans.py
import library_Python2_only as l2
...
Now suppose I wish to call spam from within beans. It's not directly possible, since the two files depend on incompatible Python versions. Of course I can Popen a different Python process, but how can I pass in the arguments and retrieve the results without too much stream-parsing pain?
Here is a complete example implementation using subprocess and pickle that I actually tested. Note that you need to use protocol version 2 explicitly for pickling on the Python 3 side (at least for the combo Python 3.5.2 and Python 2.7.3).
# py3bridge.py
import sys
import pickle
import importlib
import io
import traceback
import subprocess

class Py3Wrapper(object):
    def __init__(self, mod_name, func_name):
        self.mod_name = mod_name
        self.func_name = func_name

    def __call__(self, *args, **kwargs):
        p = subprocess.Popen(['python3', '-m', 'py3bridge',
                              self.mod_name, self.func_name],
                             stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE)
        stdout, _ = p.communicate(pickle.dumps((args, kwargs)))
        data = pickle.loads(stdout)
        if data['success']:
            return data['result']
        else:
            raise Exception(data['stacktrace'])

def main():
    try:
        target_module = sys.argv[1]
        target_function = sys.argv[2]
        args, kwargs = pickle.load(sys.stdin.buffer)
        mod = importlib.import_module(target_module)
        func = getattr(mod, target_function)
        result = func(*args, **kwargs)
        data = dict(success=True, result=result)
    except Exception:
        st = io.StringIO()
        traceback.print_exc(file=st)
        data = dict(success=False, stacktrace=st.getvalue())
    pickle.dump(data, sys.stdout.buffer, 2)

if __name__ == '__main__':
    main()
The Python 3 module (using the pathlib module for the showcase):
# spam.py
import pathlib

def listdir(p):
    return [str(c) for c in pathlib.Path(p).iterdir()]
The Python 2 module using spam.listdir:
# beans.py
import py3bridge

delegate = py3bridge.Py3Wrapper('spam', 'listdir')
py3result = delegate('.')
print py3result
Assuming the caller is Python 3.5+, you have access to a nicer subprocess module. Perhaps you could use subprocess.run and communicate via pickled Python objects sent through stdin and stdout, respectively. There would be some setup to do, but no parsing on your side and no mucking with strings.
Here's an example of the round trip via subprocess.Popen:
p = subprocess.Popen(python3_args, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = p.communicate(pickle.dumps(python3_args))
result = pickle.loads(stdout)
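A sketch of the same round trip with subprocess.run (Python 3.5+); worker.py stands in for a hypothetical script run by the other interpreter that unpickles (args, kwargs) from stdin and pickles its result to stdout:
import pickle
import subprocess

args, kwargs = (1, 2), {}
payload = pickle.dumps((args, kwargs), protocol=2)  # protocol 2 is readable by Python 2
proc = subprocess.run(['python2', 'worker.py'],     # worker.py is illustrative
                      input=payload,
                      stdout=subprocess.PIPE,
                      check=True)
result = pickle.loads(proc.stdout)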
You could create a simple script like this:
import sys
import my_wrapped_module
import json

params = sys.argv
script = params.pop(0)
function = params.pop(0)
print(json.dumps(getattr(my_wrapped_module, function)(*params)))
You'll be able to call it like this:
pythonx.x wrapper.py myfunction param1 param2
This is obviously a security hazard though, so be careful.
Also note that if your params are anything other than strings or integers you'll have some issues, so maybe think about transmitting the params as a JSON string and converting it using json.loads() in the wrapper.
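For example, the parameters could travel as one JSON argument; a sketch using the placeholders from above (my_wrapped_module, myfunction):
# caller side
import json
import subprocess

params = json.dumps([1, 2.5, {"key": "value"}])
out = subprocess.check_output(['pythonx.x', 'wrapper.py', 'myfunction', params])
result = json.loads(out)

# wrapper.py
import sys
import json
import my_wrapped_module

function = sys.argv[1]
params = json.loads(sys.argv[2])
print(json.dumps(getattr(my_wrapped_module, function)(*params)))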
It's possible to use the multiprocessing.managers module to achieve what you want. It does require a small amount of hacking though.
Given a module that has functions you want to expose, you need to create a Manager that can create proxies for those functions.
The manager process that serves proxies to the py3 functions:
from multiprocessing.managers import BaseManager
import spam

class SpamManager(BaseManager):
    pass

# Register a way of getting the spam module.
# You can use the exposed arg to control what is exposed.
# By default only "public" functions (without a leading underscore) are exposed,
# but it can only ever expose functions or methods.
SpamManager.register("get_spam", callable=(lambda: spam), exposed=["add", "sub"])

# Specifying the address as localhost means the manager is only visible to
# processes on this machine.
manager = SpamManager(address=('localhost', 50000), authkey=b'abc',
                      serializer='xmlrpclib')
server = manager.get_server()
server.serve_forever()
I've redefined spam to contain two functions called add and sub.
# spam.py
def add(x, y):
    return x + y

def sub(x, y):
    return x - y
The client process that uses the py3 functions exposed by the SpamManager:
from __future__ import print_function
from multiprocessing.managers import BaseManager

class SpamManager(BaseManager):
    pass

SpamManager.register("get_spam")

m = SpamManager(address=('localhost', 50000), authkey=b'abc',
                serializer='xmlrpclib')
m.connect()
spam = m.get_spam()

print("1 + 2 = ", spam.add(1, 2))  # prints 1 + 2 = 3
print("1 - 2 = ", spam.sub(1, 2))  # prints 1 - 2 = -1
spam.__name__  # AttributeError -- spam is a module, but its __name__ attribute
               # is not exposed
Once set up, this form gives an easy way of accessing functions and values. It also allows these functions and values to be used in much the same way as if they were not proxies. Finally, it allows you to set a password on the server process so that only authorised processes can access the manager. Because the manager is long running, a new process does not have to be started for each function call you make.
One limitation is that I've used the xmlrpclib module rather than pickle to send data back and forth between the server and the client. This is because Python 2 and Python 3 use different protocols for pickle. You could fix this by adding your own client to multiprocessing.managers.listener_client that uses an agreed-upon pickle protocol.

python module __init__ function

Is there any way to make an implicit initializer for modules (not packages)?
Something like:
# file: mymodule.py
def __init__(val):
    global value
    value = val
And when you import it:
# file: mainmodule.py
import mymodule(5)
The import statement uses the builtin __import__ function.
Therefore it's not possible to have a module __init__ function.
You'll have to call it yourself:
import mymodule
mymodule.__init__(5)
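If you would rather not reuse the __init__ name, an explicit init function with a module-level default works the same way (a sketch; the name init is only a convention, not special to Python):
# file: mymodule.py
value = None  # default until init() is called

def init(val):
    global value
    value = val

# file: mainmodule.py
import mymodule
mymodule.init(5)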
Questions like this often aren't closed as duplicates, so here's a really nice solution from Pass Variable On Import. TL;DR: use a config module, and configure it before importing your module.
[...] A cleaner way to do it, which is very useful for multiple configuration items in your project, is to create a separate configuration module that is imported by your wrapping code first, with the items set at runtime, before your functional module imports it. This pattern is often used in other projects.
myconfig/__init__.py:
PATH_TO_R_SOURCE = '/default/R/source/path'
OTHER_CONFIG_ITEM = 'DEFAULT'
PI = 3.14

mymodule/__init__.py:
import myconfig

PATH_TO_R_SOURCE = myconfig.PATH_TO_R_SOURCE
robjects.r.source(PATH_TO_R_SOURCE, chdir=True)  ## this takes time

class SomeClass:
    def __init__(self, aCurve):
        self._curve = aCurve

if myconfig.VERSION is not None:
    version = myconfig.VERSION
else:
    version = "UNDEFINED"

two_pi = myconfig.PI * 2
And you can change the behaviour of your module at runtime from the wrapper:
run.py:
import myconfig

myconfig.PATH_TO_R_SOURCE = 'actual/path/to/R/source'
myconfig.PI = 3.14159
# we can even add a new configuration item that isn't present in the original myconfig:
myconfig.VERSION = "1.0"

import mymodule

print "Mymodule.two_pi = %r" % mymodule.two_pi
print "Mymodule.version is %s" % mymodule.version
Output:
> Mymodule.two_pi = 6.28318
> Mymodule.version is 1.0

How do I call a plugin module that's loaded?

Maybe it's lack of sleep, but I feel silly that I can't get this. I have a plugin; I see it get loaded, but I can't instantiate it in my main file:
from transformers.FOMIBaseClass import find_plugins, register
find_plugins()
Here's my FOMIBaseClass:
from PluginBase import MountPoint
import sys
import os

class FOMIBaseClass(object):
    __metaclass__ = MountPoint

    def __init__(self):
        pass

    def init_plugins(self):
        pass

def find_plugins():
    plugin_dir = os.path.dirname(os.path.realpath(__file__))
    plugin_files = [x[:-3] for x in os.listdir(plugin_dir) if x.endswith("Transformer.py")]
    sys.path.insert(0, plugin_dir)
    for plugin in plugin_files:
        mod = __import__(plugin)
Here's my MountPoint:
class MountPoint(type):
    def __init__(cls, name, bases, attrs):
        if not hasattr(cls, 'plugins'):
            cls.plugins = []
        else:
            cls.plugins.append(cls)
I see it being loaded:
# /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc matches /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.py
import SctyDistTransformer # precompiled from /Users/carlos/Desktop/ws_working_folder/python/transformers/SctyDistTransformer.pyc
But, for the life of me, I can't instantiate the SctyDistTransformer class from the main file. I know I'm missing something trivial. Basically, I want to employ a class-loading plugin mechanism.
To dynamically load Python modules from arbitrary folders, use the imp module:
http://docs.python.org/library/imp.html
Specifically, the code should look like:
import imp

mod = imp.load_source("MyModule", "MyModule.py")
clz = getattr(mod, "MyClassName")
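Alternatively, with the MountPoint metaclass from the question, each subclass of FOMIBaseClass registers itself in FOMIBaseClass.plugins when its module is imported, so after find_plugins() you can instantiate plugins straight from that registry (a sketch, assuming SctyDistTransformer subclasses FOMIBaseClass):
from transformers.FOMIBaseClass import FOMIBaseClass, find_plugins

find_plugins()
for plugin_cls in FOMIBaseClass.plugins:  # populated by the MountPoint metaclass
    instance = plugin_cls()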
Also, if you are building a serious plug-in architecture, I recommend using Python eggs and entry points:
http://wiki.pylonshq.com/display/pylonscookbook/Using+Entry+Points+to+Write+Plugins
https://github.com/miohtama/vvv/blob/master/vvv/main.py#L104