I'm new to OOP and trying to use COM objects (ArcObjects) in Python. The program is GIS related, but I did not get any answers on GIS.SE, so I am asking here. Below is a piece of my code. I am stuck at the end, where I receive iFrameElement. ESRI describes it as a member/interface of an abstract class, which cannot create objects itself. I need to pass the information contained in it to an object in its CoClass (MapFrame).
Any suggestions on how to do this?
Also, where can I find naming conventions for objects in Python? There are p and i prefixes, and I am not sure where they come from.
from comtypes.client import CreateObject, GetModule
import arcpy
def CType(obj, interface):
    """Casts obj to interface and returns comtypes POINTER or None"""
    try:
        newobj = obj.QueryInterface(interface)
        return newobj
    except:
        return None

def NewObj(MyClass, MyInterface):
    """Creates a new comtypes POINTER object where
    MyClass is the class to be instantiated,
    MyInterface is the interface to be assigned"""
    from comtypes.client import CreateObject
    try:
        ptr = CreateObject(MyClass, interface=MyInterface)
        return ptr
    except:
        return None
esriCarto = GetModule(r"C:\Program Files (x86)\ArcGIS\Desktop10.0\com\esriCarto.olb")
esriCartoUI = GetModule(r"C:\Program Files (x86)\ArcGIS\Desktop10.0\com\esriCartoUI.olb")
esriMapUI = GetModule(r"C:\Program Files (x86)\ArcGIS\Desktop10.0\com\esriArcMapUI.olb")
esriFrame = GetModule(r"C:\Program Files (x86)\ArcGIS\Desktop10.0\com\esriFramework.olb")
arcpy.SetProduct('Arcinfo')
pApp = NewObj(esriFrame.AppROT, esriFrame.IAppROT).Item(0)
pDoc = pApp.Document
pMxDoc = CType(pDoc, esriMapUI.IMxDocument)
pLayout = pMxDoc.PageLayout
pGraphContLayout = CType(pLayout, esriCarto.IGraphicsContainer)
iFrameElement = pGraphContLayout.FindFrame(pMxDoc.ActiveView.FocusMap)
As far as I understand, iFrameElement is an interface of an abstract class, from which I need to get a pointer to the MapFrame object. How do I do that? How do I get to an object with the IMapGrids interface? Any suggestions?
IFrameElement is an interface, so you can't create an instance of it per se. This interface is implemented by various classes, including MapFrame, which means (in basic terms) that an instance of any of those objects 'behaves' like an IFrameElement. So if you get an IFrameElement from IGraphicsContainer.FindFrame(), you can pass it to something else that expects an IFrameElement without having to find out what the actual type of the object is.
I would suggest reading up on what Interfaces mean in OOP, because ESRI's code uses them a lot.
On naming conventions: there is no hard-and-fast rule on what to name your variables.
By the looks of your code, the p refers to an object with a distinct type, and i refers to an object defined only by an interface. But on that note, calling a variable by the same name as the interface it's referencing (except with a lower-case 'i') is a bad way to do things, and will lead to confusion. (IMO)
Edit:
To answer your final question (sorry, I missed it originally):
If pGraphContLayout.FindFrame() returns an object of type MapFrame (and there is no guarantee that it does) then you should be able to simply cast it across to IMapGrids:
pGraphContLayout = CType(pLayout, esriCarto.IGraphicsContainer)
pFrame = pGraphContLayout.FindFrame(pMxDoc.ActiveView.FocusMap)
pGrids = CType(pFrame, IMapGrids)
It sounds like you may be getting confused by Python's abstract base classes, which seem to serve the purpose of interfaces...? This thread is useful: Difference between abstract class and interface in Python
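For illustration, here is a minimal sketch of a Python abstract base class playing the role of an interface; the class and method names below are made up for the example and have nothing to do with ArcObjects:
from abc import ABC, abstractmethod

# Hypothetical "interface": cannot be instantiated directly.
class FrameElement(ABC):
    @abstractmethod
    def draw(self):
        ...

# Concrete class implementing the interface.
class MapFrameLike(FrameElement):
    def draw(self):
        print("drawing a map frame")

# Code that only cares about the interface, not the concrete type.
def render(element: FrameElement):
    element.draw()

render(MapFrameLike())  # prints: drawing a map frame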
It seems simple, but I can't figure out how to get a class constructor if given only a type.
For clarity I created a code snippet demonstrating the problem as a function stub.
Google is not turning up anything useful; maybe I can't find the right keywords to get past the SEO driving me to popular intro-to-Python sites.
from typing import NamedTuple

class Person(NamedTuple):
    name: str
    age: int

def magic_thing_maker(the_type, ctor_args):
    pass  # I am stuck with what to do here

some_type = type(Person)
the_person = magic_thing_maker(some_type, {'name': 'Saul Goodman', 'age': 37})
assert (the_person == Person('Saul Goodman', 37))
Feel free to call me unpythonic for asking this; I get that asking this question indicates I should seek healthier approaches to the problem at hand.
Any help much appreciated.
Since type(Person) is just type, you could use the class name as an alternative. The class can then be looked up in globals() and instantiated, using the ** operator to pass the dictionary to the constructor. Here is an example:
def magic_thing_maker(the_type, ctor_args):
    return globals()[the_type](**ctor_args)

some_type = Person.__name__
the_person = magic_thing_maker(some_type, {'name': 'Saul Goodman', 'age': 37})
See this question for further information.
For instantiating classes from imported modules, you can use getattr, see this question.
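For example, a minimal sketch combining both ideas, assuming a hypothetical imported module named models that defines Person (the helper name is taken from the question):
import models  # hypothetical module containing the Person class

def magic_thing_maker(type_name, ctor_args):
    cls = getattr(models, type_name)  # look the class up by name on the module
    return cls(**ctor_args)           # unpack the dict as keyword arguments

the_person = magic_thing_maker('Person', {'name': 'Saul Goodman', 'age': 37})
assert the_person == models.Person('Saul Goodman', 37)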
I'm new to Python and I'm trying to make a class for a module that checks texts for curse words.
Can someone please help?
import urllib

class Checktext:
    def __init__(self, text):
        self.text = text

    def gettext(self):
        file = open(self.text, "r")
        filetext = open.read()
        for word in filetext.split():
            openurl = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + word)
            output = openurl.read()
            truer = "true" in str(output)
            print(truer)

s = Checktext(r"C:\Users\Tzach\.atom\Test\Training\readme.txt")
Checktext.gettext()
You declared s as a new Checktext object, so you need to call s.gettext(), not the un-instantiated Checktext.gettext(), as the latter has no self to refer to.
urllib is a package. You have to import the request module that is located in the package:
import urllib.request
open(filename) returns a file object. You want to call the method of that object:
filetext = file.read()
And as G. Anderson wrote, you want to call s.gettext() instead of Checktext.gettext(). The self inside is actually equal to the s outside. If you want to be weird, you actually can also use:
Checktext.gettext(s)
Notice the s passed as the missing argument. Here Python actually reveals how object-oriented features are implemented internally. In the majority of OO languages this is carefully hidden, but calling a method of an object is always internally translated into passing one extra, special argument that points to the instance of the class, i.e. the object. When defining a Python method, that special argument is explicitly named self (by convention; you can name it differently as an exercise, but you should always keep the convention).
Thinking about it thoroughly, you can grasp the key idea behind the hidden magic of OO language syntax. The instance of the class (the object) is really just a portion of memory that stores the data part, and it is passed to the functions that implement the methods. Checktext.gettext is actually the function, and s is the object. Writing s.gettext() is only a different way to express exactly the same thing. Because s is an instance of the Checktext class, that fact is stored inside s, so s.gettext() creates the illusion that the right code will be called magically. It fits a trained brain better than the function-call approach if s is thought of as a tangible thing.
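Putting the three fixes together, a sketch of the corrected code from the question might look like this (the URL and the file path are taken verbatim from the question):
import urllib.request  # import the request module from the urllib package

class Checktext:
    def __init__(self, text):
        self.text = text

    def gettext(self):
        file = open(self.text, "r")
        filetext = file.read()  # read() is a method of the file object
        for word in filetext.split():
            openurl = urllib.request.urlopen("http://www.wdylike.appspot.com/?q=" + word)
            output = openurl.read()
            truer = "true" in str(output)
            print(truer)

s = Checktext(r"C:\Users\Tzach\.atom\Test\Training\readme.txt")
s.gettext()  # call the method on the instance, not on the class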
Question
Is there a "pythonic" (i.e. canonical, official, PEP8-approved, etc) way to re-use string literals in python internal (and external) APIs?
Background
For example, I'm working with some (inconsistent) JSON-handling code (thousands of lines) where there are various JSON "structs" we assemble, parse, etc. One recurring problem that comes up during code reviews is different JSON structs using the same internal parameter names, which causes confusion and eventually leads to bugs, e.g.:
pathPacket['src'] = "/tmp"
pathPacket['dst'] = "/home/user/out"
urlPacket['src'] = "localhost"
urlPacket['dst'] = "contoso"
These two (example) packets have dozens of identically named fields, but they represent very different types of data. There was no code-reuse justification for this implementation. People typically use code-completion engines to get the members of the JSON struct, and this eventually leads to hard-to-debug problems down the road, because mis-typed string literals cause functional issues without triggering an error earlier on. When we have to change these APIs, it takes a lot of time to hunt down the string literals to find out which JSON structs use which fields.
Question - Redux
Is there a better approach to this that is common amongst members of the Python community? If I were doing this in C++, the earlier example would be something like:
const char *JSON_PATH_SRC = "src";
const char *JSON_PATH_DST = "dst";
const char *JSON_URL_SRC = "src";
const char *JSON_URL_DST = "dst";
// Define/allocate JSON structs
pathPacket[JSON_PATH_SRC] = "/tmp";
pathPacket[JSON_PATH_DST] = "/home/user/out";
urlPacket[JSON_URL_SRC] = "localhost";
urlPacket[JSON_URL_SRC] = "contoso";
My initial approach would be to:
Use abc to make an abstract base class that can't be initialized as an object, and populate it with read-only constants.
Use that class as a common module throughout my project.
By using these constants, I can reduce the chance of a monkey-patching error as the symbols won't exist if mis-spelled, whereas a string literal typo can slip through code reviews.
My Proposed Solution (open to advice/criticism)
from abc import ABCMeta

class Custom_Structure:
    __metaclass__ = ABCMeta

    @property
    def JSON_PATH_SRC(self):
        return self._JSON_PATH_SRC

    @property
    def JSON_PATH_DST(self):
        return self._JSON_PATH_DST

    @property
    def JSON_URL_SRC(self):
        return self._JSON_URL_SRC

    @property
    def JSON_URL_DST(self):
        return self._JSON_URL_DST
The way this is normally done is:
JSON_PATH_SRC = "src"
JSON_PATH_DST = "dst"
JSON_URL_SRC = "src"
JSON_URL_DST = "dst"
pathPacket[JSON_PATH_SRC] = "/tmp"
pathPacket[JSON_PATH_DST] = "/home/user/out"
urlPacket[JSON_URL_SRC] = "localhost"
urlPacket[JSON_URL_SRC] = "contoso"
Upper-case to denote "constants" is the way it goes. You'll see this in the standard library, and it's even recommended in PEP8:
Constants are usually defined on a module level and written in all
capital letters with underscores separating words. Examples include
MAX_OVERFLOW and TOTAL.
Python doesn't have true constants, and it seems to have survived without them. If it makes you feel more comfortable wrapping this in a class that uses ABCMeta with properties, go ahead. In fact, I'm pretty sure abc.ABCMeta does not prevent object initialization unless abstract methods are declared. And if it did, your use of property would not work: property objects belong to the class, but are meant to be accessed from an instance. To me, it just looks like a lot of rigamarole for very little gain.
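To illustrate that last point, here is a small sketch (using Python 3 metaclass syntax rather than the Python 2 style __metaclass__ attribute from the question): a class whose metaclass is ABCMeta instantiates just fine as long as it declares no abstract methods.
from abc import ABCMeta, abstractmethod

class NoAbstracts(metaclass=ABCMeta):
    pass

NoAbstracts()  # works: ABCMeta alone does not block instantiation

class WithAbstract(metaclass=ABCMeta):
    @abstractmethod
    def value(self):
        ...

try:
    WithAbstract()  # TypeError: abstract methods are what block instantiation
except TypeError as err:
    print(err)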
The easiest way in my opinion to make constants is just to set them as variables in your module (and not modify them).
JSON_PATH_SRC = "src"
JSON_PATH_DST = "dst"
JSON_URL_SRC = "src"
JSON_URL_DST = "dst"
Then if you need to reference them from another module they're already namespaced for you.
>>> that_module.JSON_PATH_SRC
'src'
>>> that_module.JSON_PATH_DST
'dst'
>>> that_module.JSON_URL_SRC
'src'
>>> that_module.JSON_URL_DST
'dst'
The simplest way to create a bunch of constants is to place them into a module, and import them as necessary. For example, you could have a constants.py module with
JSON_PATH_SRC = "src"
JSON_PATH_DST = "dst"
JSON_URL_SRC = "src"
JSON_URL_DST = "dst"
Your code would then do something like
from constants import JSON_URL_SRC
...
urlPacket[JSON_URL_SRC] = "localhost"
If you would like a better defined grouping of the constants, you can either stick them into separate modules in a dedicated package, allowing you to access them like constants.json.url.DST for example, or you could use Enums. The Enum class allows you to group related sets of constants into a single namespace. You could write a module constants.py like this:
from enum import Enum

class JSONPath(Enum):
    SRC = 'src'
    DST = 'dst'

class JSONUrl(Enum):
    SRC = 'src'
    DST = 'dst'
OR
from enum import Enum

class JSON(Enum):
    PATH_SRC = 'src'
    PATH_DST = 'dst'
    URL_SRC = 'src'
    URL_DST = 'dst'
How exactly you separate your constants is up to you. You can have a single giant enum, one per category, or something in between. You would access them in your code like this:
from constants import JSONUrl
...
urlPacket[JSONUrl.SRC.value] = "localhost"
OR
from constants import JSON
...
urlPacket[JSON.URL_SRC.value] = "localhost"
Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
  genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is not a statically typed language, so it really doesn't matter if you have an instance of Randomizable or an instance of some other class that has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
type = ...  # get whatever type you want to randomize
randomizer = RandomizableT(type)()
print(randomizer.getRandom())
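Since TheLibrary and MyLibrary are only placeholders, here is a self-contained sketch of the same registry idea with two toy randomizer classes (the names are invented for the example):
import random
import string

types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]

class IntRandomizable:
    def getRandom(self):
        return random.randint(0, 100)

class StringRandomizable:
    def getRandom(self):
        return ''.join(random.choice(string.ascii_letters) for _ in range(8))

registerType(int, IntRandomizable)
registerType(str, StringRandomizable)

print(RandomizableT(int)().getRandom())  # e.g. 42
print(RandomizableT(str)().getRandom())  # e.g. 'kqZrbWme'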
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
I've found out that since Python 3.4 there is the @functools.singledispatch decorator. However, it works only for functions which receive a type instance (object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass

    def aFunc(self):
        print("In A")

class B:
    def __init__(self):
        pass

    def aFunc(self):
        print("In B")

f = A()
f.aFunc()
f = B()
f.aFunc()
output
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is from xml.etree.ElementTree import XMLParser
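A common concrete instance of the same idea, assuming the third-party simplejson package (which is drop-in compatible with the standard json module) may or may not be installed:
try:
    import simplejson as json  # third-party, same interface
except ImportError:
    import json                # standard-library fallback

print(json.dumps({"a": 1}))    # works identically with either module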
I'm trying to mimic methods.grep from Ruby which simply returns a list of available methods for any object (class or instance) called upon, filtered by regexp pattern passed to grep.
Very handy for investigating objects in an interactive prompt.
def methods_grep(self, pattern):
    """ returns list of object's method by a regexp pattern """
    from re import search
    return [meth_name for meth_name in dir(self)
            if search(pattern, meth_name)]
Because of a Python limitation that is not quite clear to me, it unfortunately can't simply be inserted into the object class ancestor:
object.mgrep = classmethod(methods_grep)
# TypeError: can't set attributes of built-in/extension type 'object'
Is there some workaround to inject it into all classes, or do I have to stick with a global function like dir()?
There is a module called forbiddenfruit that enables you to patch built-in objects. It also allows you to reverse the changes. You can find it here https://pypi.python.org/pypi/forbiddenfruit/0.1.1
from forbiddenfruit import curse
curse(object, "methods_grep", classmethod(methods_grep))
Of course, using this in production code is likely a bad idea.
There is no workaround AFAIK. I find it quite annoying that you can't alter built-in classes. Personal opinion though.
One way would be to create a base object and force all your objects to inherit from it.
But to be honest, I don't see the problem. You can simply call methods_grep(obj, pattern) on any object, right? You don't have to insert it anywhere.
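For example, calling it as a plain function at the interactive prompt (using the definition from the question):
>>> methods_grep([], 'app')
['append']
>>> methods_grep('', 'split')
['rsplit', 'split', 'splitlines']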