How to find all properties and methods of a specific object in Dart? - python

In Python, there is a built-in function called dir which lists all properties and methods of the specified object. Is there a function like that in Dart, whatever it may be called?
Also, in Python there is another function called type which returns the type of the argument you give it. Is there a function like that in Dart too?
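To show what I mean, here is the Python behaviour in question (the class is a made-up example):

```python
class Greeter:
    """A tiny example class."""
    name = "world"

    def greet(self):
        return "hello, " + self.name

g = Greeter()

# dir() lists the attributes and methods of an object
# (including inherited dunder members, filtered out here).
print([m for m in dir(g) if not m.startswith("_")])  # ['greet', 'name']

# type() returns the runtime type of its argument.
print(type(g))  # e.g. <class '__main__.Greeter'>
```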

Dart does not have this kind of feature at runtime unless you are running your code on the Dart VM, in which case you can use dart:mirrors: https://api.dart.dev/stable/2.13.4/dart-mirrors/dart-mirrors-library.html
One reason for this restriction is that Dart is a compiled language, so when compiling to native code or JavaScript, the compiler identifies what code is used and includes only what is needed for your program to execute. It also makes optimizations based on how the code is being used. Both kinds of optimization are rather difficult if every method must remain accessible.
The Dart VM can do this since it does have access to all the code when running the program.
There is some ongoing discussion about adding metaprogramming to Dart, which might be the solution for most cases where dart:mirrors is used today:
https://github.com/dart-lang/language/issues/1518
https://github.com/dart-lang/language/issues/1565
An alternative solution is to use some kind of builder which performs static analysis of your code and generates some classes based on it, which you then include in your project. E.g. a class which contains the name and type of all members of the classes you need that information from.
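As a rough illustration of that builder idea in Python terms (real Dart builders run over the analyzed source at build time; the class and function names below are invented):

```python
import inspect

class Schema:
    """Stand-in for a class you need member information about."""
    version = 1

    def get_entity(self):
        return {}

def generate_metadata(cls):
    # Emit source code for a companion class listing cls's members,
    # mimicking what a code-generating builder produces at build time.
    methods = [n for n, _ in inspect.getmembers(cls, inspect.isfunction)]
    fields = [n for n in vars(cls)
              if not n.startswith("_") and n not in methods]
    return ("class %sInfo:\n"
            "    methods = %r\n"
            "    fields = %r\n" % (cls.__name__, methods, fields))

print(generate_metadata(Schema))
# class SchemaInfo:
#     methods = ['get_entity']
#     fields = ['version']
```

The generated source is what ships with the program, so no runtime reflection is needed.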


Trigger wrapped C/C++ rebuild when instantiating a Julia "object" (that calls the wrapped C/C++ functions)

I found the following paradigm quite useful and I would love to be able to reproduce it somehow in Julia to take advantage of Julia's speed and C wrapping capabilities.
I normally maintain a set of objects in Python/Matlab that represent the blocks (algorithms) of a pipeline. Set up unit-tests etc.
Then I develop the equivalent C/C++ code by building equivalent Python/Matlab objects (same API) that wrap the C/C++ to implement the same functionality; these have to pass the same tests (by which I mean the exact same tests written in Python/Matlab, where either I generate synthetic data or I load recorded data).
I maintain the full-Python and Python/C++ objects in parallel, enforcing parity with extensive test suites. The Python-only and Python/C++ versions are fully interchangeable.
Every time I need to modify the behavior of the pipeline, or debug an issue, I first use the fully pythonic version of the specific object/block I need to modify, typically in conjunction with other blocks running in python/C++ mode for speed, then update the tests to match the behavior of the modified python block and finally update the C++ version until it reaches parity and passes the updated tests.
Every time I instantiate the Python/C++ version of the block, the constructor runs a "make" that rebuilds the C++ code if there was any modification, to make sure I always test the latest version of the C++.
Is there any elegant way to reproduce the same paradigm with the Julia/C++ combination? Maintaining julia/C++ versions in parallel via automatic testing.
I.e. how do I check/rebuild the C++ only once when I instantiate the object and not per function call (it would be way too slow).
I guess I could call the "make" once at the test-suite level before I run all the tests of the different blocks. But then I will have to manually call it if I'm writing a quick python script for a debugging session.
Let's take the example of a little filter object with a configure method that changes the filter parameters and a filter method that filters the incoming data.
We will have something like:
f1 = filter('python');
f2 = filter('C++'); % rebuild C++ as needed
f1.configure(0.5);
f2.configure(0.5);
x1 = data;
x2 = data;
xf1 = f1.filter(x1);
xf2 = f2.filter(x2);
assert( xf1 == xf2 )
In general there will be a bunch of tests that instantiate the objects in both python-only mode or python/C++ mode and test them.
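To make the pattern concrete in Python terms, the rebuild-once-per-process behaviour can be sketched with a class-level flag (everything here is made up for illustration; the real constructor shells out to make):

```python
class Filter:
    """Hypothetical dual-backend filter; the real 'make' call is
    commented out so the sketch stays self-contained."""
    _built = False  # class-level flag: rebuild the C++ at most once

    def __init__(self, backend):
        if backend == "C++" and not Filter._built:
            # import subprocess; subprocess.run(["make"], check=True)
            Filter._built = True
        self.backend = backend
        self.gain = 1.0

    def configure(self, gain):
        self.gain = gain

    def filter(self, xs):
        # Both backends must produce identical results; here the "C++"
        # path is just a stand-in for the wrapped native implementation.
        return [self.gain * x for x in xs]

f1 = Filter("python")
f2 = Filter("C++")   # triggers the (stubbed) rebuild exactly once
f1.configure(0.5)
f2.configure(0.5)
data = [1.0, 2.0]
assert f1.filter(data) == f2.filter(data) == [0.5, 1.0]
```

The same trick maps onto a Julia inner constructor: do the check there, once per instantiation rather than per call.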
I guess what I'm trying to say is that since in Julia the paradigm is to have a filter type, and then "external" methods that modify/use the filter type, there is no centralized way to check/rebuild all its methods that wrap C code, unless the type contains a list of variables that keeps track of the relevant methods. That seems awkward.
I would appreciate comments / ideas.
Is there a reason why you can't wrap your functions in a struct like this?
struct Filter
    FilterStuff::String
    Param::Int
    function Filter(stuff::String, param::Int)
        # Make the C++ code here
        # Return the created object here
        obj = new(stuff, param)
    end
end

monkey-patching/decorating/wrapping an entire python module

I would like, given a python module, to monkey patch all functions, classes and attributes it defines. Simply put, I would like to log every interaction a script I do not directly control has with a module I do not directly control. I'm looking for an elegant solution that will not require prior knowledge of either the module or the code using it.
I found several high-level tools that help with wrapping, decorating, patching etc., and I've gone over the code of some of them, but I cannot find an elegant way to create a proxy of any given module and proxy it automatically, as seamlessly as possible, other than appending logic to every interaction (recording input arguments and return values, for example).
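To make the goal concrete, the naive version I am trying to go beyond looks roughly like this (a hypothetical sketch: it only wraps top-level callables, and doesn't yet proxy classes or return values):

```python
import math
import types

def make_logging_proxy(module):
    # Wrap every public callable of `module` so calls are recorded;
    # other attributes are copied through unchanged.
    proxy = types.ModuleType(module.__name__ + "_proxy")
    calls = []

    def wrap(name, fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            calls.append((name, args, kwargs, result))
            return result
        return wrapper

    for name in dir(module):
        attr = getattr(module, name)
        if callable(attr) and not name.startswith("_"):
            setattr(proxy, name, wrap(name, attr))
        else:
            setattr(proxy, name, attr)
    proxy._calls = calls
    return proxy

m = make_logging_proxy(math)
m.sqrt(9.0)
print(m._calls)  # [('sqrt', (9.0,), {}, 3.0)]
```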
In case someone else is looking for a more complete proxy implementation:
Although there are several Python proxy solutions similar to what the OP is looking for, I could not find one that also proxies classes and arbitrary class objects, as well as automatically proxying function return values and arguments, which is what I needed.
I've got some code written for that purpose as part of a full proxying/logging Python execution tool, and I may make it into a separate library in the future. If anyone's interested, you can find the core of the code in a pull request. Drop me a line if you'd like this as a standalone library.
My code automatically returns wrapper/proxy objects for any proxied object's attributes, functions and classes. The purpose is to log and replay some of the code, so I've got the equivalent "replay" code and some logic to store all proxied objects to a JSON file.

How to document and test interfaces required of formal parameters in Python 2?

To ask my very specific question I find I need quite a long introduction to motivate and explain it -- I promise there's a proper question at the end!
While reading part of a large Python codebase, sometimes one comes across code where the interface required of an argument is not obvious from "nearby" code in the same module or package. As an example:
def make_factory(schema):
    entity = schema.get_entity()
    ...
There might be many "schemas" and "factories" that the code deals with, and "def get_entity()" might be quite common too (or perhaps the function doesn't call any methods on schema, but just passes it to another function). So a quick grep isn't always helpful to find out more about what "schema" is (and the same goes for the return type). Though "duck typing" is a nice feature of Python, sometimes the uncertainty in a reader's mind about the interface of arguments passed in as the "schema" gets in the way of quickly understanding the code (and the same goes for uncertainty about typical concrete classes that implement the interface). Looking at the automated tests can help, but explicit documentation can be better because it's quicker to read. Any such documentation is best when it can itself be tested so that it doesn't get out of date.
Doctests are one possible approach to solving this problem, but that's not what this question is about.
Python 3 has a "parameter annotations" feature (part of the function annotations feature, defined in PEP 3107). The uses to which that feature might be put aren't defined by the language, but it can be used for this purpose. That might look like this:
def make_factory(schema: "xml_schema"):
    ...
Here, "xml_schema" identifies a Python interface that the argument passed to this function should support. Elsewhere there would be code that defines that interface in terms of attributes, methods & their argument signatures, etc. and code that allows introspection to verify whether particular objects provide an interface (perhaps implemented using something like zope.interface / zope.schema). Note that this doesn't necessarily mean that the interface gets checked every time an argument is passed, nor that static analysis is done. Rather, the motivation of defining the interface is to provide ways to write automated tests that verify that this documentation isn't out of date (they might be fairly generic tests so that you don't have to write a new test for each function that uses the parameters, or you might turn on run-time interface checking but only when you run your unit tests). You can go further and annotate the interface of the return value, which I won't illustrate.
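To make this concrete, the kind of generic test I have in mind might look like the following hand-rolled sketch (the names are hypothetical and it is written in modern Python for brevity; zope.interface/zope.schema provide a far richer real implementation):

```python
import inspect

class XmlSchemaInterface:
    """Hypothetical declaration of the "xml_schema" interface."""
    required_methods = {"get_entity": 0}  # method name -> expected arg count

def provides(obj, interface):
    # Generic check a test suite can reuse across many parameters:
    # does obj supply every required method with a compatible signature?
    for name, nargs in interface.required_methods.items():
        fn = getattr(obj, name, None)
        if not callable(fn):
            return False
        if len(inspect.signature(fn).parameters) != nargs:
            return False
    return True

class GoodSchema:
    def get_entity(self):
        return {}

class BadSchema:
    pass

assert provides(GoodSchema(), XmlSchemaInterface)
assert not provides(BadSchema(), XmlSchemaInterface)
```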
So, the question:
I want to do exactly that, but using Python 2 instead of Python 3. Python 2 doesn't have the function annotations feature. What's the "closest thing" in Python 2? Clearly there is more than one way to do it, but I suspect there is one (relatively) obvious way to do it.
For extra points: name a library that implements the one obvious way.
Take a look at plac, which uses annotations to define a command-line interface for a script. On Python 2.x it uses the plac.annotations() decorator.
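For illustration, the decorator pattern plac follows can be sketched like this (this is not plac's actual implementation, just the general shape of emulating PEP 3107 annotations without Python 3 syntax):

```python
def annotations(**ann):
    # Store the annotations on the function object, the way PEP 3107
    # would in Python 3; works as a plain decorator in Python 2.
    def decorate(func):
        func.__annotations__ = ann
        return func
    return decorate

@annotations(schema="xml_schema")
def make_factory(schema):
    return ("factory-for", schema)

print(make_factory.__annotations__)  # {'schema': 'xml_schema'}
```

Tests or run-time checkers can then read func.__annotations__ exactly as they would under Python 3.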
The closest thing is, I believe, an annotation library called PyAnno.
From the project webpage:
"The Pyanno annotations have two functions:
- Provide a structured way to document Python code
- Perform limited run-time checking"

Module vs object-oriented programming in vba

My first "serious" language was Java, so I absorbed object-oriented programming in the sense that the elemental brick of a program is a class.
Now I write in VBA and Python. These are module languages, and I feel a persistent discomfort: I don't know how I should decompose a program into modules/classes.
I understand that one module corresponds to one knowledge domain, and that one module should be testable separately...
Should I think of a module only as a namespace (as in C++)?
I don't do VBA, but in Python, modules are fundamental. As you say, they can be viewed as namespaces, but they are also objects in their own right. They are not classes, however, so you cannot inherit from them (at least not directly).
I find that it's a good rule to keep a module concerned with one domain area. The rule that I use for deciding whether something is a module-level function or a class method is to ask myself if it could meaningfully be used on any objects that satisfy the "interface" that its arguments take. If so, then I free it from a class hierarchy and make it a module-level function. If its usefulness truly is restricted to a particular class hierarchy, then I make it a method.
If you need it to work on all instances of a class hierarchy and you make it a module-level function, just remember that all the subclasses still need to implement the given interface with the given semantics. This is one of the trade-offs of stepping away from methods: you can no longer make a slight modification and call super. On the other hand, if subclasses are likely to redefine the interface and its semantics, then maybe that particular class hierarchy isn't a very good abstraction and should be rethought.
It is a matter of taste. If you use modules, your program will be more procedurally oriented. If you choose classes, it will be more or less object-oriented. I have been working with Excel for a couple of months and personally I choose classes whenever I can, because it is more comfortable to me. If you stop thinking about objects and think of them as components, you can use them with elegance. The main reason why I prefer classes is that you can have more than one: you can't have two instances of a module. Classes also let me use encapsulation and get better code reuse.
For example, let's assume that you would like to have some kind of logger, to log the actions your program performs during execution. You could write a module for that. It could have, for example, a global variable indicating on which particular sheet logging will be done. But consider the following hypothetical situation: your client wants you to include some fancy report-generation functionality in your program. You are smart, so you figure out that you can reuse your logging code to prepare the reports. But you can't log and report simultaneously with one module, and you can with two instances of a logging component, without any changes to their code.
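Translated to Python, the point is the same: a class gives you two independent logger instances, which a single module (effectively a singleton) cannot. A minimal sketch with made-up names:

```python
class Logger:
    """Made-up logging component; `sheet` stands in for a worksheet name."""
    def __init__(self, sheet):
        self.sheet = sheet
        self.lines = []

    def log(self, msg):
        self.lines.append("[%s] %s" % (self.sheet, msg))

log = Logger("Log")
report = Logger("Report")  # a second instance: impossible with one module
log.log("started")
report.log("row 1 processed")
print(log.lines)     # ['[Log] started']
print(report.lines)  # ['[Report] row 1 processed']
```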
Idioms differ between languages, and that's the reason a problem solved in different languages takes different approaches.
"C" is all about procedural decomposition.
The main idiom in Java is class (or object) decomposition. Functions are not absent, but they become part of the exhibited behavior of these classes.
"Python" provides support for both class-based and procedural problem decomposition.
All of these use files, packages or modules as concepts for organizing large pieces of code together. There is nothing that restricts you to one module per knowledge domain.
These are decomposition and organizing techniques and can be applied based on the problem at hand.
If you are comfortable with OO, you should be able to use it very well in Python.
VBA also allows the use of classes. Unfortunately, those classes don't support all the features of a full-fledged object-oriented language; in particular, inheritance is not supported.
But you can work with interfaces, at least up to a certain degree.
I only used modules as "one module = one singleton". My modules contain "static" or even stateless methods. So in my opinion a VBA module is not a namespace; more often, a bunch of classes and modules together form a "namespace". I often create a new project (DLL, DVB or something similar) for such a "namespace".

Running unexported .dll functions with python

This may seem like a weird question, but I would like to know how I can run a function in a .dll from a memory "signature". I don't understand much about how it actually works, but I need it badly. It's a way of running unexported functions from within a .dll, if you know its memory signature and address.
For example, I have these:
respawn_f "_ZN9CCSPlayer12RoundRespawnEv"
respawn_sig "568BF18B06FF90B80400008B86E80D00"
respawn_mask "xxxxx?xxx??xxxx?"
And using some pretty nifty C++ code you can use this to run functions from within a .dll.
Here is a well explained article on it:
http://wiki.alliedmods.net/Signature_Scanning
So, is it possible using Ctypes or any other way to do this inside python?
If you can already run them using C++, then you can try using SWIG to generate Python wrappers for the C++ code you've written, making it callable from Python.
http://www.swig.org/
Some caveats that I've found using SWIG:
Swig looks up types based on a string value. For example, an integer type in Python (int) will be checked against the cpp type "int"; otherwise swig will complain about type mismatches. There is no automatic conversion.
Swig copies source code verbatim, therefore even objects in the same namespace will need to be fully qualified so that the cxx file will compile properly.
Hope that helps.
You said you were trying to call a function that was not exported; as far as I know, that's not possible from Python. However, your problem seems to be merely that the name is mangled.
You can invoke an arbitrary export using ctypes. Since the name is mangled, and isn't a valid Python identifier, you can use getattr().
Another approach, if you have the right information, is to find the export by ordinal, which you'd have to do if there was no name exported at all. One way to get the ordinal would be using dumpbin.exe, shipped with Microsoft's Windows development tools. It's actually a front-end to the linker, so if you have MS link.exe, you can also use that with the appropriate command-line switches.
To get the function reference (which is a "function-pointer" object bound to the address of it), you can use something like:
import ctypes
func = getattr(ctypes.windll.msvcrt, "##myfunc")
retval = func(None)
Naturally, you'd replace the 'msvcrt' with the dll you specifically want to call.
What I don't show here is how to unmangle the name to derive the calling signature, and thus the arguments necessary. Doing that would require a demangler, and those are very specific to the brand AND VERSION of C++ compiler used to create the DLL.
There is a certain amount of error checking if the function is stdcall, so you can sometimes fiddle with things until you get them right. But if the function is cdecl, then there's no way to check automatically. Likewise, you have to remember to include the extra this parameter if appropriate.
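As a concrete illustration, ctypes can also call through a raw address once you have one (e.g. from a signature scan, which you would still implement by reading the module's memory). The sketch below substitutes an exported libc function's address for a scanned one so it stays runnable:

```python
import ctypes
import ctypes.util

# Load the C runtime as a stand-in module (on Windows you would load
# the target DLL with ctypes.WinDLL instead).
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# A signature scan would hand you a raw address inside the module; here
# we fake one by taking the address of an exported function, abs().
addr = ctypes.cast(libc.abs, ctypes.c_void_p).value

# Wrap the raw address in a callable with the right prototype: int abs(int).
proto = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
func = proto(addr)
print(func(-5))  # 5
```

For a stdcall target on Windows you would use ctypes.WINFUNCTYPE instead of CFUNCTYPE.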
