Say I have a function in src/f1.py with the following signature:
def my_func(a1: int, a2: bool) -> float:
    ...
In a separate file src/f2.py I create a dictionary:
from src.f1 import my_func
my_dict = {
    "func": my_func
}
In my final file src/test1.py I call the function:
from src.f2 import my_dict
print(my_dict["func"](1, True))
I'm getting IDE auto-suggestions for my_func in src/f2.py, but not in src/test1.py. I have tried using typing.Callable, but it doesn't create the same signature and it loses the function documentation. Is there any way I can get these in src/test1.py without changing the structure of my code?
I don't want to change the files in which my functions, dictionaries, or tests are declared.
I use VSCode version 1.73.1 and Python version 3.8.13. I cannot change my Python version.
I tried creating different kinds of Callable annotations, but had problems getting the desired results:
They seem to have no docstring support.
Some of the parameters are optional (they have defaults), and I couldn't express that with Callable.
They carry only the parameter types, not the parameter names. I want the argument names to show up in the IDE suggestion.
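For illustration, this is roughly what I tried in src/f2.py (a sketch; the MyFunc alias name is made up):

from typing import Callable, Dict

from src.f1 import my_func

# conveys only the parameter types; names, defaults and the docstring are lost
MyFunc = Callable[[int, bool], float]
my_dict: Dict[str, MyFunc] = {"func": my_func}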
What am I really trying to do?
I am trying to implement a mechanism where a user can set the configurations for a library in a single file. That single file is where all the dictionaries are stored (and it imports all essential functions).
This "configuration dictionary" is called in the main python file (or wherever needed). I have functions in a set of files for accomplishing a specific set of tasks.
Say functions fa1 and fa2 for task A; fb1, fb2, and fb3 for task B; and fc1 for task C. I want configurations (choices) for task A, followed by B, then C. So I do
work_ABC = {"A": fa1, "B": fb2, "C": fc1}
Now, I have a function like
wd = work_ABC

def do_work_abc(i, do_B=True):
    res_a = wd["A"](i)
    res_b = res_a
    if do_B:
        res_b = wd["B"](res_a)
    res_c = wd["C"](res_b)
    return res_c
If you have a more efficient way to implement the same thing, I'd love to hear it.
I want IntelliSense to show me the signature of whichever function is set in the dictionary.
There is no type annotation construct in Python that covers docstrings or parameter names of functions. There isn't even one for positional-only or keyword-only parameters (which would actually be meaningful in a type sense).
As I already mentioned in my comment, docstrings and names are not type-related. Ultimately, this is an IDE issue. PyCharm for example has no problem inferring those details with the setup you provided. I get the auto-suggestion for my_dict["func"] with the parameter names because PyCharm is smart/heavy enough to track it to the source. But it has its limits. If I change the code in f2 to this:
from src.f1 import my_func

_other_dict = {
    "func": my_func
}
my_dict = {}
my_dict.update(_other_dict)
Then the suggestion engine is lost.
The reason is simply the discrepancy between runtime and static analysis. At some point it becomes silly/unreasonable to expect a static analysis tool to essentially run the code for you. This is why I always say:
Static type checkers don't execute your code, they just read it.
Even the fact that PyCharm "knows" the signature of my_func with your setup entails it running some non-trivial code in the background to back-track from the dictionary key to the dictionary to the actual function definition.
So in short: It appears you are out of luck with VSCode. And parameter names and docstrings are not part of the type system.
Related
I have a file functional.py which defines a number of useful functions. For each function, I want to create an alias that, when called, returns a reference to the corresponding function. Something like this:
foo/functional.py
def fun1(a):
    return a

def fun2(a):
    return a + 1
...
foo/__init__.py
from inspect import getmembers, isfunction
from . import functional

for (name, fun) in getmembers(functional, isfunction):
    dun = lambda f=fun: f  # the default argument binds the current fun
    globals()[name] = dun
>>> foo.fun1()(1)
1
>>> foo.fun2()(1)
2
I can get the functions from functional.py using inspect and dynamically define a new set of functions that are fit for my purpose.
But why? you might ask... I am using a configuration manager Hydra where one can instantiate objects by specifying the fully qualified name. I want to make use of the functions in functional.py in the config and have hydra pass a reference to the function when creating an object that uses the function (more details can be found in the Hydra documentation).
There are many functions and I don't want to write them all out ... People have pointed out in similar questions that modifying globals() for this purpose is bad practice. My use case is fairly constrained: documentation-wise there is a one-to-one mapping (but obviously an IDE won't be able to resolve it).
Basically, I am wondering if there is a better way to do it!
Is your question related to this feature request and in particular to this comment?
FYI: In Hydra 1.1, instantiate fully supports positional arguments so I think you should be able to call functools.partial directly without redefining it.
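If so, something along these lines might work without the globals() trick (an untested sketch; _args_ and get_method are taken from the Hydra 1.1 docs):

from hydra.utils import instantiate

# roughly equivalent to functools.partial(foo.functional.fun1)
cfg = {
    "_target_": "functools.partial",
    "_args_": [{"_target_": "hydra.utils.get_method",
                "path": "foo.functional.fun1"}],
}
fun1_ref = instantiate(cfg)
fun1_ref(1)  # -> 1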
I want to attach functional stubs to a data processing job I've written, and it would be convenient to be able to apply these via a config file.
I can load and run these by means of the eval function, but want to be able to control the available namespace "sandbox" in which the evaluated functions can operate, so I can avoid malicious code injection.
The Python docs suggest blocking off __builtins__ and then populating globals and/or locals (it's not clear which, or both) as dictionaries containing the objects in the execution namespace.
When I do this, the code I had been running successfully stops working.
I think this is because one of my test lambdas is referencing functions normally imported from the datetime module - but it's not clear to me how to get these to successfully attach to the namespace.
from datetime import datetime
now = datetime.now()
lambdas = {"Report Date": 'lambda x: now.strftime("%d/%m/%Y")',
           "Scary Test": "lambda x: os.curdir"}
compiled_funcs = {k: eval(v) for k, v in lambdas.items()}
compiled_funcs['Report Date'](1)
>>> '15/04/2019'
compiled_funcs['Scary Test'](1)
>>> '.'
Now, I want to adjust the eval() call to limit the available scope so that the datetime call continues to work but the os module fails (if I can call os commands, I could do something scary like wipe the disk, or worse).
I have tried constructions like:
compiled_funcs = {k: eval(v, {'__builtins__': None, "now": now, "datetime": datetime}, {}) for k, v in lambdas.items()}
But when I do this, I get the following error:
AttributeError: 'NoneType' object has no attribute '__import__'
This suggests that somewhere down the line, the function I want to apply is trying to call or import something, and (presumably) this is correctly being blocked because I've borked the __builtins__ content. Is there a way to pre-package such functions and inject them into the eval globals or locals dictionaries to enable a pre-defined set of functional tools?
Once I've got this working, I should be able to extend it so I can curate my own subset of safe function calls to be exposed to this run-time collection from configuration files.
N.B. I know that in the above I could define my lambdæ with no arguments, but in my real code it would be normal to feed a single parameter, so I have built my test code accordingly.
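For reference, here is a minimal sketch of the kind of whitelisting I'm aiming for, passing an empty dict rather than None for __builtins__ (and I know eval can never be made fully safe):

from datetime import datetime

now = datetime.now()
safe_globals = {"__builtins__": {}, "now": now, "datetime": datetime}

lambdas = {"Report Date": 'lambda x: now.strftime("%d/%m/%Y")',
           "Scary Test": "lambda x: os.curdir"}
compiled_funcs = {k: eval(v, safe_globals) for k, v in lambdas.items()}

compiled_funcs['Report Date'](1)  # works: now is whitelisted
compiled_funcs['Scary Test'](1)   # NameError: name 'os' is not defined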
I am new to programming. I defined a C++ function that returns two values using std::tuple and compiled it into a dynamic library. In my Python file I try to access the two returned values from the library, but I get a segmentation fault when I run the Python program. I did succeed with a single return value from C++, so I think there must be a special trick for accessing the two values returned as a tuple from Python.
The following is the C++ code returning the two values via a tuple:
std::tuple<double, double> Cassie2d::Step(ControllerTorque* action)
{
    dyn_model_.setState(mj_data_->qpos, mj_data_->qvel);
    dyn_state_.UpdateDynamicState(&dyn_model_);
    mju_copy(mj_data_->ctrl, action->torques, nU);
    mj_step(mj_model_, mj_data_);
    return std::make_tuple(mj_data_->qacc, mj_data_->time);
    Render();  // note: unreachable, since it follows the return
}
The following is the Python code I applied, since both returned values are of type double:
lib.StepTorque.argtypes = [ctypes.c_void_p, ctypes.POINTER(ControllerTorque)]
lib.StepTorque.restype = ctypes.c_double
I suspect restype should not simply be ctypes.c_double: that worked for a single return value, but it does not seem to work for two.
I'd really appreciate any help!
With return std::make_tuple(mj_data_->qacc, mj_data_->time) you're creating a structure which holds two variables. If the function were called from native C++ code, you would be able to unpack it using std::tie, e.g.:
double a, b;
std::tie(a, b) = Step(arg);
However, you need to call it directly from Python, which has a totally different syntax for returning multiple values (the C++ pair or tuple is a kind of workaround for that missing feature). I see two options in Python.
Give your C++ function two callbacks to Python and send the variables back separately this way:
void Cassie2d::Step(ControllerTorque* action,
                    void (*callback1)(double),  // receives mj_data_->qacc
                    void (*callback2)(double))  // receives mj_data_->time
{
    dyn_model_.setState(mj_data_->qpos, mj_data_->qvel);
    dyn_state_.UpdateDynamicState(&dyn_model_);
    mju_copy(mj_data_->ctrl, action->torques, nU);
    mj_step(mj_model_, mj_data_);
    callback1(mj_data_->qacc);
    callback2(mj_data_->time);
    Render();
}
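On the Python side, the callbacks could be created with ctypes.CFUNCTYPE, roughly like this (a sketch; lib, ControllerTorque, handle and action are the objects from your snippet):

import ctypes

DoubleCallback = ctypes.CFUNCTYPE(None, ctypes.c_double)  # void (*)(double)

results = {}
cb1 = DoubleCallback(lambda v: results.update(qacc=v))
cb2 = DoubleCallback(lambda v: results.update(time=v))

lib.StepTorque.argtypes = [ctypes.c_void_p, ctypes.POINTER(ControllerTorque),
                           DoubleCallback, DoubleCallback]
lib.StepTorque.restype = None
lib.StepTorque(handle, ctypes.byref(action), cb1, cb2)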
(Recommended) You can try to use the native Python way of serving multiple return values. Leave your C++ function in its initial form and call it in Python this way:
VarA, VarB = Step(action)
I assume you have a reference to the Cassie2d::Step(ControllerTorque* action) method in your Python code; you provided only a very small snippet.
I am creating Python bindings for a C library.
In C the code to use the functions would look like this:
Ihandle *foo;
foo = MethFunc();
SetAttribute(foo, 's');
I am trying to get this into Python, where I have MethFunc() and SetAttribute() functions that can be used in my Python code:
import mymodule
foo = mymodule.MethFunc()
mymodule.SetAttribute(foo)
So far my C code to return the function looks like this:
static PyObject * _MethFunc(PyObject *self, PyObject *args) {
    return Py_BuildValue("O", MethFunc());
}
But that fails by crashing (with no error message).
I have also tried return MethFunc(); but that failed.
How can I return the function foo (or if what I am trying to achieve is completely wrong, how should I go about passing MethFunc() to SetAttribute())?
The problem here is that MethFunc() returns an IHandle *, but you're telling Python to treat it as a PyObject *. Presumably those are completely unrelated types.
A PyObject * (or any struct you or Python defines that starts with an appropriate HEAD macro) begins with a reference count and a pointer to a type, and the first thing Python is going to do with any object you hand it is deal with those fields. So, if you give it an object that instead starts with, say, two ints, Python is going to end up trying to access a type at 0x00020001 or similar, which is almost certain to segfault.
If you need to pass around a pointer to some C object, you have to wrap it up in a Python object. There are three ways to do this, from hackiest to most solid.
First, you can just cast the IHandle * to a size_t, then PyLong_FromSize_t it.
This is dead simple to implement. But it means these objects are going to look exactly like numbers from the Python side, because that's all they are.
Obviously you can't attach a method to this number; instead, your API has to be a free function that takes a number, then casts that number back to an IHandle* and calls a method.
It's more like, e.g., C's stdio, where you have to keep passing stdin or f as an argument to fread, instead of Python's io, where you call methods on sys.stdin or f.
But even worse, because there's no type checking, static or dynamic, to protect you from some Python code accidentally passing you the number 42. Which you'll then cast to an IHandle * and try to dereference, leading to a segfault…
And if you were hoping Python's garbage collector would help you know when the object is still referenced, you're out of luck. You need to make your users manually keep track of the number and call some CloseHandle function when they're done with it.
Really, this isn't that much better than accessing your code from ctypes, so hopefully that inspires you to keep reading.
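For comparison, the ctypes equivalent of the same opaque-handle idea would be something like this (library and function names assumed):

import ctypes

lib = ctypes.CDLL("./mymodule.so")       # hypothetical library name
lib.MethFunc.restype = ctypes.c_void_p   # the handle is just an integer to Python
lib.SetAttribute.argtypes = [ctypes.c_void_p, ctypes.c_char]

foo = lib.MethFunc()         # from Python's point of view, just a number
lib.SetAttribute(foo, b"s")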
A better solution is to cast the IHandle * to a void *, then PyCapsule_New it.
If you haven't read about capsules, you need to at least skim the main chapter. But the basic idea is that it wraps up a void* as a Python object.
So, it's almost as simple as passing around numbers, but solves most of the problems. Capsules are opaque values which your Python users can't accidentally do arithmetic on; they can't send you 42 in place of a capsule; you can attach a function that gets called when the last reference to a capsule goes away; you can even give it a nice name to show up in the repr.
But you still can't attach any behavior to capsules.
So, your API will still have to be mymodule.SetAttribute(foo) instead of foo.SetAttribute() if foo is a capsule, just as if it were an int. (Except now it's type-safe.)
Finally, you can build a new Python extension type for a struct that contains an IHandle *.
This is a lot more work. And if you haven't read the tutorial on Defining Extension Types, you need to go thoroughly read through that whole chapter.
But it means that you have an actual Python type, with everything that goes with it.
You can give it a SetAttribute method, and Python code can just call that method. You can give it whatever __str__ and __repr__ you want. You can give it a __doc__. Python code can do isinstance(foo, MyMeth). And so on.
If you're willing to use C++, or D, or Rust instead of C, there are some great libraries (PyCxx, boost::python, Pyd, rust-python, etc.) that can do most of the boilerplate for you. You just declare that you want a Python class and how you want its attributes and methods bound to your C attributes and methods, and you get something you can use like a C++ class, except that it's actually a PyObject * under the covers. (And it'll even take care of all the refcounting cruft for you via RAII, which will save you endless weekends debugging segfaults and memory leaks…)
Or you can use Cython, which lets you write C extension modules in a language that's basically Python, but extended to interface with C code. So your wrapper class is just a class, but with a special private cdef attribute that holds the IHandle *, and your SetAttribute(self, s) can just call the C SetAttribute function with that private attribute.
Or, as suggested by user, you can also use SWIG to generate the C bindings for you. For simple cases, it's pretty trivial—just feed it your C API, and it gives you back the code to build your Python .so. For less simple cases, I personally find it a lot more painful than something like PyCxx, but it definitely has a lower learning curve if you don't already know C++.
I'm attempting to instantiate an object from a string. Specifically, I'm trying to change this:
from node.mapper import Mapper
mapper = Mapper(file)
mapper.map(src, dst)
into something like this:
with open('C:.../node/mapper.py', 'r') as f:
    mapping_script = f.read()
eval(mapping_script)
mapper = Mapper(file)
mapper.map(src, dst)
The motivation for this seemingly bizarre task is to be able to store different versions of mapping scripts in a database and then retrieve/use them as needed (with emphasis on the polymorphism of the map() method).
The above does not work. For some reason, eval() throws SyntaxError: invalid syntax. I don't understand this since it's the same file that's being imported in the first case. Is there some reason why eval() cannot be used to define classes?
I should note that I am aware of the security concerns around eval(). I would love to hear of alternative approaches if there are any. The only other thing I can think of is to fetch the script, physically save it into the node package directory, and then import it, but that seems even crazier.
You need to use exec:
exec(mapping_script)
eval() works only for expressions. exec() works for statements. A typical Python script contains statements.
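You can see the difference directly in the REPL:

>>> eval("1 + 1")  # an expression: fine
2
>>> eval("class Mapper: pass")  # a statement: rejected
Traceback (most recent call last):
  ...
SyntaxError: invalid syntax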
With exec(), the same kind of code works. For example:
code = """class Mapper: pass"""
exec(code)
mapper = Mapper()
print(mapper)
Output:
<__main__.Mapper object at 0x10ae326a0>
Make sure you call exec() at the module level (in Python 3 it is a function; in Python 2 it was a statement). When you call it inside a function, you need to pass globals(), for example exec(code, globals()), to make the objects available in the global scope and to the rest of the function, as discussed here.
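A minimal sketch of that pattern (the function and variable names are made up):

def load(code):
    exec(code, globals())       # make the exec'd names module-global
    return globals()["Mapper"]  # now visible to the rest of the function

mapper_cls = load("class Mapper: pass")
print(mapper_cls())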