I'm trying to find the source code for torch.mean but can't locate it in the PyTorch GitHub repository. It is listed under math operations in the docs, yet I can't find the implementation at all.
I've looked everywhere and inspected most files under pytorch/torch and am still unable to find it.
I even tried ?? in a Jupyter notebook, but it just returned the docstring rather than any source.
Since these operator components are written in C++, they are not inspectable from Python with tools like ??, __file__, or inspect.getsourcefile.
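For example, here is a quick check (a minimal sketch; the exact wording of the error varies by Python version) showing that torch.mean is a compiled builtin that inspect cannot trace back to a .py file:
import inspect
import torch

print(type(torch.mean))  # <class 'builtin_function_or_method'>
try:
    inspect.getsourcefile(torch.mean)
except TypeError as err:
    # inspect refuses builtins because there is no Python source file to point at
    print("No Python source:", err)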
The files appear to be here, written in C++:
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/mean_op.cc
https://github.com/pytorch/pytorch/blob/master/caffe2/operators/mean_op.h
Related
I am trying to use https://github.com/matterport/Mask_RCNN, but get the error
AttributeError: module 'keras.engine' has no attribute 'Layer'.
I'm not the only one who's had this problem - here are some suggestions for how to solve it.
Now, trying to better understand the problem, I have searched for documentation of the keras.engine module, but have found nothing. In the official Keras API reference, there seems to be no mentioning of anything called 'engine'. Why is this the case? How is one supposed to use a module for which there is no documentation?
TLDR
No one** is supposed to use the keras.engine module because it is not part of the public API.
Explanation
In most projects (e.g. Keras) the assumption is that you shouldn't rely on features that are not documented, because they can change at any time. I think that's exactly what happened here. As Dr. Snoopy pointed out in the comments, Matterport shouldn't have called keras.engine.Layer for precisely this reason.
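For comparison, the supported route to the same base class goes through the public, documented API. A minimal custom-layer sketch:
from tensorflow.keras.layers import Layer  # public import path, unlike keras.engine

class Identity(Layer):
    # trivial layer that just passes its input through
    def call(self, inputs):
        return inputs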
Finding docs and source
Keras is open source and keras.engine is full of docstrings, so if you really want to get to the source of something it's not that hard:
Find the site-packages folder with python -m site
Navigate to .../site-packages/tensorflow/python/keras/engine
Check out the source files. Pay particular attention to the class keywords and to class and method docstrings (these are delimited by """).
In my Tensorflow/Keras version 2.6.0, a file search shows that class Layer now lives in a different place than the two mentioned in the linked answer: .../site-packages/tensorflow/python/keras/engine/base_layer.py, illustrating the point that anything that's not part of the public API may, and does, change all the time.
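If you would rather not search the file tree by hand, inspect can report where the class lives in your particular install (a small sketch; the printed path will differ between versions):
import inspect
import tensorflow as tf

# prints the base_layer.py path for this specific Tensorflow/Keras install
print(inspect.getsourcefile(tf.keras.layers.Layer))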
** Obvious exception: the engineers who write code for the Keras library.
I inherited an application with opencv, shiboken and pyside, and my first task was to update it to qt6, pyside6 and opencv 4.5.5. This has gone well so far: I can import the module, make class instances, etc. However, I get a crash when passing numpy arrays:
I am passing images in the form of numpy arrays through python to opencv and I am using pyopencv_to to convert from the array to cv::Mat. This worked in a previous version of opencv (4.5.3), but with 4.5.5 it seems to be broken.
When I try to pass an array through pyopencv_to, I get the exception opencv_ARRAY_API was nullptr. My predecessor solved this by directly calling PyInit_cv2(), which was apparently previously included via a header. But I cannot find any header in the git under the tag 4.5.3 that defines this function. Is this a file that is generated? I can see there is a pycompat.hpp, but that does not include the function either.
Is there a canonical way to initialize everything so that numpy arrays can be passed properly? Or a tutorial anyone can point me to? My searches have so far not produced any useful hints.
Thanks a lot in advance! :)
I finally found a solution. I don't know if this is the correct way of doing it, but it works.
I made a header file that contains
PyMODINIT_FUNC PyInit_cv2();
as a forward declaration, and then copied over everything in the modules/python/src2 directory. I assumed this was already happening, because cv2.cpp already contains exactly that line.
But just adding that include works perfectly fine, apparently. Now I can call the init function when my own module is initialized, and it seems to properly set up all the needed state.
What is the underlying Python code of numpy.correlate?
I am trying to understand the logic of cross-correlation; the underlying Python code would be of great help.
All the code is somewhere on your system; you just need to find where.
If you're using ipython, the help command (numpy.correlate?) includes the filepath (on the second line from the end).
On my system it's "/usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py".
If you're not using ipython, numpy.__file__ will give you a path to the installation directory for the module, and you'll have to look around a bit.
The module name given by help(numpy.correlate) will give some hints.
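In code, that lookup might go something like this (a small sketch; the unwrap() call strips the dispatch wrapper that newer NumPy versions add, and the printed paths depend on your NumPy version and install location):
import inspect
import numpy

print(numpy.__file__)  # the package's installation directory

# follow any dispatch wrapper back to the plain Python function,
# then ask inspect which file defines it (e.g. .../numpy/core/numeric.py)
print(inspect.getsourcefile(inspect.unwrap(numpy.correlate)))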
However, once you find the file you will see that numpy.correlate only does the following:
mode = _mode_from_name(mode)
return multiarray.correlate2(a, v, mode)
That is a compiled function, so it's a little harder to find.
You can view the file here; the main function is defined beginning on line 1353, and the actual algorithm begins on line 1190.
This is fairly optimized code, so it's doing quite a bit more than what is necessary for simple correlation: handling datatypes, multi-threading, and error handling.
If you just want to understand the general principles rather than specifics of what python is doing, I would recommend starting with a more basic explanation. Numeric operations such as correlation are very well defined, and numpy rarely does anything different from the standard definitions.
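As a starting point, here is a deliberately naive pure-Python sketch of what 'valid'-mode correlation computes: a sliding dot product. It only illustrates the definition for real-valued inputs with len(a) >= len(v); it is not what NumPy actually executes.
import numpy

def naive_correlate_valid(a, v):
    # slide v across a and take the dot product at each offset
    n, m = len(a), len(v)
    return [sum(a[k + j] * v[j] for j in range(m)) for k in range(n - m + 1)]

print(naive_correlate_valid([1, 2, 3, 4], [0, 1]))          # [2, 3, 4]
print(numpy.correlate([1, 2, 3, 4], [0, 1], mode='valid'))  # [2 3 4]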
Recently, I have been working on a Python project with usual directory structure, and have received help from someone else who has given me a code snippet (a single function definition, about 30 lines long) which I would like to import into my code. What is the most proper directory/location in a Python project to store borrowed code of this size? Is it best to store the snippet into an entirely different module and import it from there?
I generally find it easiest to put such code in a separate file, because for clarity you don't want more than one different copyright/licensing term to apply within a single file. So in Python this does indeed mean a separate module. Then the file can contain whatever attribution and other legal boilerplate you need.
As long as your file headers don't accidentally claim copyright on something to which you do not own the copyright, I don't think it's actually a legal problem to mix externally-licensed or public domain code into files you mostly own. I may be wrong, though, which is why I normally avoid giving myself reason to think about it. A comment saying "this is external code from the following source with the following license:" may well be clearer than dividing code into different files that naturally wouldn't be. So I do occasionally do that.
I don't see any definite need for a separate directory (or package) per separate external source. If that's already part of your project structure (that is, it already uses external libraries by incorporating their source) then I suppose you might as well continue the trend.
I usually place scripts I copy off the internet in a folder/package called borrowed, so I know that all of the code in there is stuff I didn't write myself.
That is, if it's something more substantial than a one or two-liner demonstrating how something works.
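As a concrete (hypothetical) sketch of that layout, the borrowed function gets its own file with the attribution at the top, and the rest of the project just imports it; every name below is made up for illustration:
# borrowed/other_authors_snippet.py  (hypothetical package and file names)
# Borrowed from: <author / URL of the original snippet>
# License: <whatever terms the author provided>
def helpful_function(data):
    """The ~30-line function received from the other author."""
    return data

# elsewhere in your own modules:
# from borrowed.other_authors_snippet import helpful_function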
This may seem like a weird question, but I would like to know how I can run a function in a .dll from a memory 'signature'. I don't understand much about how it actually works, but I need it badly. It's a way of running unexported functions from within a .dll, if you know the memory signature and address of it.
For example, I have these:
respawn_f "_ZN9CCSPlayer12RoundRespawnEv"
respawn_sig "568BF18B06FF90B80400008B86E80D00"
respawn_mask "xxxxx?xxx??xxxx?"
And with some pretty nifty C++ code, you can use these to run functions from within a .dll.
Here is a well explained article on it:
http://wiki.alliedmods.net/Signature_Scanning
So, is it possible using Ctypes or any other way to do this inside python?
If you can already run them using C++, then you can try using SWIG to generate Python wrappers for the C++ code you've written, making it callable from Python.
http://www.swig.org/
Some caveats that I've found using SWIG:
Swig looks up types based on a string value. For example, an integer type in Python (int) will look to make sure that the cpp type is "int", otherwise swig will complain about type mismatches. There is no automatic conversion.
Swig copies source code verbatim, therefore even objects in the same namespace will need to be fully qualified so that the cxx file will compile properly.
Hope that helps.
You said you were trying to call a function that was not exported; as far as I know, that's not possible from Python. However, your problem seems to be merely that the name is mangled.
You can invoke an arbitrary export using ctypes. Since the mangled name isn't a valid Python identifier, you'll need getattr() rather than normal attribute access.
Another approach, if you have the right information, is to find the export by ordinal, which you'd have to do if no name were exported at all. One way to get the ordinal is dumpbin.exe, included with many Windows development toolchains. It's actually a front-end to the linker, so if you have MS link.exe, you can also use that with the appropriate command-line switches.
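ctypes supports that directly: indexing a loaded DLL object with an integer looks up the export at that ordinal. A minimal sketch (the DLL name and ordinal below are placeholders, not real values):
import ctypes

dll = ctypes.WinDLL("example.dll")  # placeholder DLL name
func_by_ordinal = dll[17]           # placeholder ordinal, e.g. taken from dumpbin /exports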
To get the function reference (which is a "function-pointer" object bound to the address of it), you can use something like:
import ctypes
# look up the export by its (mangled) name; getattr() is needed because
# "##myfunc" is a placeholder for a name that isn't a valid Python identifier
func = getattr(ctypes.windll.msvcrt, "##myfunc")
retval = func(None)
Naturally, you'd replace the 'msvcrt' with the dll you specifically want to call.
What I don't show here is how to unmangle the name to derive the calling signature, and thus the arguments necessary. Doing that would require a demangler, and those are very specific to the brand AND VERSION of C++ compiler used to create the DLL.
There is a certain amount of error checking if the function is stdcall, so you can sometimes fiddle with things till you get them right. But if the function is cdecl, then there's no way to automatically check. Likewise you have to remember to include the extra this parameter if appropriate.
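Which convention ctypes assumes is decided by how you load the library: WinDLL treats exports as stdcall (and does the limited checking mentioned above), while CDLL treats them as cdecl and cannot check. A short sketch with a placeholder DLL name:
import ctypes

stdcall_lib = ctypes.WinDLL("example.dll")  # exports called with the stdcall convention
cdecl_lib = ctypes.CDLL("example.dll")      # exports called with the cdecl convention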