How to avoid Python script name being regarded as namespace with Doxygen - python

I am writing documentation with Doxygen for Python scripts. Suppose the name of the script is all_the_best.py, and at the beginning of the script I document it as follows:
##
# @namespace scripts.python
# This is a python script
import os
...
I would expect the script to belong to the scripts.python namespace in the Namespaces tab of the generated HTML. However, I found that the Namespaces tab lists not only scripts.python but also all_the_best. Any ideas on how to avoid this? Thanks.

I tried a lot of things, and in my case I had to put EXTRACT_ALL = NO in the Doxyfile to have the second entry removed.
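That is, a minimal sketch of the relevant Doxyfile line (all other settings left as they were):

EXTRACT_ALL = NO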

Related

PyInstaller: Executable to access user-specified Sourcecode

I'm using PyInstaller to distribute my code as an executable within my team, as most of them are not coding/scripting people and do not have a Python interpreter installed.
For some advanced usage of my tool, I want to make it possible for the user to implement a small custom function to adjust the functionality slightly (for the few experienced people). Hence I want to let them supply a Python file which defines a function with a fixed name and a string as return value.
Is that possible?
I mean, the py-file could be drag/dropped, for example, and I'd tell them that their user-defined function needs to have a certain name, e.g. "analyze()". Is it possible to import that from the drag/dropped Python file within my PyInstaller script and use it as such?
I know it certainly will not be safe/secure, and they could do evil things, delete files and so on... But those are things we don't care about at this point, so please, no discussions about it. Thanks!
To answer my own question: yes, it does actually work to import a module/function from a given path/Python file at runtime (that I knew already), even under PyInstaller (that was new to me).
I used this in my Python 2.7 program:
import os
import imp  # Python 2.x only; for 3.x see https://stackoverflow.com/a/67692/701049

f = r'C:\path\to\userdefined\filewithfunction.py'
if os.path.exists(f):
    userdefined = imp.load_source('', f)
    print userdefined  # just a debugging print
    # In real code, wrap this in try/except, or check whether a function with
    # the desired name really exists on "userdefined". This is only a small
    # demo of how to import, so I didn't do that here.
    userdefined.imported()
filewithfunction.py:
--------------------
def imported():
    print 'yes it worked :-)'
As written in the comments of the example code, you'll need a slightly different approach in Python 3.x. See this link: https://stackoverflow.com/a/67692/701049
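For Python 3.x, here is a minimal sketch of the same idea using importlib, following the linked answer (the module name 'userdefined' and the hasattr guard are illustrative choices, not requirements of the API):

import importlib.util
import os

f = r'C:\path\to\userdefined\filewithfunction.py'
if os.path.exists(f):
    spec = importlib.util.spec_from_file_location('userdefined', f)
    userdefined = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(userdefined)      # runs the user's file
    if hasattr(userdefined, 'imported'):      # guard against a missing function
        userdefined.imported()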

Function not defined when Importing Maya Python Script

I have a tool I wrote in Python that works completely fine when run in the Maya script editor. However, I want to be able to import the script from the script directory, which should be simple, and I am shocked I can't find the solution while searching the web.
My script format is like this example:

import maya.cmds as cmds

# GUI code with buttons; they call the functions below.

def function1():
    # commands that do things
    pass

def function2():
    # commands that do things
    pass

# list of functions continues
Like I said, the program functions perfectly when run in the script editor. When saving the script to the directory and using this method:
import module
reload(module)
module.function()
The GUI loads fine, but when pushing the GUI buttons it says the functions are not defined. I don't understand what I am missing; if the script was loaded, shouldn't the functions be defined? Any help would be greatly appreciated, thank you!
Just because the GUI loads doesn't mean that all of your functions loaded properly. You need to put your file (module.py) in a directory that is visible to your PYTHONPATH. If you're in Maya, you can also put it on the MAYA_SCRIPT_PATH.
The PYTHONPATH / MAYA_SCRIPT_PATH are environment variables that you set before launching Maya. In a default Maya installation, some places where you could put your module.py file would be:
(Windows) C:\Users\YOUR_USER_NAME\Documents\maya\scripts
(Linux) ~/maya/scripts
(Mac) Not sure, but probably also ~/maya/scripts
If you want to know where else you can place it, you can run this:
import os
print(os.getenv('MAYA_SCRIPT_PATH', ''))
print(os.getenv('PYTHONPATH', ''))
Any location in that list that you have write permissions for is a fine place to add your module.py file.
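If you'd rather not set environment variables at all, a sketch of an alternative (the path below is a placeholder for wherever your module.py lives) is to extend the search path at runtime in the script editor:

import sys
sys.path.append(r'C:\Users\YOUR_USER_NAME\Documents\maya\scripts')

import module
reload(module)  # builtin reload, for Maya's Python 2 interpreter
module.function1()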
Also, it's worth noting that in your example module.function() would fail; it'd need to be module.function1() or module.function2(), but I assume you know that. Hope this helps.
Sup guys.
I know this is an old question that has already been answered, but I have some extra information that helps with GUI functions not working (the same error), since there's basically nothing on this anywhere.
The script directory only helps when loading the module through the shelf; you will still get the same error, "function is not defined".
This has to do with how the function is called through the UI element.
For example, this will allow you to call the function in the script editor, but gives you the "function is not defined" error when the module is imported:
import pymel.core as pm

def GUI_function():
    pm.button(command="function()")

def function():
    # do stuff
    pass
This, on the other hand, works:
def GUI_function():
    pm.button(command=function)

def function(*_):
    # do stuff
    pass
I don't know why, but Maya tends to treat the string "function()" as a node type. So remove the brackets (pass the function object itself) if you're not using arguments, and you're good to go.
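If the callback does need arguments, one option (a sketch; the names here are illustrative) is to pass a callable built with functools.partial instead of a string:

import functools
import pymel.core as pm

def function(which, *_):  # *_ swallows the state flag Maya passes to button callbacks
    print('button %s pressed' % which)

def GUI_function():
    pm.button(command=functools.partial(function, 'A'))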

Use different config files - decision should be made via command-line argument

I have a script that uses a config file called config.py. Actually, this is rather a configuration module. Anyway: the configuration module contains a lot of parameters, dictionaries, lists of dictionaries, and so on.
In the script it is used like this today:

import config

def main():
    myParameter = config.myParameter
Now I have another application scenario for this script that uses a related config (config_advanced.py), where the parameters and dictionaries have different values.
My goal is now to choose the name of the config module via a command-line argument:
myScript.py -configuration config_advanced.py
Since the configuration module is in the same folder as the main script, I guess I would have to rename the passed configuration file to config.py first; afterwards I could perform import config. Otherwise, if I used import config_advanced, I wouldn't be able to use a call like
config.myParameter
in the main script.
Another possibility could be to put the configuration modules in subfolders and keep the name config.py; the passed command-line argument would then have to contain the subfolder.
Either way, I won't be able to perform the import at the top of the main file, since I have to do the argument parsing first. This isn't a technical problem, but someone said it is at least bad practice.
What do you think?
What is a better way to do the trick with not much effort?
Thanks a lot
Edit:
One working solution has been:

import sys

fullpath = "d:\\python\\scripts\\projectA\\configurationFiles\\"
sys.path.append(fullpath)
config = __import__('config_advanced')
Without the sys.path addition it does NOT work, so the following attempts won't work:
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced')
config = __import__('d:\\python\\scripts\\projectA\\configurationFiles\\config_advanced.py')
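A slightly more readable sketch of the same idea uses importlib.import_module (available in Python 2.7 and 3.x), with the same assumed folder layout:

import sys
import importlib

sys.path.append(r'd:\python\scripts\projectA\configurationFiles')
config = importlib.import_module('config_advanced')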
Another possibility that's similar to what you suggest in the question, but which doesn't need you to hide things in subfolders, is to put config_advanced.py and config_plain.py in the same folder as the main script and then dynamically make config.py a link to the actual config file you want to use.
However, martineau's suggestion is much simpler.
OTOH, georg brings up a very valid point, especially if this script isn't just for your own personal use. While using Python itself for the config data is flexible and powerful, it's perhaps a little too powerful. Config data should just be data, not live executable code. If you make a minor mistake when modifying config data you could cause havoc if it's in an executable file. And if a malicious user gets to it, there's no limit to the damage they could cause.
Bad data in a plain old data file will at worst cause a ValueError if it does something weird that your config parsing code isn't suspecting. But bad data in a live Python file could throw all sorts of nasty errors. Or even worse, it could do something evil in complete silence...
In reply to your comments, here's some code to illustrate the first point:
#!/usr/bin/env python
import os

config_file = "config.py"

def link_config(mode):
    if os.path.exists(config_file):
        os.remove(config_file)
    config_name = "config_%s.py" % mode
    os.symlink(config_name, config_file)

# ... parse command line to determine config_mode string, then do
link_config(config_mode)

# Now import the newly-linked config file
import config
If config_mode == "plain", the above code will cause config_plain.py to be imported as config; if config_mode == "advanced", it will cause config_advanced.py to be imported as config.
But as I said before, martineau's method is much simpler. And IIRC, os.symlink may not work on non-unix systems.
...
As for your second point, check out the docs for the json module.
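For instance, a minimal sketch of a plain-data config (the file name and key are illustrative):

import json

# config_advanced.json might contain: {"myParameter": 42}
with open('config_advanced.json') as fh:
    config = json.load(fh)

myParameter = config['myParameter']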

How to abstract 3rd party implementation details from the core functionality?

Here is an example of the problem and the requirement.
Scenario: we have a web app, and the controller has a function that does HTML-to-PDF conversion. We have to switch between various engines depending on requirements and advances in technology, which means we need to keep changing the core file. It could even be a web service that does the job rather than a local implementation.
Overall, the current architecture in pseudo-Python is:
Controller

def requestEntry():
    """Entry point"""
    html = REQUEST['html']
    css = REQUEST['css']
    createPDF(html, css)

def createPDF(html, css):
    """Sigh"""
    # hardcoding begins
    configured_engine = getXMLOption('configured_engine')
    if configured_engine == "SuperPDFGen":
        # returns SuperGen's PDF blob
        supergen = SuperGen(html, css)
        return supergen.pdfout()
    if configured_engine == "PISA":
        pisa = PISA(html, css)
        return pisa.pdf_out()
The problem here is that every new engine requirement means a code change in the controller, and if we find a new technology we have to upgrade the core software version.
Solving it:
One way I can think of is to define a simple wrapper like this:

def createPdf(data, engine):
    pdf_render = PDFRender(data, engine)
    return pdf_render.pdf_out()

So now we keep PDFRender abstracted; the core version need not change any more for an implementation change, but we still need to code the engines into an if-ladder in the PDFRender class.
To extend this idea, I could adopt the convention that the engine name is also the module name. But that won't adapt to a URL if one is given as the engine.
def createPdf(data, engine):
    # convert the string called "engine" into a module (sigh!)
    module = __import__(engine)
    pdf_render = module.Engine(data)
    return pdf_render()
Are there any suggested paradigms or a plugin/adapter mechanism for this? It should take a URL or a Python module as input and get the job done. New implementations should be standalone and pluggable into the existing code with no version changes to the core feature. How can I do that? I think the core should talk in terms of a service (either a Python module, a subprocess, or a URL); that is what I am trying to achieve.
Add a package where you stick your renderers, one Python module for each, the module being named after the engine and the renderer class being named PDFRenderer. Then, in the package's __init__, scan the modules in the directory, import the PDFRenderer class from each module, and build an enginename:PDFRenderer mapping; use it to look up the PDFRenderer class for a given engine.
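A minimal sketch of that __init__.py, assuming a package directory named renderers with one module per engine, each defining a PDFRenderer class (all names illustrative):

# renderers/__init__.py
import importlib
import pkgutil

RENDERERS = {}  # maps engine name -> renderer class

for _finder, name, _ispkg in pkgutil.iter_modules(__path__):
    module = importlib.import_module('.' + name, __name__)
    RENDERERS[name] = module.PDFRenderer

The controller then only needs RENDERERS[configured_engine](html, css), and adding an engine means dropping a new module into the package.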
It looks like right now you have to change a couple of lines of code each time you change pdf renderer. The code is simple and easy to understand.
Sure, you can abstract it all out, and use configuration files, and add a plugin architecture. But what will you gain? Instead of changing a couple of lines of code, you'll instead change a plugin or a configuration file, and that machinery will need to be maintained: for example, if you decide you need to pass in some flags to one of the renderers, then you have to add that functionality to your configuration file or whatever abstraction you've chosen.
How many times do you think you'll change pdf renderer? My guess would be at most once a year. Keep your code simple, and don't build complex solutions until you really need them.

How do I load multiple lldb type summaries Python files?

I posted a question a while back on how to add custom LLDB type summaries to Xcode, and found out that we can do so by loading a Python script.
However, I want to know if there's a way to load multiple Python files. I work with many different projects, so I want to have one summaries file for the general types that are used in all my projects, and one summaries file for project-specific types.
~/MyGenericSummaries.py

import lldb

def __lldb_init_module(debugger, dictionary):
    debugger.HandleCommand('type summary add --summary-string "these are words" MyGenericClass')

~/MyProjectSummaries.py

import lldb

def __lldb_init_module(debugger, dictionary):
    debugger.HandleCommand('type summary add --summary-string "these are more words" MyProjectClass')

~/.lldbinit

command script import ~/MyGenericSummaries.py
command script import ~/MyProjectSummaries.py
This never loads the type summary from MyProjectSummaries.py; LLDB just tells me:
error: module importing failed: module already imported
Is it possible to keep the generic summaries and project summaries in separate files? This would really help, because I have some type names that clash among different projects, so I'd rather split them off.
Many thanks :)
OK, I got it... with a bit of Python magic:
~/MyGenericSummaries.py

import lldb

def doLoad(debugger, dictionary):
    debugger.HandleCommand('type summary add --summary-string "these are words" MyGenericClass')

def __lldb_init_module(debugger, dictionary):
    doLoad(debugger, dictionary)

~/MyProjectSummaries.py

import lldb
from MyGenericSummaries import doLoad

def __lldb_init_module(debugger, dictionary):
    doLoad(debugger, dictionary)
    debugger.HandleCommand('type summary add --summary-string "these are more words" MyProjectClass')

~/.lldbinit

command script import ~/MyProjectSummaries.py
The only downside is that I'll need to tweak .lldbinit and restart Xcode every time I switch projects, but that's something I can live with.
I'm not clear why the original code did not work. From what you've quoted, I expect that to work.
You can certainly command script import multiple Python files in your ~/.lldbinit file; I do that all the time. From the error message, it looks like you already had a command script import ~/MyProjectSummaries.py in your ~/.lldbinit. Be careful to also look for a ~/.lldbinit-Xcode, which is sourced when Xcode is run (or ~/.lldbinit-lldb when the command-line lldb is being used; the general form is ~/.lldbinit-DRIVER_NAME for whichever lldb driver is running. This feature is useful if you want to enable certain settings only when the lldb library is being used inside Xcode, for instance).
You may want to put your type summary entries in per-project groups. If you do type summary list you will see that the built-in summaries are already grouped into categories like libcxx, VectorTypes, CoreGraphics, etc. These groups of summaries can be enabled or disabled with type category enable|disable|list|delete.
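For example, here is a sketch of registering the project summaries under their own category from a Python module (the category name "MyProject" is illustrative):

import lldb

def __lldb_init_module(debugger, dictionary):
    debugger.HandleCommand('type summary add --category MyProject --summary-string "these are more words" MyProjectClass')
    debugger.HandleCommand('type category enable MyProject')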
Command line lldb will also read a .lldbinit in the current working directory where it is run - although this doesn't help the Xcode case. For what you're doing, you really want a Project-specific lldbinit file. If you had these type summaries added in your ~/.lldbinit file, the project-specific lldbinit might just enable/disable the correct type summaries for this project. But there's no feature like this in Xcode right now.