Is there a way to add functions I create to the Python standard library on my local machine?
I come from the MATLAB world, where things aren't really efficient and fast, but there are looooads of functions at my fingertips without having to import their files. My problem is that if I make a function in Python and want to use it, then I will also need to remember the module it's in. My memory is shite. I understand that Python is structured this way for efficiency, but if I'm adding only a handful of functions to the standard library that I consider very important, I'd guess the impact on performance is practically negligible.
Python has a namespace called __builtins__ in which you can stick stuff that you want available all the time. You probably shouldn't, but you can. Be careful not to clobber anything. Python won't stop you from using the same name as a built-in function, and if you do that, it'll probably break a lot of things.
# define function to always be available
def fart():
    print("poot!")
__builtins__.fart = fart
# make re module always available without import
import re
__builtins__.re = re
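One caveat: outside the main script, __builtins__ may be a plain dict rather than a module (a CPython implementation detail), so attribute assignment can fail there. In Python 3, the builtins module is the documented way to reach the same namespace; a minimal sketch:
import builtins
import re

def fart():
    print("poot!")

builtins.fart = fart  # now callable from any module without an import
builtins.re = re      # and re is available everywhere without an import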
Now the question is how to get Python to run that code for you each time you start up the interpreter. The answer is usercustomize.py. Follow these instructions to find out where the correct directory is on your machine, then put a new file called usercustomize.py in that directory that defines all the stuff you want to have in __builtins__.
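If you'd rather not follow the docs by hand, you can ask the site module for the user site-packages directory directly; a usercustomize.py placed there is picked up automatically (as long as site processing isn't disabled with -S):
import site
print(site.getusersitepackages())  # usercustomize.py goes in this directory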
There's also an environment variable, PYTHONSTARTUP, that you can set to have a Python script run whenever you start the interpreter in interactive mode (i.e. to a command prompt). I can see the benefit of e.g. having your favorite modules available when exploring in the REPL. More details here.
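For example, you could point the variable at a file that pre-imports whatever you like; the file name below is just a convention I made up:
# ~/.pythonstartup -- runs at the start of every interactive session;
# enable it with: export PYTHONSTARTUP=~/.pythonstartup  (in your shell profile)
import os
import re
import sys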
It sounds like you want to create your own packages & modules with tools you plan on using in the future on other projects. If that is the case, you want to look into the Packaging Python Projects documentation:
https://packaging.python.org/tutorials/packaging-projects/
You may also find this useful:
How to install a Python package system-wide on Linux?
How to make my Python module available system wide on Linux?
How can I create a simple system wide python library?
I am setting sys.modules['os'] = None to restrict the os module in my Python notebook. But I want to restrict it by default: is there any file in /bin where I can add this line?
If not, is it possible with RestrictedPython?
I don't think you can do that, but you could create a virtualenv and delete those modules there.
First, there is no true sandboxing in Python (you can also try PyPy, which claims that this is achievable all the way down to syscalls via rather nontrivial hooking inside their VM). But what you can try right now is the runpy module from the stdlib. It provides a way to run your module inside a restricted environment (yet not a sandbox) by providing this environment explicitly as a dict. Since the import statement runs the __import__ function under the hood, this function can be overridden so that it refuses certain module names. Though I am not sure how to force Jupyter (or whatever you are using) to run in the discussed mode.
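To make the idea concrete, here is a minimal sketch of overriding __import__ so that it refuses certain names (the blocked set is just an example, and this is not a real sandbox: code can simply restore builtins.__import__):
import builtins

_real_import = builtins.__import__
_blocked = {'os', 'subprocess'}  # example set of refused modules

def _guarded_import(name, globals=None, locals=None, fromlist=(), level=0):
    # reject both 'os' and dotted forms like 'os.path'
    if name.split('.')[0] in _blocked:
        raise ImportError('import of %r is disabled' % name)
    return _real_import(name, globals, locals, fromlist, level)

builtins.__import__ = _guarded_import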
I need to ship a collection of Python programs that use multiple packages stored in a local Library directory: the goal is to avoid having users install packages before using my programs (the packages are shipped in the Library directory). What is the best way of importing the packages contained in Library?
I tried three methods, but none of them appears perfect: is there a simpler, more robust method? Or is one of these methods the best one can do?
In the first method, the Library folder is simply added to the library path:
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
import package_from_Library
The Library folder is put at the beginning so that the packages shipped with my programs have priority over the same modules installed by the user (this way I am sure that they have the correct version to work with my programs). This method also works when the Library folder is not in the current directory, which is good. However, this approach has drawbacks. Each and every one of my programs adds a copy of the same path to sys.path, which is a waste. In addition, all programs must contain the same three path-modifying lines, which goes against the Don't Repeat Yourself principle.
An improvement over the above is to add the Library path only once, by doing it in an imported module:
# In module add_Library_path:
import os
import sys

sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'Library'))
and then to use, in each of my programs:
import add_Library_path
import package_from_Library
This way, thanks to CPython's module caching, the add_Library_path module is only run once, and the Library path is added only once to sys.path. However, a drawback of this approach is that import add_Library_path has an invisible side effect, and that the order of the imports matters: this makes the code less legible and more fragile. Also, this forces my distribution of programs to include an add_Library_path.py module that users will not use directly.
Python modules from Library can also be imported by making Library itself a package (an empty __init__.py file stored inside), which allows one to do:
from Library import module_from_Library
However, this fails for packages in Library, as they might do something like from xlutils.filter import …, which breaks because xlutils is not found in sys.path. So, this method works only for modules in Library, not packages.
All these methods have drawbacks.
Is there a better way of shipping programs with a collection of packages (that they use) stored in a local Library directory? Or is one of the methods above (method 1?) the best one can do?
PS: In my case, all the packages from Library are pure Python packages, but a more general solution, which works for any operating system, would be best.
PPS: The goal is that the user be able to use my programs without having to install anything (beyond copying the directory I ship them regularly), like in the examples above.
PPPS: More precisely, the goal is to have the flexibility of easily updating both my collection of programs and their associated third-party packages from Library by having my users do a simple copy of a directory containing my programs and the Library folder of "hidden" third-party packages. (I do frequent updates, so I prefer not forcing the users to update their Python distribution too.)
Messing around with sys.path leads to pain... The modern-package-template and Distribute contain a vast array of information and were in part set up to solve your problem.
What I would do is set up setup.py to install all your packages to a specific site-packages location or, if you can, to the system's site-packages. In the former case, the local site-packages directory would then be added to the system's or user's PYTHONPATH. In the latter case, nothing needs to change.
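For instance, a minimal setup.py along these lines (the project name and layout are placeholders) lets users, or an installer script you ship, put everything into site-packages in one step; running python setup.py install --user targets the per-user site-packages:
# setup.py -- minimal sketch; name and packages are placeholders
from setuptools import setup, find_packages

setup(
    name='myprograms',
    version='1.0',
    packages=find_packages(),
)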
You could use a batch file to set the Python path as well. Or change the Python executable to point to a shell script that sets a modified PYTHONPATH and then executes the Python interpreter. The latter, of course, means that you have to have access to the user's machine, which you do not. However, if your users only run scripts and do not import your own libraries, you could use your own wrapper for scripts:
#!/path/to/my/python
And the /path/to/my/python script would be something like:
#!/bin/sh
# "$@" preserves argument word boundaries, unlike $*
PYTHONPATH=/whatever/lib/path:$PYTHONPATH /usr/bin/python "$@"
I think you should have a look at path import hooks, which allow you to modify Python's behaviour when it searches for modules.
For example, you could try to do something like KDE's script engine does for Python plugins [1].
It adds a special token to sys.path (like "<plasmaXXXXXX>", with XXXXXX being a random number, just to avoid name collisions), and then, when Python tries to import modules and can't find them on the other paths, it calls your importer, which can deal with them.
A simpler alternative is to have a main script used as a launcher, which simply adds the path to sys.path and executes the target file (so that you can safely avoid putting the sys.path.append(...) line in every file).
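A minimal sketch of such a launcher (the file layout is assumed: the launcher, Library/, and the target scripts side by side):
# launcher.py -- run as: python launcher.py some_program.py [args...]
import os
import runpy
import sys

# give the bundled Library directory priority over installed packages
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, 'Library'))

# drop the launcher's own name so the target script sees its usual argv
sys.argv = sys.argv[1:]
runpy.run_path(sys.argv[0], run_name='__main__')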
Yet another alternative, which works on Python 2.6+, would be to install the library under the per-user site-packages directory.
[1] You can find the source code under /usr/share/kde4/apps/plasma_scriptengine_python in a Linux installation with KDE.
I know that it is possible to write bash extension modules (loadable builtins) in C or Lua (see luabash), but is it possible in Python/Cython? Are there any projects that make steps in this direction?
The way you would probably do this is to start out with a C library that exports the appropriate functions; then, within an exported function, load and run the Python interpreter, run your Python code, and finally tear down the interpreter.
You can see how to load the Python interpreter into a C program/library here:
http://docs.python.org/extending/embedding.html
http://docs.python.org/extending/extending.html#calling-python-functions-from-c
http://www.linuxjournal.com/article/8497
If you do this a lot, then it may be simpler to write a single generic handler which you can use with multiple different Python scripts.
I used Bash examples and the linked resources @tylerl mentioned to make bashpy. It's a proof of concept and currently lacks support for both passing variables and calling functions. So it's not very useful yet, but maybe it can help someone ending up here.
Recently I have become a fan of storing various settings used for my testing scripts in the OSX defaults system as it allows me to keep various scripts in git and push them to github without worrying about leaving passwords/settings/etc hardcoded into the script.
When writing a shell script using simple bash commands, it is easy enough to use backticks to call the defaults binary to read the preferences; if there is an error reading a preference, the script stops execution and you can see the error and fix it. When I try to do a similar thing in Python or Ruby, it tends to be a little more annoying, since you have to do additional work to check the return code of defaults to see if there was an error.
I have been attempting to search via google off and on for a library to use the OSX defaults system which ends up being somewhat difficult when "defaults" is part of your query string.
I thought of trying to read the plist files directly, but it seems that the plist libraries I have found (such as the built-in Python one) can only read the XML ones (not the binary ones), which is a problem if I ever set anything with the defaults program, since it will convert the plist back to binary.
Recently, while trying another search for a Python library, I changed the search terms to something like NSUserDefaults (I have now forgotten the exact term) and found a Python library called userdefaults, but it was developed for an older version of OSX (10.2) with an older version of Python (2.3), and I have not had much luck getting it to compile on OSX 10.6 with Python 2.6.
Ideally, I would like to find a library that makes it easy to read from (and, as a bonus, write to) the OSX defaults system in a way similar to the following Python pseudocode.
from some.library.defaults import defaults
settings = defaults('com.example.app')
print settings['setting_key']
Since I am also starting to use Ruby more, I would also like to find a Ruby library with similar functionality.
It may be that I have to eventually just 'give up' and write my own simple library around the defaults binary but I thought it wouldn't hurt to try to query others to see if there was an existing solution.
You'll want to use PyObjC: have a look at this article at mactech.com (specifically: scroll down to "Accessing plists Via Python"), and this article from O'Reilly on PyObjC.
Run this, for example:
from Foundation import *
# the application's shared defaults object
standardUserDefaults = NSUserDefaults.standardUserDefaults()
# names of all domains that have persistent defaults (roughly one per app)
persistentDomains = standardUserDefaults.persistentDomainNames()
# pick one domain name (the index 14 here is arbitrary)
persistentDomains.objectAtIndex_(14)
# fetch that domain's settings as a dictionary-like object
aDomain = standardUserDefaults.persistentDomainForName_(persistentDomains[14])
aDomain.keys()
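Mapped onto the pseudocode from the question, reading a single application's domain with PyObjC looks roughly like this (a sketch; the domain name is an example, and the available keys depend on the application):
from Foundation import NSUserDefaults

defaults = NSUserDefaults.standardUserDefaults()
# returns None if the domain does not exist
settings = defaults.persistentDomainForName_('com.apple.Terminal')
if settings is not None:
    # the NSDictionary bridges to dict-style access, e.g. settings['SomeKey']
    print(list(settings.keys()))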
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.
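For example, tokenize can walk source text token by token, which is a reasonable starting point for mechanical edits; a minimal sketch (the one-line source string is just an illustration):
import io
import tokenize

src = "x = 1  # a configuration value\n"
# generate_tokens() takes a readline callable and yields token tuples
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tok.type, tok.string)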
Most of these kinds of things can be determined programmatically in Python, using modules like sys and os, and the special __file__ identifier, which tells you where you are in the filesystem path.
It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).
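A toy version of that pattern, with the implementation chosen once at import time (the module and function names are invented for illustration):
# mytool.py -- picks an implementation when first imported
import sys

if sys.platform.startswith('win'):
    def clear_screen_command():
        return 'cls'
else:
    def clear_screen_command():
        return 'clear'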
There's a lot of power in this feature and something along these lines is probably what you're looking for. :)
[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
I had the same issue, and I simply opened the file, did some string replacement, and then reloaded the file in the Python interpreter. This works fine and is easy to do.
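A minimal sketch of that approach (the module name and the setting being replaced are made up):
# assumes a file config_module.py next to this script containing "DEBUG = False"
import importlib
import config_module

with open('config_module.py') as f:
    src = f.read()
with open('config_module.py', 'w') as f:
    f.write(src.replace('DEBUG = False', 'DEBUG = True'))

importlib.reload(config_module)  # pick up the edited source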
Otherwise, AFAIK, you have to use some kind of configuration objects.