I have a python script, distributed across several files, all within a single directory. I would like to distribute this as a single executable file (for linux systems, mainly), such that the file can be moved around and copied easily.
I've come as far as renaming my main file to __main__.py, and zipping everything into myscript.zip, so now I can run python myscript.zip. But that's one step too short. I want to run this as ./myscript, without creating an alias or a wrapper script.
Is this possible at all? What's the best way? I thought that maybe the zip file could be embedded in a (ba)sh script, that passes it to python (but without creating a temporary file, if possible).
EDIT:
After having another go at setuptools (I had not managed to make it work before), I could create a sort of self-contained script, an "eggsecutable script". The trick is that in this case you cannot have your main module named __main__.py. There are still a couple of issues: the resulting script cannot be renamed, and it still creates the __pycache__ directory when run. I solved these by modifying the shell-script code at the beginning of the file and adding the -B flag to the python command there.
EDIT (2): Not that easy either: it was only working because I still had my source .py files next to the "eggsecutable"; move something and it stops working.
You can edit the raw zip file as a binary and insert the shebang into the first line.
#!/usr/bin/env python
PK...rest of the zip
Of course you need an appropriate editor for this that can handle binary files (e.g. vim -b), or you can do it with a small bash script:
{ echo '#!/usr/bin/env python'; cat myscript.zip; } > myscript
chmod +x myscript
./myscript
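If manual byte surgery feels fragile, the standard library's zipapp module (Python 3.5+) automates the same prepend-a-shebang idea; here is a minimal sketch, assuming your sources live in a directory called myscript_src with a __main__.py inside (both names are placeholders):
import zipapp

# Bundle the directory into a single file with a shebang line prepended.
zipapp.create_archive(
    "myscript_src",                      # directory containing __main__.py and the other modules
    target="myscript",                   # resulting single-file archive
    interpreter="/usr/bin/env python3",  # becomes the shebang line
)
On POSIX systems the resulting file should already be marked executable when an interpreter is given; otherwise a chmod +x does the job.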
Firstly, there's the obligatory "this isn't the way things are normally done, are you sure you want to do it THIS way?" warning. That said, to answer your question and not try to substitute it with what someone else thinks you should do...
You can write a script and prepend it to a python egg. The script would extract the egg from itself, and obviously call exit before the egg file data is encountered. Egg files are importable but not executable, so the script would have to:
Extract the egg from itself as an egg file with a known name in the current directory
Run python -m egg
Delete the file
Exit
Sorry, I'm on a phone at the moment, so I'll update with actual code later
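In the meantime, here is a rough sketch of what such a self-extracting stub might look like, loosely following the four steps above (untested; the marker string, the bundled.egg name, and the mypackage entry module are all placeholders, not part of the original answer):
#!/usr/bin/env python
# Hypothetical stub: everything after MARKER in this very file is assumed to be raw egg/zip data.
import os
import subprocess
import sys

MARKER = b"#--EGG-DATA-FOLLOWS--\n"
EGG_NAME = "bundled.egg"  # known name in the current directory

def main():
    with open(__file__, "rb") as stub:
        egg_bytes = stub.read().split(MARKER, 1)[1]   # step 1: extract the appended egg
    with open(EGG_NAME, "wb") as egg:
        egg.write(egg_bytes)
    try:
        # step 2: run it (an egg/zip is importable once it is on sys.path)
        subprocess.check_call([
            sys.executable, "-c",
            "import sys; sys.path.insert(0, %r); import mypackage" % EGG_NAME,
        ])
    finally:
        os.remove(EGG_NAME)                           # step 3: delete the file
    sys.exit(0)                                       # step 4: exit

if __name__ == "__main__":
    main()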
Continuing my own attempts, I concocted something that works for me:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import glob, os.path
from setuptools import setup
files = [os.path.splitext(x)[0] for x in glob.glob('*.py')]
thisfile = os.path.splitext(os.path.basename(__file__))[0]
files.remove(thisfile)
setup(
    script_args=['bdist_wheel', '-d', '.'],
    py_modules=files,
)
# Now try to make the package runnable:
# Add the self-running trick code to the top of the file,
# rename it and make it executable
import sys, os, stat
exe_name = 'my_module'
magic_code = '''#!/bin/sh
name=`readlink -f "$0"`
exec {0} -c "import sys, os; sys.path.insert(0, '$name'); from my_module import main; sys.exit(main(my_name='$name'))" "$@"
'''.format(sys.executable)
wheel = glob.glob('*.whl')[0]
with open(exe_name, 'wb') as new:
    new.write(bytes(magic_code, 'ascii'))
    with open(wheel, 'rb') as original:
        data = True
        while data:
            data = original.read(4096)
            new.write(data)
os.remove(wheel)
st = os.stat(exe_name)
os.chmod(exe_name, st.st_mode | stat.S_IEXEC)
This creates a wheel with all *.py files in the current directory (except itself), and then adds the code to make it executable. exe_name is the final name of the file, and the from my_module import main; sys.exit(main(my_name='$name')) should be modified depending on each script, in my case I want to call the main method from my_module.py, which takes an argument my_name (the name of the actual file being run).
There's no guarantee this will run in a system different from the one it was created, but it is still useful to create a self-contained file from the sources (to be placed in ~/bin, for instance).
Another less hackish solution (I'm sorry for answering my own question twice, but this doesn't fit in a comment, and I think it is better in a separate box).
#!/usr/bin/env python3
modules = [
    ['my_aux', '''
def my_aux():
    return 7
'''],
    ['my_func', '''
from my_aux import my_aux
def my_func():
    print("and I'm my_func: {0}".format(my_aux()))
'''],
    ['my_script', '''
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import (unicode_literals, division, absolute_import, print_function)
import sys

def main(my_name):
    import my_aux
    print("Hello, I'm my_script: {0}".format(my_name))
    print(my_aux.my_aux())
    import my_func
    my_func.my_func()

if (__name__ == '__main__'):
    sys.exit(main(__file__))
'''],
]
import sys, types
for m in modules:
    module = types.ModuleType(m[0])
    exec(m[1], module.__dict__)
    sys.modules[m[0]] = module
del modules
from my_script import main
main(__file__)
I think this is clearer, although probably less efficient. All the needed files are included as strings (they could be zipped and base64-encoded first, for space efficiency). They are then imported as modules and the main method is run. Care should be taken to define the modules in the right order.
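As for zipping and base64-encoding the embedded sources, a minimal sketch of what hypothetical pack/unpack helpers could look like (the names are mine, not part of the answer):
import base64
import zlib

def pack(source_text):
    # compress the module source and encode it as an ASCII-safe string
    return base64.b64encode(zlib.compress(source_text.encode('utf-8'))).decode('ascii')

def unpack(blob):
    # reverse of pack(): recover the original source text
    return zlib.decompress(base64.b64decode(blob)).decode('utf-8')
The packed strings would then replace the plain sources in the modules list, with unpack() applied before each exec().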
Related
How to run another python script in a different folder?
I have a main program:
calculation_control.py
In the folder calculation_folder, there is calculation.py
How do I run calculation_folder/calculation.py from within calculation_control.py?
So far I have tried the following code:
calculation_file = folder_path + "calculation.py"
if not os.path.isfile(parser_file):
    continue
subprocess.Popen([sys.executable, parser_file])
There are more than a few ways. I'll list them in order of inverted
preference (i.e., best first, worst last):
Treat it like a module: import file. This is good because it's secure, fast, and maintainable. Code gets reused as it's supposed
to be done. Most Python libraries run using multiple methods stretched
over lots of files. Highly recommended. Note that if your file is
called file.py, your import should not include the .py
extension at the end.
The infamous (and unsafe) exec command: execfile('file.py'). Insecure, hacky, usually the wrong answer.
Avoid where possible.
Spawn a shell process: os.system('python file.py'). Use when desperate.
Source: How can I make one python file run another?
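To make the quoted options concrete, here is a small sketch (the file and function names are placeholders) of the preferred import approach next to the spawn-a-process fallback:
import subprocess
import sys

# Option 1 (preferred): treat file.py as a module.
import file            # note: no .py extension
file.some_function()   # hypothetical function defined in file.py

# Option 3 (last resort): run it in a separate interpreter process.
subprocess.run([sys.executable, "file.py"])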
Solution
By default, Python only searches the directories on sys.path (which includes the directory of the script being run, but not its subfolders) for modules to import. However, you can work around this by adding the following code snippet to calculation_control.py...
import sys
sys.path.insert(0, 'calculation_folder') # Note: if this relative path doesn't work or produces errors, try replacing it with an absolute path
import calculation
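Alternatively, if you drop an (empty) __init__.py into calculation_folder, it becomes a package and no sys.path tweaking is needed; a sketch, assuming that file exists:
from calculation_folder import calculation   # works once calculation_folder/__init__.py exists

print(calculation.__file__)                  # proves the module was found and loaded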
I would like to see what is the best way to determine the current script directory in Python.
I discovered that, due to the many ways of calling Python code, it is hard to find a good solution.
Here are some problems:
__file__ is not defined if the script is executed with exec, execfile
__module__ is defined only in modules
Use cases:
./myfile.py
python myfile.py
./somedir/myfile.py
python somedir/myfile.py
execfile('myfile.py') (from another script, that can be located in another directory and that can have another current directory)
I know that there is no perfect solution, but I'm looking for the best approach that solves most of the cases.
The most used approach is os.path.dirname(os.path.abspath(__file__)) but this really doesn't work if you execute the script from another one with exec().
Warning
Any solution that uses the current directory will fail: it can differ depending on how the script is called, and it can even be changed inside the running script.
os.path.dirname(os.path.abspath(__file__))
is indeed the best you're going to get.
It's unusual to be executing a script with exec/execfile; normally you should be using the module infrastructure to load scripts. If you must use these methods, I suggest setting __file__ in the globals you pass to the script so it can read that filename.
There's no other way to get the filename in execed code: as you note, the CWD may be in a completely different place.
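A small sketch of that suggestion, passing __file__ (and __name__) explicitly in the globals handed to the exec'd code; the path is a placeholder:
script_path = "/some/dir/myfile.py"   # placeholder path

with open(script_path) as f:
    source = f.read()

# The executed script can now read __file__ just as if it had been run directly.
exec(compile(source, script_path, "exec"),
     {"__file__": script_path, "__name__": "__main__"})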
If you really want to cover the case that a script is called via execfile(...), you can use the inspect module to deduce the filename (including the path). As far as I am aware, this will work for all cases you listed:
filename = inspect.getframeinfo(inspect.currentframe()).filename
path = os.path.dirname(os.path.abspath(filename))
In Python 3.4+ you can use the simpler pathlib module:
from inspect import currentframe, getframeinfo
from pathlib import Path
filename = getframeinfo(currentframe()).filename
parent = Path(filename).resolve().parent
You can also use __file__ (when it's available) to avoid the inspect module altogether:
from pathlib import Path
parent = Path(__file__).resolve().parent
#!/usr/bin/env python
import inspect
import os
import sys
def get_script_dir(follow_symlinks=True):
    if getattr(sys, 'frozen', False):  # py2exe, PyInstaller, cx_Freeze
        path = os.path.abspath(sys.executable)
    else:
        path = inspect.getabsfile(get_script_dir)
    if follow_symlinks:
        path = os.path.realpath(path)
    return os.path.dirname(path)
print(get_script_dir())
It works on CPython, Jython, PyPy.
It works if the script is executed using execfile() (sys.argv[0] and __file__-based solutions would fail here).
It works if the script is inside an executable zip file (/an egg).
It works if the script is "imported" (PYTHONPATH=/path/to/library.zip python -mscript_to_run) from a zip file; it returns the archive path in this case.
It works if the script is compiled into a standalone executable (sys.frozen).
It works for symlinks (realpath eliminates symbolic links).
It works in an interactive interpreter; it returns the current working directory in this case.
The os.path... approach was the 'done thing' in Python 2.
In Python 3, you can find the directory of the script as follows:
from pathlib import Path
script_path = Path(__file__).parent
Note: this answer is now a package (also with safe relative importing capabilities)
https://github.com/heetbeet/locate
$ pip install locate
$ python
>>> from locate import this_dir
>>> print(this_dir())
C:/Users/simon
For .py scripts as well as interactive usage:
I frequently use the directory of my scripts (for accessing files stored alongside them), but I also frequently run these scripts in an interactive shell for debugging purposes. I define this_dir as:
When running or importing a .py file, the file's base directory. This is always the correct path.
When running an .ipynb notebook, the current working directory. This is always the correct path, since Jupyter sets the working directory to the .ipynb base directory.
When running in a REPL, the current working directory. Hmm, what is the actual "correct path" when the code is detached from a file? Rather, make it your responsibility to change into the "correct path" before invoking the REPL.
Python 3.4 (and above):
from pathlib import Path
this_dir = Path(globals().get("__file__", "./_")).absolute().parent
Python 2 (and above):
import os
this_dir = os.path.dirname(os.path.abspath(globals().get("__file__", "./_")))
Explanation:
globals() returns all the global variables as a dictionary.
.get("__file__", "./_") returns the value from the key "__file__" if it exists in globals(), otherwise it returns the provided default value "./_".
The rest of the code just expands __file__ (or "./_") into an absolute filepath, and then returns the filepath's base directory.
Alternative:
If you know for certain that __file__ is available to your surrounding code, you can simplify to this:
>= Python 3.4: this_dir = Path(__file__).absolute().parent
>= Python 2: this_dir = os.path.dirname(os.path.abspath(__file__))
Would
import os
cwd = os.getcwd()
do what you want? I'm not sure what exactly you mean by the "current script directory". What would the expected output be for the use cases you gave?
Just use os.path.dirname(os.path.abspath(__file__)) and examine very carefully whether there is a real need for the case where exec is used. It could be a sign of troubled design if you are not able to use your script as a module.
Keep in mind Zen of Python #8, and if you believe there is a good argument for a use-case where it must work for exec, then please let us know some more details about the background of the problem.
First, a couple of missing use-cases here, if we're talking about ways to inject anonymous code:
code.compile_command()
code.interact()
imp.load_compiled()
imp.load_dynamic()
imp.load_module()
__builtin__.compile()
loading C-compiled shared objects (for example: _socket)
But the real question is: what is your goal - are you trying to enforce some sort of security? Or are you just interested in what's being loaded?
If you're interested in security, the filename that is being imported via exec/execfile is inconsequential - you should use rexec, which offers the following:
This module contains the RExec class,
which supports r_eval(), r_execfile(),
r_exec(), and r_import() methods, which
are restricted versions of the standard
Python functions eval(), execfile() and
the exec and import statements. Code
executed in this restricted environment
will only have access to modules and
functions that are deemed safe; you can
subclass RExec add or remove capabilities as
desired.
However, if this is more of an academic pursuit.. here are a couple goofy approaches that you
might be able to dig a little deeper into..
Example scripts:
./deep.py
print ' >> level 1'
execfile('deeper.py')
print ' << level 1'
./deeper.py
print '\t >> level 2'
exec("import sys; sys.path.append('/tmp'); import deepest")
print '\t << level 2'
/tmp/deepest.py
print '\t\t >> level 3'
print '\t\t\t I can see the earths core.'
print '\t\t << level 3'
./codespy.py
import sys, os
def overseer(frame, event, arg):
    print "loaded(%s)" % os.path.abspath(frame.f_code.co_filename)
sys.settrace(overseer)
execfile("deep.py")
sys.exit(0)
Output
loaded(/Users/synthesizerpatel/deep.py)
>> level 1
loaded(/Users/synthesizerpatel/deeper.py)
>> level 2
loaded(/Users/synthesizerpatel/<string>)
loaded(/tmp/deepest.py)
>> level 3
I can see the earths core.
<< level 3
<< level 2
<< level 1
Of course, this is a resource-intensive way to do it: you'd be tracing all your code, which is not very efficient. But I think it's a novel approach, since it continues to work even as you get deeper into the nest.
You can't override 'eval', although you can override execfile(). Note that this approach only covers exec/execfile, not 'import'.
For higher-level 'module' load hooking you might be able to use sys.path_hooks (write-up courtesy of PyMOTW).
That's all I have off the top of my head.
Here is a partial solution, still better than all published ones so far.
import sys, os, os.path, inspect
#os.chdir("..")
if '__file__' not in locals():
    __file__ = inspect.getframeinfo(inspect.currentframe())[0]
print os.path.dirname(os.path.abspath(__file__))
Now this works with all calls, but if someone uses chdir() to change the current directory, this will also fail.
Notes:
sys.argv[0] is not going to work; it will return -c if you execute the script with python -c "execfile('path-tester.py')"
I published a complete test at https://gist.github.com/1385555 and you are welcome to improve it.
To get the absolute path to the directory containing the current script you can use:
from pathlib import Path
absDir = Path(__file__).parent.resolve()
Please note the .resolve() call is required, because that is the one making the path absolute. Without resolve(), you would obtain something like '.'.
This solution uses pathlib, which has been part of Python's stdlib since v3.4 (2014). This is preferable to other solutions using os.
The official pathlib documentation has a useful table mapping the old os functions to the new ones: https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module
This should work in most cases:
import os,sys
dirname=os.path.dirname(os.path.realpath(sys.argv[0]))
Hopefully this helps:
If you run a script/module from anywhere, you'll be able to access the __file__ variable, which is a module variable representing the location of the script.
On the other hand, if you're using the interpreter you don't have access to that variable; there you'll get a NameError, and os.getcwd() will give you the incorrect directory if you're running the file from somewhere else.
This solution should give you what you're looking for in all cases:
from inspect import getsourcefile
from os.path import abspath
abspath(getsourcefile(lambda:0))
I haven't thoroughly tested it but it solved my problem.
If __file__ is available:
# -- script1.py --
import os
file_path = os.path.abspath(__file__)
print(os.path.dirname(file_path))
For those who want to be able to run the command from the interpreter, or to get the path of the place you're running the script from:
# -- script2.py --
import os
print(os.path.abspath(''))
This works from the interpreter.
But when run in a script (or imported) it gives the path of the place where
you ran the script from, not the path of the directory containing
the script with the print.
Example:
If your directory structure is
test_dir (in the home dir)
├── main.py
└── test_subdir
├── script1.py
└── script2.py
with
# -- main.py --
import script1
import script2
The output is:
~/test_dir/test_subdir
~/test_dir
As previous answers require you to import some module, I thought that I would write one answer that doesn't. Use the code below if you don't want to import anything.
this_dir = '/'.join(__file__.split('/')[:-1])
print(this_dir)
If the script is at /path/to/script.py then this will print /path/to. Note that this will throw an error if run in the interactive interpreter, since no file is being executed there. It basically parses the directory from __file__ by removing its last part; in this case /script.py is removed to produce /path/to.
print(__import__("pathlib").Path(__file__).parent)
I have 3 python files (first.py, second.py, third.py). I'm executing the 2nd python file from the 1st python file. The 2nd python file uses the 'import' statement to make use of the 3rd python file. This is what I'm doing.
This is my code.
first.py
import os
file_path = "folder\second.py"
os.system(file_path)
second.py
import third
...
(rest of the code)
third.py (which contains ReportLab code for generating PDF )
....
canvas.drawImage('xyz.jpg',0.2*inch, 7.65*inch, width=w*scale, height=h*scale)
....
When I'm executing this code, it gives this error:
IOError: Cannot open resource "xyz.jpg"
But when I execute the second.py file directly by writing python second.py, everything works fine!
I even tried this code:
file_path = "folder\second.py"
execfile(file_path)
But it gives this error,
ImportError: No module named third
But as I stated, everything works fine if I execute the second.py file directly!
Why is this happening? Is there a better way to execute nested python files like this?
Any idea or suggestions would be greatly appreciated.
I used these three files just to give the basic idea of my structure. You can consider this flow of execution as a single process. There are many processes like this, and each file contains thousands of lines of code. That's why I can't modularize the whole codebase for use with import statements. :-(
So the question is how to make a single python file that will take care of executing all the other processes. (If we execute each process individually, everything works fine.)
This should be easy if you do it the right way. There are a couple of steps you can follow to set it up.
Step 1: Set your files up to be run or imported
#!/usr/bin/env python

def main():
    do_stuff()

if __name__ == '__main__':
    main()
The __name__ special variable will contain __main__ when invoked as a script, and the module name if imported. You can use that to provide a file that can be used either way.
Step 2: Make your subdirectory a package
If you add an empty file called __init__.py to folder, it becomes a package that you can import.
Step 3: Import and run your scripts
from folder import first, second, third
first.main()
second.main()
third.main()
The way you are doing things is invalid.
You should create a main application and import 1, 2, 3 into it.
In 1, 2, 3 you should define things as functions, then call them from the main application.
IMHO: I don't think you have so much code that it needs separate files; you could also put it all into one file with function definitions and call them properly.
I second S.Lott: You really should rethink your design.
But just to provide an answer to your specific problem:
From what I can guess so far, you have second.py and third.py in folder, along with xyz.jpg. To make this work, you will have to change your working directory first. Try it in this way in first.py:
import os
....
os.chdir('folder')
execfile('second.py')
Try reading about the os module.
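Since execfile() no longer exists in Python 3, a rough equivalent of the snippet above under the same chdir approach would be (a sketch, not from the original answer):
import os

os.chdir('folder')
with open('second.py') as f:
    code = compile(f.read(), 'second.py', 'exec')
exec(code, {'__name__': '__main__', '__file__': 'second.py'})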
Future readers:
Pradyumna's answer from here solved Moin Ahmed's second issue for me:
import sys, change sys.path by appending the path at runtime, then import the module that will help
[i.e. sys.path.append(execfile's directory)]
I don't think this has been asked before: I have a folder that has lots of different .py files. The script I've made only uses some, but some call others and I don't know all the ones being used. Is there a program that will gather everything needed to make that script run into one folder?
Cheers!
Use the modulefinder module in the standard library, see e.g. http://docs.python.org/library/modulefinder.html
# zipmod.py - make a zip archive consisting of Python modules and their dependencies as reported by modulefinder
# To use: cd to the directory containing your Python module tree and type
# $ python zipmod.py archive.zip mod1.py mod2.py ...
# Only modules in the current working directory and its subdirectories will be included.
# Written and tested on Mac OS X, but it should work on other platforms with minimal modifications.
import modulefinder
import os
import sys
import zipfile
def main(output, *mnames):
    mf = modulefinder.ModuleFinder()
    for mname in mnames:
        mf.run_script(mname)
    cwd = os.getcwd()
    zf = zipfile.ZipFile(output, 'w')
    for mod in mf.modules.itervalues():
        if not mod.__file__:
            continue
        modfile = os.path.abspath(mod.__file__)
        if os.path.commonprefix([cwd, modfile]) == cwd:
            zf.write(modfile, os.path.relpath(modfile))
    zf.close()

if __name__ == '__main__':
    main(*sys.argv[1:])
Freeze does pretty close to what you describe. It does an extra step of generating C files to create a stand-alone executable, but you could use the log output it produces to get the list of modules your script uses. From there it's a simple matter to copy them all into a directory to be zipped up
(or whatever).
I want to detect whether a module has changed. Now, using inotify is simple; you just need to know the directory you want to get notifications from.
How do I retrieve a module's path in python?
import a_module
print(a_module.__file__)
Will actually give you the path to the .pyc file that was loaded, at least on Mac OS X. So I guess you can do:
import os
path = os.path.abspath(a_module.__file__)
You can also try:
path = os.path.dirname(a_module.__file__)
To get the module's directory.
There is the inspect module in Python.
Official documentation
The inspect module provides several useful functions to help get
information about live objects such as modules, classes, methods,
functions, tracebacks, frame objects, and code objects. For example,
it can help you examine the contents of a class, retrieve the source
code of a method, extract and format the argument list for a function,
or get all the information you need to display a detailed traceback.
Example:
>>> import os
>>> import inspect
>>> inspect.getfile(os)
'/usr/lib64/python2.7/os.pyc'
>>> inspect.getfile(inspect)
'/usr/lib64/python2.7/inspect.pyc'
>>> os.path.dirname(inspect.getfile(inspect))
'/usr/lib64/python2.7'
As the other answers have said, the best way to do this is with __file__ (demonstrated again below). However, there is an important caveat, which is that __file__ does NOT exist if you are running the module on its own (i.e. as __main__).
For example, say you have two files (both of which are on your PYTHONPATH):
#/path1/foo.py
import bar
print(bar.__file__)
and
#/path2/bar.py
import os
print(os.getcwd())
print(__file__)
Running foo.py will give the output:
/path1 # "import bar" causes the line "print(os.getcwd())" to run
/path2/bar.py # then "print(__file__)" runs
/path2/bar.py # then the import statement finishes and "print(bar.__file__)" runs
HOWEVER if you try to run bar.py on its own, you will get:
/path2 # "print(os.getcwd())" still works fine
Traceback (most recent call last): # but __file__ doesn't exist if bar.py is running as main
File "/path2/bar.py", line 3, in <module>
print(__file__)
NameError: name '__file__' is not defined
Hope this helps. This caveat cost me a lot of time and confusion while testing the other solutions presented.
I will try tackling a few variations on this question as well:
finding the path of the called script
finding the path of the currently executing script
finding the directory of the called script
(Some of these questions have been asked on SO, but have been closed as duplicates and redirected here.)
Caveats of Using __file__
For a module that you have imported:
import something
something.__file__
will return the absolute path of the module. However, given the following script foo.py:
#foo.py
print '__file__', __file__
Calling it with 'python foo.py' will return simply 'foo.py'. If you add a shebang:
#!/usr/bin/python
#foo.py
print '__file__', __file__
and call it using ./foo.py, it will return './foo.py'. If you call it from a different directory (e.g. put foo.py in directory bar), then calling either
python bar/foo.py
or adding a shebang and executing the file directly:
bar/foo.py
will return 'bar/foo.py' (the relative path).
Finding the directory
Now going from there to get the directory, os.path.dirname(__file__) can also be tricky. At least on my system, it returns an empty string if you call it from the same directory as the file. ex.
# foo.py
import os
print '__file__ is:', __file__
print 'os.path.dirname(__file__) is:', os.path.dirname(__file__)
will output:
__file__ is: foo.py
os.path.dirname(__file__) is:
In other words, it returns an empty string, so this does not seem reliable if you want to use it for the current file (as opposed to the file of an imported module). To get around this, you can wrap it in a call to abspath:
# foo.py
import os
print 'os.path.abspath(__file__) is:', os.path.abspath(__file__)
print 'os.path.dirname(os.path.abspath(__file__)) is:', os.path.dirname(os.path.abspath(__file__))
which outputs something like:
os.path.abspath(__file__) is: /home/user/bar/foo.py
os.path.dirname(os.path.abspath(__file__)) is: /home/user/bar
Note that abspath() does NOT resolve symlinks. If you want to do this, use realpath() instead. For example, making a symlink file_import_testing_link pointing to file_import_testing.py, with the following content:
import os
print 'abspath(__file__)',os.path.abspath(__file__)
print 'realpath(__file__)',os.path.realpath(__file__)
executing will print absolute paths something like:
abspath(__file__) /home/user/file_test_link
realpath(__file__) /home/user/file_test.py
file_import_testing_link -> file_import_testing.py
Using inspect
@SummerBreeze mentions using the inspect module.
This seems to work well, and is quite concise, for imported modules:
import os
import inspect
print 'inspect.getfile(os) is:', inspect.getfile(os)
obediently returns the absolute path. For finding the path of the currently executing script:
inspect.getfile(inspect.currentframe())
(thanks @jbochi)
inspect.getabsfile(inspect.currentframe())
gives the absolute path of currently executing script (thanks @Sadman_Sakib).
I don't get why no one is talking about this, but to me the simplest solution is using imp.find_module("modulename") (documentation here):
import imp
imp.find_module("os")
It gives a tuple with the path in second position:
(<open file '/usr/lib/python2.7/os.py', mode 'U' at 0x7f44528d7540>,
'/usr/lib/python2.7/os.py',
('.py', 'U', 1))
The advantage of this method over the "inspect" one is that you don't need to import the module to make it work, and you can use a string in input. Useful when checking modules called in another script for example.
EDIT:
In Python 3, the importlib module should do it:
Doc of importlib.util.find_spec:
Return the spec for the specified module.
First, sys.modules is checked to see if the module was already imported. If so, then sys.modules[name].__spec__ is returned. If that happens to be
set to None, then ValueError is raised. If the module is not in
sys.modules, then sys.meta_path is searched for a suitable spec with the
value of 'path' given to the finders. None is returned if no spec could
be found.
If the name is for submodule (contains a dot), the parent module is
automatically imported.
The name and package arguments work the same as importlib.import_module().
In other words, relative module names (with leading dots) work.
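For example, a short sketch of the Python 3 replacement for imp.find_module using that function:
import importlib.util

spec = importlib.util.find_spec("json")
if spec is not None:
    print(spec.origin)   # filesystem path the module would be loaded from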
This was trivial.
Each module has a __file__ variable that shows its relative path from where you are right now.
Therefore, getting a directory for the module to notify it is simple as:
os.path.dirname(__file__)
import os
path = os.path.abspath(__file__)
dir_path = os.path.dirname(path)
import module
print module.__path__
Packages support one more special attribute, __path__. This is
initialized to be a list containing the name of the directory holding
the package’s __init__.py before the code in that file is executed.
This variable can be modified; doing so affects future searches for
modules and subpackages contained in the package.
While this feature is not often needed, it can be used to extend the
set of modules found in a package.
Source
If you want to retrieve the module path without loading it:
import importlib.util
print(importlib.util.find_spec("requests").origin)
Example output:
/usr/lib64/python3.9/site-packages/requests/__init__.py
Command Line Utility
You can tweak it to a command line utility,
python-which <package name>
Create /usr/local/bin/python-which
#!/usr/bin/env python
import importlib
import os
import sys
args = sys.argv[1:]
if len(args) > 0:
    module = importlib.import_module(args[0])
    print os.path.dirname(module.__file__)
Make it executable
sudo chmod +x /usr/local/bin/python-which
you can just import your module,
then type its name and you'll get its full path:
>>> import os
>>> os
<module 'os' from 'C:\\Users\\Hassan Ashraf\\AppData\\Local\\Programs\\Python\\Python36-32\\lib\\os.py'>
>>>
So I spent a fair amount of time trying to do this with py2exe
The problem was to get the base folder of the script whether it was being run as a python script or as a py2exe executable. Also to have it work whether it was being run from the current folder, another folder or (this was the hardest) from the system's path.
Eventually I used this approach, using sys.frozen as an indicator of running in py2exe:
import os, sys

if hasattr(sys, 'frozen'):  # only when running in py2exe this exists
    base = sys.prefix
else:  # otherwise this is a regular python script
    base = os.path.dirname(os.path.realpath(__file__))
If you want to retrieve the package's root path from any of its modules, the following works (tested on Python 3.6):
from . import __path__ as ROOT_PATH
print(ROOT_PATH)
The main __init__.py path can also be referenced by using __file__ instead.
Hope this helps!
When you import a module, you have access to plenty of information. Check out dir(a_module). As for the path, there is a dunder for that: a_module.__path__. You can also just print the module itself.
>>> import a_module
>>> print(dir(a_module))
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']
>>> print(a_module.__path__)
['/.../.../a_module']
>>> print(a_module)
<module 'a_module' from '/.../.../a_module/__init__.py'>
If you would like to know the absolute path from your script, you can use a Path object:
from pathlib import Path
print(Path().absolute())
print(Path().resolve())
print(Path().cwd())
cwd() method
Return a new path object representing the current directory (as returned by os.getcwd())
resolve() method
Make the path absolute, resolving any symlinks. A new path object is returned:
If you installed it using pip, "pip show" works great ('Location')
$ pip show detectron2
Name: detectron2
Version: 0.1
Summary: Detectron2 is FAIR next-generation research platform for object detection and segmentation.
Home-page: https://github.com/facebookresearch/detectron2
Author: FAIR
Author-email: None
License: UNKNOWN
Location: /home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages
Requires: yacs, tabulate, tqdm, pydot, tensorboard, Pillow, termcolor, future, cloudpickle, matplotlib, fvcore
Update:
$ python -m pip show mymodule
(author: wisbucky)
If the only caveat of using __file__ is that the relative directory is blank (i.e., when running the script from the same directory where it lives), then a trivial solution is:
import os.path
mydir = os.path.dirname(__file__) or '.'
full = os.path.abspath(mydir)
print __file__, mydir, full
And the result:
$ python teste.py
teste.py . /home/user/work/teste
The trick is in or '.' after the dirname() call. It sets the dir as ., which means current directory and is a valid directory for any path-related function.
Thus, using abspath() is not truly needed. But if you use it anyway, the trick is not needed: abspath() accepts blank paths and properly interprets it as the current directory.
I'd like to contribute with one common scenario (in Python 3) and explore a few approaches to it.
The built-in function open() accepts either relative or absolute path as its first argument. The relative path is treated as relative to the current working directory though so it is recommended to pass the absolute path to the file.
Simply said, if you run a script file with the following code, it is not guaranteed that the example.txt file will be created in the same directory where the script file is located:
with open('example.txt', 'w'):
    pass
To fix this code we need to get the path to the script and make it absolute. To ensure the path to be absolute we simply use the os.path.realpath() function. To get the path to the script there are several common functions that return various path results:
os.getcwd()
os.path.realpath('example.txt')
sys.argv[0]
__file__
Both functions os.getcwd() and os.path.realpath() return path results based on the current working directory. Generally not what we want. The first element of the sys.argv list is the path of the root script (the script you run) regardless of whether you call the list in the root script itself or in any of its modules. It might come handy in some situations. The __file__ variable contains path of the module from which it has been called.
The following code correctly creates a file example.txt in the same directory where the script is located:
import os

filedir = os.path.dirname(os.path.realpath(__file__))
filepath = os.path.join(filedir, 'example.txt')
with open(filepath, 'w'):
    pass
From within modules of a python package I had to refer to a file that resided in the same directory as the package. For example:
some_dir/
maincli.py
top_package/
__init__.py
level_one_a/
__init__.py
my_lib_a.py
level_two/
__init__.py
hello_world.py
level_one_b/
__init__.py
my_lib_b.py
So in the above, I had to call maincli.py from the my_lib_a.py module, knowing that top_package and maincli.py are in the same directory. Here's how I get the path to maincli.py:
import sys
import os
import imp
class ConfigurationException(Exception):
    pass

# inside of my_lib_a.py
def get_maincli_path():
    maincli_path = os.path.abspath(imp.find_module('maincli')[1])
    # top_package = __package__.split('.')[0]
    # mod = sys.modules.get(top_package)
    # modfile = mod.__file__
    # pkg_in_dir = os.path.dirname(os.path.dirname(os.path.abspath(modfile)))
    # maincli_path = os.path.join(pkg_in_dir, 'maincli.py')
    if not os.path.exists(maincli_path):
        err_msg = 'This script expects that "maincli.py" be installed to the '\
                  'same directory: "{0}"'.format(maincli_path)
        raise ConfigurationException(err_msg)
    return maincli_path
Based on posting by PlasmaBinturong I modified the code.
If you wish to do this dynamically in a "program" try this code:
My point is, you may not know the exact name of the module to "hardcode" it.
It may be selected from a list, or it may not currently be running, so __file__ cannot be used.
(I know, it will not work in Python 3)
global modpath
modname = 'os' #This can be any module name on the fly
#Create a file called "modname.py"
f=open("modname.py","w")
f.write("import "+modname+"\n")
f.write("modpath = "+modname+"\n")
f.close()
#Call the file with execfile()
execfile('modname.py')
print modpath
<module 'os' from 'C:\Python27\lib\os.pyc'>
I tried to get rid of the "global" issue but found cases where it did not work.
I think execfile() can be emulated in Python 3.
Since this is in a program, it can easily be put in a method or module for reuse.
Here is a quick bash script in case it's useful to anyone. I just want to be able to set an environment variable so that I can pushd to the code.
#!/bin/bash
module=${1:?"I need a module name"}
python << EOI
import $module
import os
print os.path.dirname($module.__file__)
EOI
Shell example:
[root@sri-4625-0004 ~]# export LXML=$(get_python_path.sh lxml)
[root@sri-4625-0004 ~]# echo $LXML
/usr/lib64/python2.7/site-packages/lxml
[root@sri-4625-0004 ~]#
If your import is a site-package (e.g. pandas), I recommend this to get its directory (it does not work if the import is a plain module, like e.g. pathlib):
from importlib import resources # part of core Python
import pandas as pd
package_dir = resources.path(package=pd, resource="").__enter__()
In general importlib.resources can be considered when a task is about accessing paths/resources of a site package.
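On Python 3.9+ the same thing can be written without calling __enter__ by hand, using importlib.resources.files (a sketch, again assuming pandas is installed):
from importlib.resources import files
import pandas as pd

package_dir = files(pd)   # a Traversable pointing at the installed package directory
print(package_dir)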
If you used pip, then you can call pip show, but you must call it using the specific version of python that you are using. For example, these could all give different results:
$ python -m pip show numpy
$ python2.7 -m pip show numpy
$ python3 -m pip show numpy
Location: /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python
Don't simply run $ pip show numpy, because there is no guarantee that it will be the same pip that different python versions are calling.