Importing Python files that are in a folder - python

I've been working on a project that creates its own .py files to store handlers for its methods, and I've been trying to figure out how to keep those Python files in a folder and import them. Here is the code I'm using to create a file if it doesn't already exist and then import it:
if os.path.isfile("Btn" + str(self.ButtonSet[self.IntBtnID].IntPID) + ".py") == False:
    TestPy = open("Btn" + str(self.ButtonSet[self.IntBtnID].IntPID) + ".py", "w+")
    try:
        TestPy.write(StrHandler)
    except Exception as Error:
        print(Error)
    TestPy.close()
self.ButtonSet[self.IntBtnID].ImpHandler = __import__("Btn" + str(self.IntBtnID))
self.IntBtnID += 1
when I change this line:
self.ButtonSet[self.IntBtnID].ImpHandler = __import__("Btn" + str(self.IntBtnID))
to this:
self.ButtonSet[self.IntBtnID].ImpHandler = __import__("Buttons\\Btn" + str(self.IntBtnID))
the file can't be found and an error is thrown because Python can't find the file in the folder.
I do know why it doesn't work, I just don't know how to get around the issue :/
My question is: how do I import the .py file when it's stored in a folder?

There are a couple of unidiomatic things in your code that may be the cause of your issue. First of all, it is generally better to use the functions in os.path to manipulate paths to files. From your backslash usage, it appears you're working on Windows, but the os.path module ensures consistent behaviour across all platforms.
There is also importlib.import_module, which is usually recommended over __import__. Furthermore, if you want to load the generated module more than once during the lifetime of your program, you have to do that explicitly using importlib.reload.
One last tip: I'd factor out the module path to avoid having to change it in more than one place.
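A minimal sketch of how those suggestions could fit the question's code, assuming the generated modules live in a Buttons directory that is importable as a package (for older Pythons, give it an empty __init__.py); names mirror the question and are otherwise illustrative:
import importlib
import os

MODULE_DIR = "Buttons"  # factored out so the path only needs changing here

def load_button_handler(btn_id, handler_source):
    module_name = "Btn" + str(btn_id)
    file_path = os.path.join(MODULE_DIR, module_name + ".py")
    if not os.path.isfile(file_path):
        with open(file_path, "w") as handler_file:
            handler_file.write(handler_source)
    # Import Buttons.BtnN; reload it in case it was regenerated at runtime.
    module = importlib.import_module(MODULE_DIR + "." + module_name)
    return importlib.reload(module)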

You can't use a directory path when importing a module. Instead, add the directory to sys.path and then import the module by name.
import sys
sys.path.append("Buttons")
__import__("Btn" + str(self.IntBtnID))
See this SO question for more information.

The first argument to the __import__() function is the name of the module, not a path to it. Therefore I think you need to use:
self.ButtonSet[self.IntBtnID].ImpHandler = __import__("Buttons.Btn" + str(self.IntBtnID))
You may also need to put an empty __init__.py file in the Buttons folder to indicate it's a package of modules.
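One thing to watch out for (a sketch, assuming a Buttons folder containing Btn0.py next to your script): __import__("Buttons.Btn0") returns the top-level Buttons package rather than the submodule, so importlib.import_module is usually more convenient for getting at the handler module directly:
import importlib

pkg = __import__("Buttons.Btn0")                 # returns the Buttons package
btn0 = importlib.import_module("Buttons.Btn0")   # returns the Btn0 submodule
print(pkg.__name__, btn0.__name__)               # Buttons Buttons.Btn0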

Related

Replace string in group of python files

I have a project which executes a large number of Python files in a folder. All of them have a line PATH="defaultstring". I want to make this more dynamic, i.e. replace "defaultstring" in all the Python files with some "otherstring" provided at run time. If no string is provided at run time, then "defaultstring" should remain the default value.
I am building the project using Gradle. One thing I can do is execute some Python script, say "main.py", before that group of files is executed. "main.py" could iterate over all the files, open them, and replace PATH="defaultstring" with "otherstring".
Is it possible to do this without changing all those files in the folder every time I run? The problem is that if I change "defaultstring" to "otherstring" in the first run and then give no runtime input in the second run, PATH="otherstring" would be used by default, but the default value I want to keep is "defaultstring". I could change "otherstring" back to "defaultstring" in that case, but I would prefer a method that does not iterate through all the files and change those lines.
Can we do this using fixture injection (which I don't know much about, so any other technique would also be helpful)?
Assuming that your files have some naming pattern that is unique and can be recognised you can do the following:
Find all your files using glob or something similar
Import the files dynamically
Change the variable using the global keyword
If you create a function that does these steps and run it as the first action in your main function, you should be able to change the variable PATH globally at runtime without touching the files stored on your hard drive (and consequently without changing the default value); see the sketch below.
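A minimal sketch of those steps; the "handler_*.py" pattern and the PATH name are placeholders for whatever your files actually use, and the files are assumed to live in a directory already on sys.path. Note that a literal global statement only affects the current module, so the sketch assigns the attribute on each imported module instead:
import glob
import importlib
import os

def override_path(new_path, pattern="handler_*.py"):
    for file_path in glob.glob(pattern):
        module_name = os.path.splitext(os.path.basename(file_path))[0]
        module = importlib.import_module(module_name)
        # Overwrite the module-level default in memory only; the file on disk
        # keeps PATH = "defaultstring".
        module.PATH = new_path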
Edit: While the above approach will hopefully work, it is nevertheless bad practice. It would be better to have a single file that contains such variables and that all other files import. Then you only have to change a single variable, and the intention is much clearer than in the version you originally intended.
Edit 2: You can structure your project to contain the following files with their respective functions and variables. This setup allows you to change the variable PATH at runtime when starting the script, as long as all files import a common file containing the path. The other option, changing a variable in each file, is more cumbersome, and I would only add it if really necessary.
globals.py
PATH = 'original/path'

def modify_path(new_path):
    global PATH
    PATH = new_path

some_file1.py
import globals

def get_path():
    return globals.PATH

some_file2.py
import globals

def get_path():
    return globals.PATH

main.py
import globals

if necessary:
    globals.modify_path('new/path')

Getting an absolute path to a module's resources from __file__ [duplicate]

I have a Python module which uses some resources in a subdirectory of the module directory. After searching around on stack overflow and finding related answers, I managed to direct the module to the resources by using something like
import os
os.path.join(os.path.dirname(__file__), 'fonts/myfont.ttf')
This works fine when I call the module from elsewhere, but it breaks when I call the module after changing the current working directory. The problem is that the contents of __file__ are a relative path, which doesn't take into account the fact that I changed the directory:
>>> mymodule.__file__
'mymodule/__init__.pyc'
>>> os.chdir('..')
>>> mymodule.__file__
'mymodule/__init__.pyc'
How can I encode the absolute path in __file__, or barring that, how can I access my resources in the module no matter what the current working directory is? Thanks!
Store the absolute path to the module directory at the very beginning of the module:
package_directory = os.path.dirname(os.path.abspath(__file__))
Afterwards, load your resources based on this package_directory:
font_file = os.path.join(package_directory, 'fonts', 'myfont.ttf')
Finally, do not modify process-wide resources like the current working directory. There is never a real need to change the working directory in a well-written program, so avoid os.chdir().
Building on lunaryorn's answer, I keep a function at the top of my modules in which I have to build multiple paths. This saves me repeated typing of joins.
import os

def package_path(*paths, package_directory=os.path.dirname(os.path.abspath(__file__))):
    return os.path.join(package_directory, *paths)
To build the path, call it like this:
font_file = package_path('fonts', 'myfont.ttf')
Or if you just need the package directory:
package_directory = package_path()

Configuration variables for a collection of scripts in Python

I have a collection of scripts written in Python. Each of them can be executed independently. However, most of the time they should be executed one after the other, so there is a MainScript.py which calls them in the appropriate order. Each script has some configurable variables (let's call them Root_Dir, Data_Dir and LinWinFlag). If this collection of scripts is moved to a different computer, or different data needs to be processed, these variable values need to be changed. As there are many scripts this duplication is annoying and error-prone. I would like to group all configuration variables into a single file.
I tried making Config.py which would contain them as per this thread, but import Config produces ImportError: No module named Config because they are not part of a package.
Then I tried relying on variable inheritance: define them once in MainScript.py which calls all the others. This works, but I realized that each script would not be able to run on its own. To solve this, I tried adding useGlobal=True in MainScript.py and in other files:
if (useGlobal is None or useGlobal == False):
    # define all variables
But this fails when the scripts are run standalone: NameError: name 'useGlobal' is not defined. The workaround is to define useGlobal and set it to False when running the scripts independently of MainScript.py. Is there a more elegant solution?
The idea is that Python wants to access files - including Config.py - primarily as part of a package.
The nice thing is that Python makes building packages really easy - you initialize one by creating an
__init__.py
file in each directory you want to act as a package, subpackage, subsubpackage, and so on.
So your import should go through once you have created this file.
If you have further questions, look at the excellent Python documentation.
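For example, a sketch with illustrative names, assuming Config.py sits in the same package directory as the scripts that use it:
# Layout assumed for illustration:
#   scripts/
#       __init__.py     # empty; marks the folder as a package
#       Config.py       # Root_Dir, Data_Dir, LinWinFlag defined here
#       MainScript.py
#       OtherScript.py
#
# In MainScript.py or OtherScript.py:
import Config

print(Config.Root_Dir, Config.Data_Dir, Config.LinWinFlag)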
The best way to do this is to use a configuration file placed in your home directory (~/.config/yourscript/config.json).
You can then load the file on start and provide default values if the file does not exist:
Example (config.py):
import json
import os

default_config = {
    "name": "volnt",
    "mail": "oh#hi.com"
}

def load_settings():
    settings = dict(default_config)  # copy so the defaults are not mutated
    try:
        config_path = os.path.expanduser("~/.config/yourscript/config.json")
        with open(config_path, "r") as config_file:
            loaded_config = json.loads(config_file.read())
        for key in loaded_config:
            settings[key] = loaded_config[key]
    except IOError:  # file does not exist
        pass
    return settings
For a configuration file it's a good idea to use JSON rather than Python, because it is easy to edit for the people using your scripts.
As suggested by cleros, the ConfigParser module seems to be the closest thing to what I wanted (a one-line statement in each file that sets up multiple variables).
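For instance, a minimal configparser-based setup could look like the following; the file name, section, and option names are just placeholders:
# settings.ini (placeholder name):
#   [paths]
#   root_dir = /data/project
#   data_dir = /data/project/input
#   linwinflag = linux
import configparser

config = configparser.ConfigParser()
config.read("settings.ini")

Root_Dir = config["paths"]["root_dir"]
Data_Dir = config["paths"]["data_dir"]
LinWinFlag = config["paths"]["linwinflag"]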

Get name of current script in Python

I'm trying to get the name of the Python script that is currently running.
I have a script called foo.py and I'd like to do something like this in order to get the script name:
print(Scriptname)
You can use __file__ to get the name of the current file. When used in the main module, this is the name of the script that was originally invoked.
If you want to omit the directory part (which might be present), you can use os.path.basename(__file__).
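For example, assuming this snippet lives in the script you run directly:
import os

print(__file__)                     # may include a directory component
print(os.path.basename(__file__))   # just the file name, e.g. foo.py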
import sys
print(sys.argv[0])
This will print foo.py for python foo.py, dir/foo.py for python dir/foo.py, etc. It's the first argument to python. (Note that after py2exe it would be foo.exe.)
For completeness' sake, I thought it would be worthwhile summarizing the various possible outcomes and supplying references for the exact behaviour of each.
The answer is composed of four sections:
A list of different approaches that return the full path to the currently executing script.
A caveat regarding handling of relative paths.
A recommendation regarding handling of symbolic links.
An account of a few methods that could be used to extract the actual file name, with or without its suffix, from the full file path.
Extracting the full file path
__file__ is the currently executing file, as detailed in the official documentation:
__file__ is the pathname of the file from which the module was loaded, if it was loaded from a file. The __file__ attribute may be missing for certain types of modules, such as C modules that are statically linked into the interpreter; for extension modules loaded dynamically from a shared library, it is the pathname of the shared library file.
From Python3.4 onwards, per issue 18416, __file__ is always an absolute path, unless the currently executing file is a script that has been executed directly (not via the interpreter with the -m command line option) using a relative path.
__main__.__file__ (requires importing __main__) simply accesses the aforementioned __file__ attribute of the main module, e.g. of the script that was invoked from the command line.
From Python3.9 onwards, per issue 20443, the __file__ attribute of the __main__ module became an absolute path, rather than a relative path.
sys.argv[0] (requires importing sys) is the script name that was invoked from the command line, and might be an absolute path, as detailed in the official documentation:
argv[0] is the script name (it is operating system dependent whether this is a full pathname or not). If the command was executed using the -c command line option to the interpreter, argv[0] is set to the string '-c'. If no script name was passed to the Python interpreter, argv[0] is the empty string.
As mentioned in another answer to this question, Python scripts that were converted into stand-alone executable programs via tools such as py2exe or PyInstaller might not display the desired result when using this approach (i.e. sys.argv[0] would hold the name of the executable rather than the name of the main Python file within that executable).
If none of the aforementioned options seem to work, probably due to an atypical execution process or an irregular import operation, the inspect module might prove useful, as suggested in another answer to this question:
import inspect
source_file_path = inspect.getfile(inspect.currentframe())
However, inspect.currentframe() returns None when running in an implementation without Python stack frame support, in which case this approach fails.
Note that inspect.getfile(...) is preferred over inspect.getsourcefile(...) because the latter raises a TypeError exception when it can determine only a binary file, not the corresponding source file (see also this answer to another question).
From Python3.6 onwards, and as detailed in another answer to this question, it's possible to install an external open source library, lib_programname, which is tailored to provide a complete solution to this problem.
This library iterates through all of the approaches listed above until a valid path is returned. If all of them fail, it raises an exception. It also tries to address various pitfalls, such as invocations via the pytest framework or the pydoc module.
import lib_programname
# this returns the fully resolved path to the launched python program
path_to_program = lib_programname.get_path_executed_script() # type: pathlib.Path
Handling relative paths
When dealing with an approach that happens to return a relative path, it might be tempting to invoke various path manipulation functions, such as os.path.abspath(...) or os.path.realpath(...) in order to extract the full or real path.
However, these methods rely on the current path in order to derive the full path. Thus, if a program first changes the current working directory, for example via os.chdir(...), and only then invokes these methods, they would return an incorrect path.
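A common workaround (a sketch, not tied to any specific approach above) is to resolve the path once at import time, before anything has had a chance to change the working directory:
import os

# Resolved while the original working directory is still intact.
SCRIPT_PATH = os.path.abspath(__file__)

def do_work():
    os.chdir("/tmp")     # later directory changes no longer matter
    print(SCRIPT_PATH)   # still the path captured at import time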
Handling symbolic links
If the current script is a symbolic link, then all of the above would return the path of the symbolic link rather than the path of the real file and os.path.realpath(...) should be invoked in order to extract the latter.
Further manipulations that extract the actual file name
os.path.basename(...) may be invoked on any of the above in order to extract the actual file name and os.path.splitext(...) may be invoked on the actual file name in order to truncate its suffix, as in os.path.splitext(os.path.basename(...)).
From Python 3.4 onwards, per PEP 428, the PurePath class of the pathlib module may be used as well on any of the above. Specifically, pathlib.PurePath(...).name extracts the actual file name and pathlib.PurePath(...).stem extracts the actual file name without its suffix.
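For example, with an illustrative path:
import os
import pathlib

full_path = "/home/user/project/foo.py"  # illustrative

print(os.path.basename(full_path))                        # foo.py
print(os.path.splitext(os.path.basename(full_path))[0])   # foo
print(pathlib.PurePath(full_path).name)                   # foo.py
print(pathlib.PurePath(full_path).stem)                   # foo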
Note that __file__ gives the file where this code resides, which may be an imported module rather than the main file being interpreted. To get the main file, the special __main__ module can be used:
import __main__ as main
print(main.__file__)
Note that __main__.__file__ works in Python 2.7 but not in 3.2, so use the import-as syntax as above to make it portable.
The above answers are good, but I found this method more convenient, building on the results above.
This gives the actual script file name, not a path.
import sys
import os
file_name = os.path.basename(sys.argv[0])
For modern Python versions (3.4+), Path(__file__).name should be more idiomatic. Also, Path(__file__).stem gives you the script name without the .py extension.
Try this:
print(__file__)
If you're doing an unusual import (e.g., it's an options file), try:
import inspect
print (inspect.getfile(inspect.currentframe()))
Note that this will return the absolute path to the file.
As of Python 3.5 you can simply do:
from pathlib import Path
Path(__file__).stem
See more here: https://docs.python.org/3.5/library/pathlib.html#pathlib.PurePath.stem
For example, I have a file under my user directory named test.py with this inside:
from pathlib import Path
print(Path(__file__).stem)
print(__file__)
running this outputs:
$ python3.6 test.py
test
test.py
We can try this to get the current script name without the extension.
import os
script_name = os.path.splitext(os.path.basename(__file__))[0]
You can do this without importing os or other libs.
If you want to get the path of current python script, use: __file__
If you want to get only the filename without .py extension, use this:
__file__.rsplit("/", 1)[1].split('.')[0]
Since the OP asked for the name of the current script file, I would prefer:
import os
import sys

os.path.split(sys.argv[0])[1]
All of these answers are great, but they have some problems you might not see at first glance.
Let's define what we want - we want the name of the script that was executed, not the name of the current module - so __file__ will only work if it is used in the executed script, not in an imported module.
sys.argv is also questionable - what if your program was called by pytest? Or the pydoc runner? Or by uwsgi?
And there is a third method of getting the script name that I haven't seen in the answers - you can inspect the stack.
Another problem is that you (or some other program) can tamper with sys.argv and __main__.__file__ - they might be present, or not; they might be valid, or not. At least you can check whether the script (the desired result) exists!
The library lib_programname does exactly that:
check if __main__ is present
check if __main__.__file__ is present
does __main__.__file__ give a valid result (does that script exist)?
if not: check sys.argv:
is there pytest, docrunner, etc. in sys.argv? --> if yes, ignore that
can we get a valid result here?
if not: inspect the stack and try to get the result from there
if the stack does not give a valid result either, throw an exception.
That way, my solution has worked so far with setup.py test, uwsgi, pytest, PyCharm pytest, PyCharm docrunner (doctest), dreampie, and Eclipse.
There is also a nice blog article about this problem by Doug Hellmann, "Determining the Name of a Process from Python".
BTW, it will change again in Python 3.9: the __file__ attribute of the __main__ module becomes an absolute path rather than a relative path. These paths now remain valid after the current directory is changed by os.chdir().
So I would rather maintain one small module than comb through my codebase to see whether it needs to be changed somewhere...
Disclaimer: I'm the author of the lib_programname library.
If you get the script path in a base class, use this code; the subclass will get its own script path correctly:
sys.modules[self.__module__].__file__
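A small sketch of why this works (the class names are made up): self.__module__ names the module in which the instance's class was defined, so a method on the base class reports the subclass's file rather than the base class's own:
import sys

class ScriptBase:
    def script_path(self):
        # self.__module__ is the module where type(self) was defined, so a
        # subclass defined in another script reports that script's file.
        return sys.modules[self.__module__].__file__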

Is this the correct way to import Python scripts residing in arbitrary folders?

This snippet is from an earlier answer here on SO. It is about a year old (and the answer was not accepted). I am new to Python and I am finding the system path a real pain. I have a few functions written in scripts in different directories, and I would like to be able to import them into new projects without having to jump through hoops.
This is the snippet:
def import_path(fullpath):
    """ Import a file with full path specification. Allows one to
    import from anywhere, something __import__ does not do.
    """
    path, filename = os.path.split(fullpath)
    filename, ext = os.path.splitext(filename)
    sys.path.append(path)
    module = __import__(filename)
    reload(module)  # Might be out of date
    del sys.path[-1]
    return module
It's from here:
How to do relative imports in Python?
I would like some feedback as to whether I can use it or not - and if there are any undesirable side effects that may not be obvious to a newbie.
I intend to use it something like this:
script1 = import_path("/home/pydev/path1/script1.py")
script1.func1()
etc
Is it 'safe' to use the function in the way I intend to?
The "official" and fully safe approach is the imp module of the standard Python library.
Use imp.find_module to find the module in your precisely-specified list of acceptable directories -- it returns a 3-tuple (file, pathname, description) -- if unsuccessful, file is actually None (but it can also raise ImportError, so you should use a try/except for that as well as checking whether file is None).
If the search is successful, call imp.load_module (in a try/finally to make sure you close the file!) with the above three arguments after the first one which must be the same name you passed to find_module -- it returns the module object (phew;-).
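For reference, imp has since been deprecated (and removed in Python 3.12); a rough sketch of the same idea using the newer importlib machinery follows, where the function name and example path are just illustrative:
import importlib.util

def import_from_path(module_name, file_path):
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    if spec is None or spec.loader is None:
        raise ImportError("cannot load %s from %s" % (module_name, file_path))
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the module's code
    return module

# e.g. script1 = import_from_path("script1", "/home/pydev/path1/script1.py")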
As mentioned, please consider thread safety if appropriate. I prefer something closer to a solution posted in a similar post. The main differences below: the use of insert to specify the priority of the import, correct restoration of sys.path using try...finally, and setting the module in the global namespace.
# inspired by Alex Martelli's solution to
# http://stackoverflow.com/questions/1096216/override-namespace-in-python/1096247#1096247
def import_from_absolute_path(fullpath, global_name=None):
    """Dynamic script import using full path."""
    import os
    import sys

    script_dir, filename = os.path.split(fullpath)
    script, ext = os.path.splitext(filename)

    sys.path.insert(0, script_dir)
    try:
        module = __import__(script)
        if global_name is None:
            global_name = script
        globals()[global_name] = module
        sys.modules[global_name] = module
    finally:
        del sys.path[0]
It does feel like a bit of a hack, but at the moment, I can't think of any unintended side effects that are likely to occur, at least not as long as you're just using this for your own scripts. Basically what it does is temporarily add the parent directory of the specified file (in your example, /home/pydev/path1/) to the list of paths that Python checks when it's looking for a module to import.
The only risk I can think of right now would arise in a multithreaded environment, where two or more threads (or processes) are running this function simultaneously. If thread A wants to import module A from path dirA/A.py, and thread B wants to import module B from path dirB/B.py, you'd wind up with both dirA and dirB in sys.path for a short time. And if there is a file named B.py in dirA, it's possible that thread B will find that (dirA/B.py) instead of the file it's looking for (dirB/B.py), thus importing the wrong module. For this reason, I wouldn't use it in production code, or code that you're going to distribute to other people (at least not without warning them that this hack is in here!). In a situation like that, you could write a more complex function that allows you to specify the file to import without messing with the standard set of paths. (That's what mod_python does, for example)
I would be worried that your script name might correspond with a module that shows up earlier in the path. To dispel this fear, I would fully replace the path with a new list containing just the directory containing the module, then put it back once the import has completed. Also, you should wrap this in some sort of lock so that multiple threads trying to do the same thing don't interfere with each other.
