I am running a web app on python2.7 with mod_wsgi/apache. Everything is fine but I can't find any .pyc files. Do they not get generated with mod_wsgi?
By default, Apache probably doesn't have write access to your Django app directory, which is a good thing security-wise.
Python will byte-compile your code once per Apache restart and then cache it in memory.
Since mod_wsgi uses long-lived processes, that is fine.
Note: if you really want those .pyc files, give your Apache user write access to the source directory.
Note 2: this can create a lot of confusion if you use manage.py to start a test instance from a source tree shared with Apache, as that run will create the .pyc files as root, and they will stick around when you then run Apache, even after a source code change.
When a module is imported for the first time, or when the source is more recent than the current compiled file, a .pyc file containing the compiled code will usually be created in the same directory as the .py file.
So if you are not importing the module then no files will be created.
Besides this, a .pyc file may not be created if there are permission problems with the directory. This can happen, for example, if you develop as one user but run as another, such as when testing with a web server. Creation of a .pyc file is automatic if you're importing a module and Python has the ability (permissions, free space, etc.) to write the compiled module back to the directory.
Note - Running a script is not considered an import and no .pyc will be created.
If you need to create a .pyc file for a module that is not imported, you can use the py_compile and compileall modules.
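For example, a minimal sketch using both modules (the file and directory names are placeholders):

import py_compile
import compileall

# Compile a single file; the .pyc lands next to the source on Python 2,
# or under __pycache__/ on Python 3 (PEP 3147).
py_compile.compile("myscript.py")

# Recursively byte-compile every .py file under a directory tree.
compileall.compile_dir("myproject/")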
Related
We have a desktop app written in Python that allows user plugins for customization. There is a folder for the user to add plugins into, and upon loading, the app looks there for plugins. Each plugin has a manifest that lists all the files it needs to work, and these are imported using "importlib". We are also packaging this Python app as a .exe using PyInstaller, which bundles it with a Python runtime so users who don't already have Python installed can use it.
We have noticed that the process of looking for and loading plugins makes the initial load of the software very slow, far beyond what it was when we didn't have a plugin system and had all files manually spelled out and imported within the code. We don't know if it's importlib itself or an interaction with Pyinstaller that's creating this slowness, but we have noticed that after the .exe has been launched once and closed, subsequent re-launches without restarting the computer are much faster. However, after restarting the computer, it goes back to being super slow. We suspect that the first load of the plugins by importlib triggers their compilation into .pyc files, which are used from then on. However, once the computer is restarted, although the .pyc files remain, we suspect that the .py files are for some reason re-compiled (despite not having changed).
Is there a way to avoid this, so that the overhead we see on initial plugin load is restricted to only the very first time the .exe is run after copying a new plugin to the plugins folder?
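If stale or missing bytecode really is the cause, one workaround would be to byte-compile each plugin once, right after it is copied into the folder. A minimal sketch, where plugins_dir is a placeholder for wherever the plugins live:

import compileall

plugins_dir = "plugins/"  # placeholder: the user plugin folder

# Compile every plugin source once, so later launches can load cached
# bytecode instead of re-compiling on each cold start.
compileall.compile_dir(plugins_dir, quiet=1)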
This answer tells me that a .pyc file gets created when a .py file is run, which I understand saves loading time when re-run. Which makes me wonder what the point of the .py file is after the .pyc is created.
When backing up my code, or sharing it, I don't want to include redundant or extraneous files. Which filetype should I focus on?
Side question: I have one script that calls another. After running them, the called script got a .pyc file written, but the master script that does the calling did not. Why would that be?
Python .pyc files are generated when a module is imported, not when a top level script is run. I'm not sure what you mean by calling, but if you ran your master script from the command line and it imported the other script, then only the imported one gets a .pyc.
As for distributing .pyc files, they are minor-version sensitive. If you bundle your own Python, or distribute separate files for each Python version you support, then maybe. But best practice is to distribute the .py files.
Python's script and module rules seem a bit odd until you consider its installation model. A common installation model is that executables are installed somewhere on the system's PATH and shared libraries are installed somewhere in a library path.
Python's setup.py does the same thing. Top level scripts go on the PATH, but modules and packages go in a library path. For instance, on my system, pdb3 (a top level script) is at /usr/bin/pdb3 and os (an imported module) is at /usr/lib/python3.4/os.py. Suppose Python compiled pdb3 to pdb3.pyc. Well, I'd still invoke pdb3, so the .pyc would be useless. So why clutter the path?
It's common for installs to run as root or administrator, so you have write access to those paths during install, but you wouldn't have write access to them later as a regular user. You can have setup.py generate .pyc files during install. You get the right .pyc files for whatever Python you happen to have, and since you are running as root/admin during install, you still have access to the directories. Trying to build .pyc files later is a problem, because a regular user doesn't have access to the directories.
So, best practice is to distribute .py files and have setup.py build the .pyc during install.
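As a sketch of the manual equivalent (a distutils/setuptools install byte-compiles by default; the path below is illustrative):

import compileall

# Run as root/admin right after copying the package into place, so the
# .pyc files match the installed interpreter and land in a directory a
# regular user can't write to later.
compileall.compile_dir("/usr/lib/python3.4/site-packages/mypackage")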
If you simply want to run your Python script, all you really need is the .pyc, which is the bytecode generated from your source code. See here for details on running a .pyc file. I will warn that some of the details are a bit twisty.
However, I recommend including your source code and leaving out your .pyc files, as they are generated automatically by the Python interpreter. Besides, if you or another person want to revise or revisit the source code at a later point, you will need the .py files. Furthermore, it is usually best practice to just include your source code.
After 3 intensive hours, I was testing my script in the terminal. However, my editor messed up and overwrote my script while it was still executing in the terminal. I didn't terminate the running script, so I was wondering: does the Python interpreter keep the currently running file in a temporary folder or somewhere else, so that I can recover my script?
Python tries to cache your .pyc files. How that's done has changed over time (see PEP 3147 -- PYC Repository Directories). Top level scripts are not cached, but imported ones are. So you may not have one.
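On Python 3.4+ you can ask where the cached file for a given source would live:

import importlib.util

# Maps a source path to its PEP 3147 cache location, e.g.
# __pycache__/mymodule.cpython-34.pyc (the tag depends on your interpreter).
print(importlib.util.cache_from_source("mymodule.py"))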
.pyc files are compiled byte code, so it's not just a question of renaming one to .py, and you can't figure them out just by looking at them. There are decompilers out there, like the one referenced here:
Decompiling .pyc files.
Personally, I create Mercurial repos for my scripts and check them in frequently, because I've made a similar mistake a time or two. Git, SVN, etc. are other popular tools for maintaining repos.
Depending on your operating system and editor, you may have a copy in Trash or even saved by the editor. You may also be able to "roll back" the file system.
If you're running Linux, you may still be able to find a handle to the open file in the /proc/ directory, as long as the process is still running. That handle will keep the file from being deleted. For details, see: https://superuser.com/questions/283102/how-to-recover-deleted-file-if-it-is-still-opened-by-some-process
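A minimal sketch of the recovery (Linux only; the PID and fd number are placeholders you would look up with ls -l /proc/<pid>/fd):

import shutil

# 1234 is the running interpreter's PID and 3 the descriptor that still
# points at the old file; both are hypothetical values for illustration.
shutil.copyfile("/proc/1234/fd/3", "recovered_script.py")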
I have a project that uses COM and Python scripting. Earlier we were using 'ComTypes'; now we use Win32Com. To keep backward compatibility I need to change the names of some of the interfaces. So here is what I do:
1) Use the 'makepy' utility to create a Python file from my .tlb file; this creates a .py file in the ..\Lib\site-packages\win32com\gen_py folder.
2) I change the name of the interface I am interested in, in the created Python file.
3) When I load my application, a corresponding .pyc file gets created and everything works fine.
Now I don't want to repeat this exercise on every machine where my software is deployed. So through the installer I copy the .py and .pyc files to ..\Lib\site-packages\win32com\gen_py
But when my application is launched, it does not recognize the changed interface. It behaves as if there is no .py or .pyc file at all. All the other interfaces work, but the renamed one does not. It seems to dynamically create compiled Python behind the scenes, ignoring the .pyc file.
If I delete the .dat file and the .pyc file at those locations, it does create the .pyc file again when the application is launched. However, it's not used, because my changed interface still does not work.
If I follow steps 1, 2 and 3, everything works again! I am puzzled.
Please help.
OK, I found out what the problem is. When you create a Python file using the makepy tool, it also updates the dicts.dat file in the gen_py directory. So you need to copy that file over to the other machines as well.
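So the installer has to ship all three pieces. A sketch (the module name makepy generates is GUID-based, shown here as a placeholder, and the target path depends on the Python installation):

import shutil

gen_py = r"C:\Python27\Lib\site-packages\win32com\gen_py"  # illustrative

# The generated module, its compiled form, and the index makepy updated.
for name in ("MyTypeLibGuid.py", "MyTypeLibGuid.pyc", "dicts.dat"):
    shutil.copy(name, gen_py)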
I understand that ".pyc" files are compiled versions of the plain-text ".py" files, created at runtime to make programs run faster. However I have observed a few things:
Upon modification of ".py" files, program behavior changes. This indicates that the ".py" files are compiled, or at least go through some sort of hashing process or timestamp comparison, in order to tell whether or not they should be re-compiled.
Upon deleting all ".pyc" files (rm *.pyc), program behavior sometimes changes. This would indicate that they are not always recompiled when the ".py" files are updated.
Questions:
How do they decide when to be compiled?
Is there a way to ensure that they have stricter checking during development?
The .pyc files are created (and possibly overwritten) only when that Python file is imported by some other script. On import, Python checks whether the .pyc file's internal timestamp matches the corresponding .py file's modification time. If it does, it loads the .pyc; if it doesn't, or if the .pyc does not yet exist, Python compiles the .py file into a .pyc and loads it.
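You can inspect the timestamp Python compares. A sketch for Python 2-style .pyc files, where the header is a 4-byte magic number followed by a 4-byte little-endian source mtime (Python 3 headers differ; the file names are placeholders):

import os
import struct
import time

with open("mymodule.pyc", "rb") as f:
    f.read(4)                                   # magic number
    mtime = struct.unpack("<I", f.read(4))[0]   # embedded source mtime

print("mtime recorded in the .pyc:", time.ctime(mtime))
print("actual source mtime:       ", time.ctime(os.path.getmtime("mymodule.py")))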
What do you mean by "stricter checking"?
.pyc files are generated whenever the corresponding code files are imported, and updated if the corresponding code files have changed. If the .pyc files are deleted, they will be automatically regenerated. However, they are not automatically deleted when the corresponding code files are deleted.
This can cause some really fun bugs during file-level refactors.
First of all, you can end up pushing code that only works on your machine and on no one else's. If you have dangling references to files you deleted, these will still work locally unless you manually delete the relevant .pyc files, because .pyc files can satisfy imports. This is compounded by the fact that a properly configured version control system will only push .py files to the central repository, not .pyc files, meaning that your code can pass the "import test" (does everything import okay?) just fine and still not work on anyone else's computer.
Second, you can have some pretty terrible bugs if you turn packages into modules. When you convert a package (a folder with an __init__.py file) into a module (a .py file), the .pyc files that once represented that package remain. In particular, the __init__.pyc remains. So, if you have the package foo with some code that doesn't matter, then later delete that package and create a file foo.py with some function def bar(): pass and run:
from foo import bar
you get:
ImportError: cannot import name bar
because python is still using the old .pyc files from the foo package, none of which define bar. This can be especially problematic on a web server, where totally functioning code can break because of .pyc files.
As a result of both of these reasons (and possibly others), your deployment code and testing code should delete .pyc files, such as with the following line of bash:
find . -name '*.pyc' -delete
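or, as a Python sketch that removes only orphaned .pyc files (ones whose .py source no longer exists, assuming the Python 2 layout where the .pyc sits next to its .py):

import os

for dirpath, dirnames, filenames in os.walk("."):
    for name in filenames:
        if name.endswith(".pyc") and name[:-1] not in filenames:
            # No matching .py alongside: a leftover from a deleted module.
            os.remove(os.path.join(dirpath, name))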
Also, as of Python 2.6, you can run Python with the -B flag so that it does not write .pyc files. See How to avoid .pyc files? for more details.
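The same switch is also available from code (Python 2.6+); it only affects modules imported after it is set:

import sys

# Equivalent to running with -B or setting PYTHONDONTWRITEBYTECODE.
sys.dont_write_bytecode = True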
See also: How do I remove all .pyc files from a project?