I'm trying to change some code in a production server, but when I test the new code it's not being executed. I think the issue is that when I upload the updated source file, it is not being compiled into a .pyc file.
-rw-r--r-- 1 ubuntu ubuntu 47872 Jul 13 04:39 admin_email.py
-rw-r--r-- 1 root root 48212 Feb 10 03:12 admin_email.pyc
As you can see from the timestamps above, the .pyc file has not been updated.
Am I correct in assuming the reason the changes in admin_email.py are not being applied is because it is not being compiled? If so, can someone please offer a suggestion on how to get it to do so? Could the issue be that I need to restart the server?
Restarting the application probably does the trick. Since you are using gunicorn/nginx, I assume you are also using supervisord (if this is not the case please say so, and I will update my answer accordingly). In that case you can run sudo supervisorctl restart <app_name> to restart the application.
Another possible issue, as brainless coder and Marius also pointed out, is that the application was apparently run as root at least once; you should avoid this. Remove the .pyc files and change the user the application runs under.
The .pyc file is owned by the root user for some reason, and not writeable by other users. Your Python process probably runs as non-root, and can't create new .pyc files.
Either delete the old .pyc files (make a backup first), or chown them to the same user as the process that runs the .py files, and it will probably work.
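For example, a minimal sketch of that cleanup (the application path below is a placeholder, and ubuntu is assumed to be the user the process runs as — adjust both to your setup):

```shell
# Back up the stale bytecode first, then remove it so the interpreter
# can regenerate it under the right user. /srv/app is a placeholder path.
cd /srv/app
mkdir -p /tmp/pyc-backup
find . -name '*.pyc' -exec cp --parents {} /tmp/pyc-backup \;
find . -name '*.pyc' -delete

# Alternatively, keep the files but hand them to the app's user:
# sudo chown ubuntu:ubuntu admin_email.pyc
```

Either way, the next import will regenerate the bytecode with the correct ownership.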
You can delete it if needed, but it's really not necessary, and I don't see how that would be the reason your code isn't executing.
.pyc files contain the bytecode that the Python interpreter compiles the source to, so as long as you have access to the .py file it's safe to delete them.
Are you running the script through shell? Did you give the file executable (+x) permission before doing so?
Related
Had the weirdest issue at work. Cannot reproduce it now. Looking for anyone with similar experiences.
We have a python script, which creates some install-packages for our embedded platform. The python script invokes a subshell with fakeroot. Inside the fakeroot another python script is run, which in turn calls a bash script. Inside the bottom bash script a number of symlinks are created.
Once in a while we would get some errors like "Tar: Too many levels of symbolic links".
After adding some debugging, we saw that the symlinks were corrupted from time to time. Instead of "ln -s /foo/bar/bas /some/other/path" creating a proper symlink, we would instead see this in the filesystem:
-r-------- 1 root root 38 Oct 20 22:55 bas
A file with no permissions, no link-flag set and only the base-name of the link preserved.
After exiting the fakeroot, the symlink file would have "fixed" itself. It looked ok in the filesystem, with proper permissions and path. Only inside the fakeroot was the file "wrong".
This happened on Linux Mint 20.2 (fakeroot 1.24). I managed to reproduce it inside a Docker image for a while. The next day, after rebuilding the Docker image, I could no longer reproduce the error.
Am I going crazy here? What is going on? Timing issues? delays? bugs in old fakeroot?
I have a Python 3 file that runs an SVN deployment. Basically, I run "python3 deploy.py update" and the following things happen:
Close site
Backup Ignore but secure files
SVN revert -R .
SVN update
Trigger tasks
Open site
That all sounds simple and logical, but one thought keeps going around my head: "SVN is writing files, including the Python files and submodule helpers that trigger the SVN subprocess."
I understand that Python files are read and processed once, and that Python will only pick up changes through some tricky reload. And I understand that if SVN changes the Python source, the update would only take effect on the next run.
But the question is: should I keep this structure, or move the file to the root and run SVN from there, to be on the safe side?
This applies to Git or any Python changes.
From what I know, it is safe to change a Python (i.e. .py) file while Python is running, once the .pyc file has been created by Python (i.e. your situation). You can even remove the .py file and run the .pyc just fine.
On the other hand, SVN revert -R . is dangerous here, as it would attempt to remove the .pyc files, so it could either break your running Python process or fail by itself.
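A quick way to see this behaviour, sketched with a throwaway module in a temp directory (nothing here is specific to SVN): an already-imported module keeps serving its old code until it is explicitly reloaded.

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway module on disk and make it importable.
tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)
path = os.path.join(tmp, "mymod.py")
with open(path, "w") as f:
    f.write("VERSION = 1\n")

import mymod
print(mymod.VERSION)            # 1

# "Deploy" a new version of the source file.
with open(path, "w") as f:
    f.write("VERSION = 2  # updated\n")

# A plain re-import is a no-op: Python serves the cached module object.
import mymod
print(mymod.VERSION)            # still 1

# Only an explicit reload picks up the new source.
importlib.reload(mymod)
print(mymod.VERSION)            # 2
```

So an SVN update that overwrites deploy.py mid-run leaves the running process on the old code, exactly as described above.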
After 3 intensive hours, I was testing my script in the terminal. However, my editor messed up and overwrote my script while it was still being executed in the terminal. I didn't terminate the running script, so I was wondering: does the Python interpreter keep the currently running file in a temporary folder or somewhere else, so that I can recover my script?
Python tries to cache your .pyc files. How that's done has changed over time (see PEP 3147 -- PYC Repository Directories). Top-level scripts are not cached, but imported modules are. So you may not have one.
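On Python 3 you can ask the interpreter where the cached bytecode for a given source file would live (the file name here is just an example):

```python
import importlib.util

# Where would the interpreter cache the bytecode for this source file?
# The source file does not need to exist for this query.
cache_path = importlib.util.cache_from_source("myscript.py")
print(cache_path)
# e.g. __pycache__/myscript.cpython-311.pyc (the tag depends on your interpreter)
```

If that file exists, it is a candidate for the recovery-by-decompilation route below.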
.pyc files are compiled bytecode, so it's not just a question of renaming one to .py, and you can't figure them out just by looking at them. There are decompilers out there, like the one referenced here:
Decompiling .pyc files.
Personally, I create Mercurial repos for my scripts and check them in frequently... because I've made a similar mistake a time or two. Git, SVN, etc. are other popular tools for maintaining repos.
Depending on your operating system and editor, you may have a copy in Trash or even saved by the editor. You may also be able to "roll back" the file system.
If you're running Linux, you may still be able to find a handle to open files in the /proc/ directory if the process is still running. This handle will keep the file from being deleted. Details see: https://superuser.com/questions/283102/how-to-recover-deleted-file-if-it-is-still-opened-by-some-process
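A minimal sketch of that recovery (Linux only; the /proc filesystem is assumed). The superuser answer is about reading another process's handle, but the mechanism is easiest to demonstrate within a single process:

```python
import os
import tempfile

# Create a script, keep a file descriptor open, then "accidentally" delete it.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "precious.py")
with open(path, "w") as f:
    f.write("print('hello')\n")

fd = os.open(path, os.O_RDONLY)
os.unlink(path)                 # the script is now gone from the directory

# ...but the open handle keeps the data reachable through /proc.
handle = "/proc/%d/fd/%d" % (os.getpid(), fd)
with open(handle) as still_there:
    recovered = still_there.read()

print(recovered)                # print('hello')
os.close(fd)
```

For another process, you would find the PID (e.g. with ps), look through /proc/PID/fd/ for the deleted file, and cp the handle's contents out before the process exits.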
I am running a web app on Python 2.7 with mod_wsgi/Apache. Everything is fine, but I can't find any .pyc files. Do they not get generated with mod_wsgi?
By default, Apache probably doesn't have write access to your Django app directory, which is a good thing security-wise.
Python will byte-compile your code once per Apache restart and then cache it in memory.
As it is a long-lived process, that is fine.
Note: if you really, really want those .pyc files, give your Apache user write access to the source directory.
Note 2: this can create a lot of confusion if you start a test instance with manage.py as root in a directory shared with Apache, since that will create the .pyc files as root, and Apache will keep using them despite a source code change.
When a module is imported for the first time, or when the source is more recent than the current compiled file, a .pyc file containing the compiled code will usually be created in the same directory as the .py file.
So if you are not importing the module then no files will be created.
Besides this, a .pyc file may not be created if there are permission problems with the directory. This can happen, for example, if you develop as one user but run as another, such as when testing with a web server. Creation of a .pyc file is automatic if you're importing a module and Python has the ability (permissions, free space, etc.) to write the compiled module back to the directory.
Note - Running a script is not considered an import and no .pyc will be created.
If you need to create a .pyc file for a module that is not imported, you can use the py_compile and compileall modules.
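A minimal sketch of both modules, using a throwaway script in a temp directory:

```python
import compileall
import os
import py_compile
import tempfile

tmp = tempfile.mkdtemp()
script = os.path.join(tmp, "deploy.py")
with open(script, "w") as f:
    f.write("print('deploying')\n")

# Compile a single file; returns the path of the generated .pyc.
pyc = py_compile.compile(script)
print(pyc)

# Or byte-compile a whole directory tree in one go.
compileall.compile_dir(tmp, quiet=1)
```

Running compileall as the same user the web server runs under avoids the root-owned-.pyc problem discussed above.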
I was developing in Python using Google App Engine, and yesterday it stopped running the current version of the script.
Instead of executing the most recent version it seems to run the previously pre-compiled .pyc even if the .py source was changed.
Error messages actually quote the correct line from the most current source, except when the position of the line has changed; then they quote whatever line now sits where the error occurred previously.
Deleting the .pyc files causes them to be recreated from the current version, but deleting all .pycs is a poor workaround for now.
How can I get to the root cause of the problem?
Did you check your system clock? I believe Python determines whether to use the .pyc or the .py based on timestamps. If your system clock got pushed back, it would see the .pyc files as newer until the clock caught up to the time they were last built.
Are you editing the .py files on a different system than the one where they are being compiled?
The compiler recompiles a .py file if its modification date is newer than the modification date of the .pyc file.
The fact that the .pyc file is being picked up points to your .py file having an older modification date. This is only possible if your .py file is being modified on a different system and then copied to the one where it is used, and the editing system's clock is set behind the runtime system's clock.
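You can check for this directly. The sketch below is a simplified stand-in for the interpreter's actual check (which embeds the source's mtime in the .pyc header rather than comparing file mtimes), but it captures the clock-skew failure mode:

```python
import os

def pyc_is_stale(py_path, pyc_path):
    """Simplified mimic of the interpreter's freshness check: the cached
    file is reused only if it is at least as new as the source."""
    if not os.path.exists(pyc_path):
        return True
    return os.path.getmtime(py_path) > os.path.getmtime(pyc_path)

# With a skewed clock, an edited .py copied from another machine can
# still carry an *older* mtime than the .pyc, so the stale bytecode wins.
```

Comparing the two mtimes with ls --full-time (or stat) on both files will show whether this is what's happening to you.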
The following steps solved the issue temporarily:
Delete GoogleAppEngineLauncher from your Applications folder.
Rename the file ~/Library/Application Support/GoogleAppEngineLauncher/Projects.plist (e.g. Projects.plist.backup)
Rename the file ~/Library/Preferences/com.google.GoogleAppEngineLauncher.plist (e.g. com.google.GoogleAppEngineLauncher.plist.backup)
Download and install Google App Engine Launcher again.
Use "File", "Add existing application…" to add your projects again, do not forget to set any flags you had set before.
Alternatively it might even work starting GAEL once, closing it and putting your backed up preference files back into place as to avoid having to reconfigure.
Edit: Turns out that fixes it… temporarily. Not exactly a very easy issue to debug.
Weirdly enough it works when running the appserver from the command line, such as
dev_appserver.py testproject/ -p 8082 --debug