Django source code won't update on server - python

I have a Django website running, and any updates I make to the source code don't take effect.
(The reason I'm changing the file is that one line of code is generating an error. What's weird is that I commented out the offending line, but the old code still runs and still raises the error. django.log still shows that line causing the error, but it also shows the line commented out now. So the error log reflects my new source code, but the application itself isn't executing it.)
I am very new to Django, so I don't really know what's going on here (it's not my website; I got thrown onto this project for work).
Researching this, I have already tried restarting Apache:
$ sudo apachectl restart
$ sudo service apache2 restart
and I've also tried to touch the wsgi.py file:
$ touch wsgi.py
and I have even deleted the .pyc file. Nothing has worked and the old line of code is still executing, even though the logs show it commented out.
Not sure where else to check or what else I'm missing.

Whichever service you are using, do a full stop and a full start (i.e., not just restart).
sudo service apache2 stop
sudo service apache2 start
If you are using uwsgi or gunicorn, you will have to do the same for them. Some init scripts, when issuing restart, do not restart the master worker process, which can leave a cached, compiled version of your file (with the old code) in memory.
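A sketch of the equivalent full stop/start for gunicorn and uwsgi (the service names here are assumptions; yours may be named differently):

```shell
# Full stop/start rather than restart, so the master process is replaced too.
sudo service gunicorn stop
sudo service gunicorn start

sudo service uwsgi stop
sudo service uwsgi start

# If the workers are not managed as a service, killing the master process
# and relaunching has the same effect (module path is illustrative):
# pkill -f gunicorn && gunicorn myproject.wsgi
```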

With the help of @2ps I was able to figure out my problem. When I tried to stop Apache, the website was still up.
I realized there's another IP address for the website, so I'm guessing the first one must redirect to the second one?
Either way, I reopened SSH in the other IP address, restarted Apache and the source code updated immediately!
UPDATE:
As per @VidyaSagar's request, I'm providing more info, as it seems to be a weird fluke with Django. My original problem was that a certain line of code was causing an error. I commented out this line, deleted the .pyc file, and restarted Apache. Another error occurred (as expected, given the code). So I then un-commented that line back to how it was previously, again deleted the .pyc, restarted Apache, and the system worked like normal. It seems Django just wanted a fresh re-compile of the file?
Django version: 1.7.4
Traceback of django.log
ERROR Internal Server Error: /upload/
Traceback (most recent call last):
File "/home/company/app/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/company/app/app/geo_app/views.py", line 306, in upload
shutil.make_archive(kml_dir, 'zip', root_dir=kml_dir)
File "/usr/lib/python2.7/shutil.py", line 521, in make_archive
save_cwd = os.getcwd()
OSError: [Errno 2] No such file or directory
ERROR Internal Server Error: /upload/
Traceback (most recent call last):
File "/home/company/app/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/company/app/app/geo_app/views.py", line 306, in upload
# shutil.make_archive(kml_dir, 'zip', root_dir=kml_dir)
File "/usr/lib/python2.7/shutil.py", line 521, in make_archive
save_cwd = os.getcwd()
OSError: [Errno 2] No such file or directory
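As a side note on the OSError itself (an observation about the standard library, not something from the thread): shutil.make_archive() calls os.getcwd() before archiving, and that call fails with Errno 2 if the process's working directory has been deleted out from under it. A minimal sketch:

```python
import os
import tempfile

# Reproduce OSError: [Errno 2] from os.getcwd() by deleting the
# process's current working directory.
doomed = tempfile.mkdtemp()
os.chdir(doomed)
os.rmdir(doomed)                  # cwd no longer exists on disk

try:
    os.getcwd()                   # the same call make_archive() makes first
except OSError as e:
    print("getcwd failed, errno", e.errno)

os.chdir(tempfile.gettempdir())   # recover: move somewhere that exists
```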

For those of you with cPanel, if you go under "Setup Python App" and click "Restart" it should update. Saved me about 5 times.

Related

'socket' object has no attribute 'sendfile' while sending a file in flask + gunicorn + nginx + supervisor setup

Using Flask, I'm trying to send a file to the user when they click a button in the UI, via the send_from_directory function. It used to work fine. I wanted to change the repo, and since changing it I'm no longer able to download the file. Looking at the supervisor log, I see this:
[9617] [ERROR] Error handling request
Traceback (most recent call last):
File "path_to_file/venv/lib/python3.4/site-packages/gunicorn/workers/sync.py", line 182, in handle_request
resp.write_file(respiter)
File "path_to_file/venv/lib/python3.4/site-packages/gunicorn/http/wsgi.py", line 385, in write_file
if not self.sendfile(respiter):
File "path_to_file/venv/lib/python3.4/site-packages/gunicorn/http/wsgi.py", line 375, in sendfile
self.sock.sendfile(respiter.filelike, count=nbytes)
AttributeError: 'socket' object has no attribute 'sendfile'
In the same repo, this works fine locally. But when running on the remote server with the gunicorn + supervisor + nginx setup, I get the above error message. The application log still shows a 200 OK response. I've spent a lot of time trying to fix this, without success.
Also, the notable difference between the working previous repo and the non-working current repo is the Python version. Previous: Python 2.7; current: Python 3.4.
For me, when a script works locally but not when hosted, it's usually one (or more) of these possibilities:
Location / path to the files is different
older version of python
older version of the library
Pointing the virtual env to Python 3.6 and upgrading all the relevant libraries including Flask resolved the issue.
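A quick way to confirm that version difference (an illustration, not from the thread): socket.socket.sendfile() only exists from Python 3.5 onward, which matches the AttributeError on 3.4:

```python
import socket
import sys

# socket.socket.sendfile() was added in Python 3.5; on 3.4 the attribute
# simply does not exist, which is the AttributeError gunicorn hit above.
if hasattr(socket.socket, "sendfile"):
    print("sendfile available on Python %d.%d" % sys.version_info[:2])
else:
    print("sendfile missing - running on Python < 3.5")
```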

How to fix the permission denied error? (happens in any IDE)

I am trying to run a Python file, but I got this error.
Traceback (most recent call last):
File "modeltraining.py", line 29, in <module>
sr,audio = read(source + path)
File "C:\Users\RAAM COMPUTERS\Anaconda3\lib\site-packages\scipy\io\wavfile.py", line 233, in read
fid = open(filename, 'rb')
PermissionError: [Errno 13] Permission denied: 'development_set/'
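One detail worth noting (an observation, not from the thread): the path in the message, 'development_set/', ends with a slash, so read() was handed a directory rather than a file. Opening a directory raises an OSError (PermissionError with Errno 13 on Windows, IsADirectoryError on Linux). A minimal sketch:

```python
import os
import tempfile

# Opening a directory as if it were a file raises an OSError:
# PermissionError ([Errno 13]) on Windows, IsADirectoryError on Linux.
d = tempfile.mkdtemp()
try:
    open(d, "rb")
except OSError as e:
    print("cannot open a directory:", type(e).__name__)

# A simple guard before handing a path to a reader:
wav_path = os.path.join(d, "sample.wav")   # hypothetical file name
print(os.path.isfile(wav_path))            # False - nothing written yet
```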
Run Spyder as administrator
Right click --> run as administrator
Or maybe you can change the permissions of the directory you want to save to, so that all users have read and write permissions.
After some time relaunching Anaconda and Spyder, I got an alert from Avast antivirus about protecting me from a malicious file, the one I was trying to create.
After allowing it, the "[Errno 13] Permission denied" error disappeared.
In my case, it seems the cause of the problem was Avast locking the directory.
numpy.save(array, path) worked fine, but PIL.Image().save(path) was blocked.
I am completely late to the party, but here is a tip for someone who tried everything and it didn't work.
In Spyder go to python->PYTHONPATH manager and add path to the folder with your data there.
Worked for me
I had the permission error when accessing a file on an external card. I guess the error has nothing to do with anaconda, that is just accidentally in the traceback.
Traceback (most recent call last):
File "C:\Users\Admin\anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-219c041de52a>", line 105, in <module>
bs = open(filename, 'rb').read()
PermissionError: [Errno 13] Permission denied: 'D:\\[MYFILEPATH]\\test.bson'
I have checked this error in Spyder and PyCharm, it seems to be independent from the IDE. As the (Windows) solutions here (run as admin, add pythonpath) could not help me, my workaround was to copy the directory to my local disk and work from there.
Later I realised that only the one file that is accessed, and that throws the PermissionError, needs to be copied to your local disk; the rest of your code can stay on the external drive.
Example:
Error. Get the permission error by accessing external drive "D:\":
filename = "D:\\test.bson"
# This throws the permission error
bs = open(filename, 'rb').read()
Solution. Avoid the permission error by accessing local drive "C:\":
filename = "C:\\Users\\Admin\\Documents\\test.bson"
# This throws no permission error
bs = open(filename, 'rb').read()
The script itself can still be saved on the external drive, e.g. as D:\test.py.
It might also be the Windows Defender Firewall, which was mentioned when I installed PyCharm (the automatic configuration it needed did not solve the issue either, but could be linked to it). It is clearly a problem of access rights, so the firewall as the cause is quite plausible. Perhaps someone else can find out more about this.

After installing jupyter themes, notebooks and docker container no longer working

I am running Jupyter notebooks through a Docker container. I have files, notebooks, etc. within the container. I decided in class one day to install the jupyterthemes package, because who doesn't like more colors? I opened a new ipynb and followed the instructions on this site: https://github.com/dunovank/jupyter-themes
But it was basically just this:
!pip install jupyterthemes
!jt -t chesterish
The theme does not immediately appear, and the directions suggest restarting the notebook or refreshing the browser. This is where the problems start: after trying to refresh, or close and restart the notebook, it no longer works and just displays a large "500 : Internal Server Error" on the page. After trying to reload the home page of my notebook (this is locally hosted through Docker and run in Chrome, btw), the Jupyter window in Chrome displays nothing at all.
Here I go back to the terminal and Docker and shut down the container. Then I try to restart the same container, hoping it will work now. I try to start it as I usually would, with docker start -ai container_name, but it is not successful. It displays these errors every time:
Executing the command: jupyter notebook
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 528, in get
value = obj._trait_values[self.name]
KeyError: 'allow_remote_access'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 869, in _default_allow_remote
addr = ipaddress.ip_address(self.ip)
File "/opt/conda/lib/python3.6/ipaddress.py", line 54, in ip_address
address)
ValueError: '' does not appear to be an IPv4 or IPv6 address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.6/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/opt/conda/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1629, in initialize
self.init_webapp()
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1379, in init_webapp
self.jinja_environment_options,
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 158, in __init__
default_url, settings_overrides, jinja_env_options)
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 251, in init_settings
allow_remote_access=jupyter_app.allow_remote_access,
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 872, in _default_allow_remote
for info in socket.getaddrinfo(self.ip, self.port, 0, socket.SOCK_STREAM):
File "/opt/conda/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
So I can no longer access the Docker container or the files and notebooks within it. I have two questions, then:
Can I somehow restore my docker container or at least retrieve the materials within?
and
Why did this error occur during theme installation and how could I go about doing this without breaking my jupyter server or docker container? I have built new containers and attempted again with exactly the same results.
Any advice about how to get files from a not-running docker container, or about compatibility issues between docker, jupyter and the theme package and how to solve them would be much appreciated. For the time being I can work from a new container and keep up with schoolwork, but in the future would be nice to get back my stuff from that container and learn how to successfully change my theme if I want.
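The middle ValueError in that traceback can be reproduced in isolation (an observation about the stdlib, not a fix): the notebook server ended up passing an empty string as its IP address, and ipaddress.ip_address() rejects that outright:

```python
import ipaddress

# notebookapp.py calls ipaddress.ip_address(self.ip); when the configured
# ip is the empty string, the call fails exactly as in the traceback.
try:
    ipaddress.ip_address("")
except ValueError as e:
    print(e)   # '' does not appear to be an IPv4 or IPv6 address
```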
So I have an answer to half the question: we found a way to copy and export all the files from my broken, not-running Docker container. The files are kind of 'invisible' when the container isn't running, so it took some trickery to find where they are located and what path to use to call them from the terminal.
I'm running Docker on a MacBook, and the files in a new container we made were located at container:./home/jovyan/.
I also made a folder called 'Dump' on my desktop to transfer the container contents to. After messing around with new 'fake' containers, we found a successful way to pull files from a not-running one. I used
docker cp container_name:./home/jovyan/. ./Dump
where container_name is obviously your container and Dump is where you want the files to go. /jovyan/ was the broadest path I could use, and it took everything I had out of the container, but if you knew more folder and file names you could be more specific and extract particular things.
This is probably pretty simple for most experienced programmers, but as a newbie the hard part was finding where Docker stored my container files and what path to use. /home/jovyan/. worked on my Mac but could be different for you. If you have a broken container, just make a new test one with a recognizable file in it and mess around until you figure out how to pull it out. Opening a new terminal window within the test Jupyter notebook helped me find what Docker was labeling my pathways.
Still wondering how to actually install those themes, though... I don't think it'll work with Docker and Jupyter; probably just too much incompatibility already.
If you still have the issue, this is what fixed it for me:
In the Docker terminal, make sure you are using bash, so that the prompt starts with "(base)". I just typed bash and then the prompt looked right.
conda install -c conda-forge jupyterthemes (or pip if you're not using anaconda)
conda update jupyterthemes (it found some updates of other packages, apparently necessary)
jt -t monokai -f fira -fs 10 -nf ptsans -nfs 11 -N -kl -cursw 2 -cursc r -cellw 95% -T (Or another setting; this was copied from Ashraf Khan on Kaggle).
Hard refresh the page (Chrome on Windows: Ctrl+F5).
I think step 3 was key here, but not sure. However it works now.

Unable to import sendgrid into GAE application

I have a GAE application that I want to integrate with Sendgrid. I've followed the instructions (https://cloud.google.com/appengine/docs/python/mail/sendgrid) on how to install Sendgrid and everything works fine in my local dev environment.
However, when I push my application to GAE and run it, I immediately receive the following 500 Server Error:
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
Even with debug on, that's all I get. But digging into the logs at GAE I can see the source of the problem:
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/base/data/home/apps/....wsgi_app.py", line 16, in <module>
import sendgrid
File "/base/data/home/apps/..../sendgrid/__init__.py", line 7, in <module>
from .client import SendGridAPIClient
File "/base/data/home/apps/..../sendgrid/client.py", line 1, in <module>
import python_http_client
ImportError: No module named python_http_client
So I went into sendgrid/client.py and commented out the following line of code...
import python_http_client
Once I do that, I can run my app without receiving the 500 Server Error but the test email I tried to send wasn't delivered (although I didn't receive any error messages when trying to initiate it).
It doesn't seem right that I need to comment out a line of the Sendgrid code to make the import work and I can't figure out why others that are running Sendgrid with Python and GAE aren't having the same problem. Any thoughts would be appreciated. Thanks.
sendgrid does need python_http_client, which Sendgrid itself maintains at https://github.com/sendgrid/python-http-client -- just copy the few files in directory https://github.com/sendgrid/python-http-client/tree/master/python_http_client to a directory named python_http_client, making the latter a sibling of the sendgrid directory. I'm not sure why the online docs don't mention that -- I'll work to get it fixed, but meanwhile I hope this workaround lets you get started.
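A sketch of the layout that answer describes (the repo URL is from the answer; the target directory name must match the import exactly):

```shell
# Fetch sendgrid's python_http_client and place it as a sibling of the
# sendgrid directory, so that `import python_http_client` resolves.
git clone https://github.com/sendgrid/python-http-client
cp -r python-http-client/python_http_client ./python_http_client

# Resulting layout (sketch):
#   yourapp/
#   |-- sendgrid/
#   `-- python_http_client/
```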

Authentication in gitpython

I am trying to write a fairly simple function (I thought).
I want a user to specify a path to a checked out git repo on his computer. (A git repo that requires authentication).
I then want to do a git pull on that folder to grab any changes.
My code looks like this:
import git
repo=git.cmd.Git(GIT_PATH)
repo.pull()
This works perfectly on my Linux machine, it never asks for any credentials (I am guessing because ssh-agent has already unlocked the key and supplies credentials when my script needs it).
On Windows, however, I can't get it to work. I have installed PuTTY and added the key to pageant. I can check out the repo using TortoiseGit, for example, and it works perfectly fine, but if I execute the code above on the Windows machine, I get:
Traceback (most recent call last):
File "test.py", line 2, in <module>
repo = git.repo.base.Repo.clone_from(GIT_PATH, "tmp")
File "C:\Python34\lib\site-packages\git\repo\base.py", line 849, in clone_from
return cls._clone(Git(os.getcwd()), url, to_path, GitCmdObjectDB, progress, **kwargs)
File "C:\Python34\lib\site-packages\git\repo\base.py", line 800, in _clone
finalize_process(proc)
File "C:\Python34\lib\site-packages\git\util.py", line 154, in finalize_process
proc.wait()
File "C:\Python34\lib\site-packages\git\cmd.py", line 309, in wait
raise GitCommandError(self.args, status, self.proc.stderr.read())
git.exc.GitCommandError: 'git clone -v ssh://git#git.path/repo tmp' returned with exit code 128
stderr: 'Cloning into 'tmp'...
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Edit: I would like to add that I am not married to GitPython. If anyone knows of another library that would solve my problem, that would work as well.
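One possible workaround on Windows (an assumption, not something confirmed in the thread): git chooses its ssh client from the GIT_SSH environment variable, so pointing it at PuTTY's plink.exe should make git authenticate through pageant, the same agent TortoiseGit already uses successfully. The plink path below is illustrative:

```python
import os

# Hypothetical workaround: route git's ssh traffic through PuTTY's plink,
# which talks to pageant (adjust the plink.exe path to your install).
os.environ["GIT_SSH"] = r"C:\Program Files\PuTTY\plink.exe"

# GitPython spawns git with the inherited environment, so afterwards
#   git.cmd.Git(GIT_PATH).pull()
# should authenticate via pageant instead of prompting for a password.
print(os.environ["GIT_SSH"])
```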
