Saving an HTML file to public_html - python

I have a small Flask application that runs fine locally; however, when I run it on my server, the application runs but the output never gets saved to the public_html folder.
This is where I believe the issue is, when I run the application remotely:
df.to_html('/home/mydomain/public_html/data/candles.html', index=False)
If I run the application locally, this location works fine:
df.to_html('candles.html', index=False)
I have ensured that the remote folder 'data' has full access - 0777.
What am I doing wrong?

If no exception occurred, then the file was very likely saved, just not where you expected. If you did not provide a full path, the destination is resolved relative to the process's current working directory, which on a server is often not the application directory. The solution is to be explicit and provide a full path, unless you are using Flask helpers that already apply a default location.
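For instance, here is a minimal sketch of building an explicit path anchored to the script's own directory rather than the working directory (the DataFrame contents are placeholders standing in for your data):
import os
import pandas as pd

# Placeholder data standing in for the asker's DataFrame
df = pd.DataFrame({"open": [1.0], "close": [1.5]})

# Anchor the output path to this file's directory, not the current working directory
base_dir = os.path.dirname(os.path.abspath(__file__))
out_dir = os.path.join(base_dir, "data")
os.makedirs(out_dir, exist_ok=True)

df.to_html(os.path.join(out_dir, "candles.html"), index=False)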
You should never grant 0777 permissions on public_html; that is a potential vulnerability. For example, someone who can leverage a security flaw on your website could upload a shell to that directory.
There is not enough context here, but the user running the process (Apache, Nginx or whatever) should not have write permissions there. If you must grant write permissions, create a dedicated directory (preferably outside the webroot, unless the files have to be exposed to the user), then add directives stipulating that files in that directory cannot be executed, so that even if a webshell is uploaded it cannot run.

Related

How To Keep My File Paths Congruent for My Django/Python Application

I am developing a Django app on my personal computer. I have my files in a private repository on GitHub, and I use that to pull them down to my production server.
My question is this:
I have file paths set inside my application on my PC that point to locations on my PC. Those same files are pushed to GitHub and pulled onto my production server, and I then have to edit the paths in the server-side copies to match the actual locations on the production server.
I have searched far and wide and cannot find a proper solution. Is there a way to write the paths so that when I use sys.path.append I don't have to give the complete path, but rather a relative one, so that it works on both machines?
Possible Findings
Is there an issue if I just use sys.path.append('./<directory>')? This assumes, of course, that the Python file is running from that directory.
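One common approach (the same trick a default Django settings.py uses for BASE_DIR) is to derive every path from the file's own location, so it resolves correctly on any machine. A rough sketch, where "lib" is a hypothetical subdirectory shipped with the repository:
import os
import sys

# BASE_DIR is wherever this file lives, on the PC and on the server alike
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Append a path relative to the repository, not an absolute machine-specific one
sys.path.append(os.path.join(BASE_DIR, "lib"))
This avoids relying on the current working directory, which is what makes plain './<directory>' fragile: it only works if the process happens to be started from that directory.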

Django created folder can't be removed apache on Centos 7

I have a django application which creates a work directory for a package. This is done with:
import os

if not os.path.exists(dest):
    os.makedirs(dest)
The creation of the folder works great, but when the Django application later tries to remove that very same folder, I get "Permission denied".
Apparently the folder and files created by Django are owned by root, not by apache. Why are they not owned by apache if apache created them? How can I make Apache and Django create them as apache?
Maybe this helps you:
Permission problems when creating a dir with os.makedirs (python)
According to the official Python documentation, the mode argument of os.makedirs may be ignored on some systems, and on systems where it is not ignored, the current umask value is masked out.
Either way, you can force the mode to 0777 using the os.chmod function.
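A minimal sketch of that workaround, with dest as a placeholder path:
import os

dest = "/tmp/example_work_dir"  # placeholder path
if not os.path.exists(dest):
    os.makedirs(dest)
# makedirs' mode argument can be masked by the umask, so set the mode explicitly
os.chmod(dest, 0o777)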
It appears that the optimal solution is to set the uid that Django will use in wsgi.py. By explicitly setting the current Python process's user to 'apache', every file that is created belongs to the user 'apache'. I found this out thanks to Nicos Mouzours's answer.
To set the uid:
import os, pwd

# Switch the process uid to apache so every file created afterwards is owned by apache
os.setuid(pwd.getpwnam('apache').pw_uid)

403 error for new files posted through django admin

I'm running a Django server with Gunicorn and Nginx, hosted on DigitalOcean. I've run into a problem where adding a new file through the administrator interface produces a 403 Forbidden error. Specifically, the file in question works fine if I query it (e.g. Object.objects.all()) but can't be rendered in my templates. I've previously fixed the problem with chmod/chown, but the fix only applies to existing files, not new ones. Does anyone know how to apply the fix permanently?
TL;DR:
FILE_UPLOAD_PERMISSIONS = 0o644 in settings.py
in bash shell: find /path/to/MEDIA_ROOT -type d -exec chmod go+rx {} +
The explanation
The files are created with permissions that are too restrictive, so the user Nginx runs as cannot read them. To fix this you need to make sure Nginx can both read the files and traverse the directories leading to them.
The goal
First, FILE_UPLOAD_PERMISSIONS must allow reading by the Nginx user. Second, MEDIA_ROOT and all its subdirectories must be readable by Nginx and writable by Gunicorn.
How to
The directories must either be world readable (and executable), or their group must be one the Nginx process belongs to and they must be at least group readable (and executable).
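In settings.py that amounts to something like this (FILE_UPLOAD_DIRECTORY_PERMISSIONS is the companion setting Django applies to directories it creates under MEDIA_ROOT):
# settings.py
# Uploaded files: owner read/write, everyone else read-only
FILE_UPLOAD_PERMISSIONS = 0o644
# Directories Django creates under MEDIA_ROOT: traversable by the Nginx user
FILE_UPLOAD_DIRECTORY_PERMISSIONS = 0o755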
As a side note: you said you've used chmod and chown before, so I assumed you were familiar with the terminology. Since you're not, I highly recommend reading the linked tutorial in full, so you understand what the commands you used can do and can screw up.

could not save preference file google-apps-engine

Just installed Google App Engine and am getting "could not save" errors.
Specifically, if I go into preferences I get
Could not save into preference file
C:\Users\myname/Google\google_appengine_launcher.ini: No such file or directory.
Somehow I have a weird path, and I would like to know where and how to change it. I have searched but found nothing, and I have done a repair reinstall of GAE.
I can find nothing in the registry for google_appengine_launcher.ini.
I first saw the error when I created my first application, called hellowd:
Parent Directory: C:\Users\myname\workspace
Runtime 2.7 (PATH has this path)
Port 8080
Admin port 8080
When I click Create, I get the error:
Could not save into project file
C:\Users\myname/Google\google_appengine_launcher.ini: No such file or directory.
Thanks
I think I have found the answer to my own question.
I have a small app I wrote to back up my stuff to Google Drive. It appears to have a bug that does not stop it from running but does cause it to create a file called
C:\Users\myname\Google
Therefore GAE cannot create a directory called C:\Users\myname/Google, nor a file called C:\Users\myname/Google\google_appengine_launcher.ini.
I deleted the Google file, created a directory called Google, ran GAE again, saved preferences, and everything is working.
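For anyone hitting the same symptom, here is a quick sketch of checking for this kind of file-versus-directory conflict (the path is the one from the error message):
import os

path = r"C:\Users\myname\Google"  # path from the error message
if os.path.isfile(path):
    print("A file is shadowing the directory GAE needs:", path)
elif not os.path.isdir(path):
    os.makedirs(path)  # create the directory GAE expects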

Ensuring a test case can delete the temporary directory it created

(Platform: Linux, specifically Fedora and Red Hat Enterprise Linux 6)
I have an integration test written in Python that does the following:
creates a temporary directory
tells a web service (running under apache) to run an rsync job that copies files into that directory
checks the files have been copied correctly (i.e. the configuration was correctly passed from the client through to an rsync invocation via the web service)
(tries to) delete the temporary directory
At the moment, the last step is failing because rsync is creating the files with their ownership set to that of the apache user, and so the test case doesn't have the necessary permissions to delete the files.
This Server Fault question provides a good explanation for why the cleanup step currently fails given the situation the integration test sets up.
What I currently do: I just don't delete the temporary directory in the test cleanup, so these integration tests leave dummy files around that need to be cleared out of /tmp manually.
The main solution I am currently considering is to add a setuid script specifically to handle the cleanup operation for the test suite. This should work, but I'm hoping someone else can suggest a more elegant solution. Specifically, I'd really like it if nothing in the integration test client needed to care about the uid of the apache process.
Approaches I have considered but rejected for various reasons:
Run the test case as root. This actually works, but needing to run the test suite as root is rather ugly.
Set the sticky bit on the directory created by the test suite. As near as I can tell, rsync is ignoring this because it's set to copy the flags from the remote server. However, even tweaking the settings to only copy the execute bit didn't seem to help, so I'm still not really sure why this didn't work.
Adding the test user to the apache group. As rsync is creating the files without group write permission, this didn't help.
Running up an Apache instance as the test user and testing against that. This has some advantages (in that the integration tests won't require that apache be already running), but has the downside that I won't be able to run the integration tests against an Apache instance that has been preconfigured with the production settings to make sure those are correct. So even though I'll likely add this capability to the test suite eventually, it won't be as a replacement for solving the current problem more directly.
One other thing I really don't want to do is change the settings passed to rsync just so the test suite can correctly clean up the temporary directory. This is an integration test for the service daemon, so I want to use a configuration as close to production as I can get.
Add the test user to the apache group (or httpd group, whichever has group ownership on the files).
With the assistance of the answers to that Server Fault question, I was able to figure out a solution using setfacl.
The code that creates the temporary directory for the integration test now does the following (it's part of a unittest.TestCase instance, hence the reference to addCleanup):
import os, shutil, subprocess, tempfile

local_path = tempfile.mkdtemp().decode("utf-8")
self.addCleanup(shutil.rmtree, local_path)
# Default ACL ("d:"): applies to everything later created inside local_path
acl = "d:u:{0}:rwX".format(os.geteuid())
subprocess.check_call(["setfacl", "-m", acl, local_path])
The first two lines after the import just create the temporary directory and ensure it gets deleted at the end of the test.
The last two lines are the new part and set the default ACL for the directory such that the test user always has read/write access and will also have execute permissions for anything with the execute bit set.
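As a quick sanity check while debugging, one can dump the ACL back out (this reuses local_path and the subprocess import from the snippet above):
# Optional: print the ACL entries to confirm the default ("d:") entry is present
print(subprocess.check_output(["getfacl", local_path]))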
