Apache File Does Not Exist workaround - python

I have an app that calls upon the extension found here. I have the .py file in my /var/www folder so that it can be imported in my python code.
So, I keep getting this error:
File does not exist: /var/www/flask_util.js
in my Apache error logs. It looks like, because of the name or something, it wants to find a JavaScript file, but the module is Python. Here's the line of Python code that imports it:
from flask_util_js import FlaskUtilJs
I've tried just changing the name of the file to flask_util.js, but again, nothing. Not entirely sure what is going on here, but I am sure that I have a file in /var/www that it should be reading.
EDIT
I think, actually, that the error is coming from including it in my HTML when I do this:
{{flask_util_js.js}}
So, what I tried was copying out the JS code from the Python and creating a new file with it at the correct path. When I did that, I still got the same error on the webpage, but the Apache logs don't say anything (which is weird, right?). So it still doesn't work, and I don't know why.

So, it's ugly, but what I ended up doing was copying over the generated JS file that I found on my server (it didn't actually exist as a file in my repository). Then Apache could find it.
This part is for anyone who is actually using flask_util_js:
However, it wasn't reading in the correct JavaScript for the url_mapping, so I had to go in and hard-code the correct URLs. Not very scalable, but oh well.
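For what it's worth, flask_util_js is normally wired up so the JavaScript is generated at request time and rendered inline by the template, which is why no physical flask_util.js file has to exist for Apache to serve. A rough sketch of that setup (the app and template names here are assumptions, not taken from the question):
# app.py -- hypothetical minimal wiring for flask_util_js
from flask import Flask, render_template
from flask_util_js import FlaskUtilJs

app = Flask(__name__)
fujs = FlaskUtilJs(app)  # registers the flask_util_js object used in templates

@app.route('/')
def index():
    # index.html includes the generated JavaScript inline with
    # {{ flask_util_js.js }}, so Apache never needs to find a .js file on disk.
    return render_template('index.html')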


Restrict the Python file to read and write

I'm trying to restrict write and read access to a Python file. Suppose I have the following code:
with open('test.py', 'w+') as file:
    file.write('''
open("document.txt", "w+").write("Hello, World!")
open("document.txt", "r+").read()
''')
Executing this code creates a new file, and that new file contains two lines of code that write and then read another file.
I want the file created by running this code (test.py) to hit a PermissionError when it runs, so that it cannot create a new file or read one. The script should still be executable and normal statements should work in it, but it must not be able to access other files.
If I read you correctly, this is not a python problem, but an environment problem. I understand the question as something like 'how do I prevent python code from executing arbitrary reads or writes?'. There would be a trivial solution (modifying the generated test.py so it throws an error) but presumably that's not what you want.
The easiest way to make Python hit a PermissionError is to make sure it doesn't have permissions. So run your code as a user with extremely limited permissions (specifically, no write permissions anywhere), or perhaps no default permissions at all, and use something like file ACLs to explicitly grant read access to specific files from a more privileged sentinel process. (This assumes you are running Linux, but there are likely similar mechanisms on other OSs.)
Alternatively, look into various sandboxing techniques that give you a Python interpreter with the relevant modules replaced by modules that throw errors, or an environment where outside modification is impossible.
It would help if you made it clearer why this is important, and why you are writing a Python script from another Python script (is this just an example of malicious action?).
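As a toy illustration of that last idea (replacing the relevant pieces so they throw), you could run the generated script with open swapped out. This only demonstrates the mechanism; it is not a real security boundary, since the sandboxed code can still reach the filesystem through os, importlib, and so on:
import builtins

def deny_open(*args, **kwargs):
    raise PermissionError("file access is disabled in this sandbox")

# Read the generated script with the real open, then execute it with a
# patched set of builtins in which open raises PermissionError.
code = open("test.py").read()
sandbox_builtins = {**vars(builtins), "open": deny_open}
exec(compile(code, "test.py", "exec"), {"__builtins__": sandbox_builtins})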
You could technically change the permissions of the file itself on the filesystem you're trying to access.
Check the previous thread about changing permissions
os.chmod(path, <permission value>)
Where a value of 000 prevents anyone other than root from reading or editing the file on Linux.
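A minimal sketch of that, reusing the document.txt name from the question (this assumes the file already exists and that the process is not running as root):
import os
import stat

os.chmod("document.txt", 0o000)  # strip read/write/execute for everyone

try:
    open("document.txt").read()
except PermissionError as exc:
    print("blocked:", exc)

# Restore owner read/write when you want access back.
os.chmod("document.txt", stat.S_IRUSR | stat.S_IWUSR)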

"Desired structure doesn't exist" for PDB retrieve_pdb_file method

Trying to download some protein data from PDB using Biopython's Bio.PDB.PDBList
Here is a min. reproducible example:
from Bio.PDB import PDBList
pdbl=PDBList()
pdbl.retrieve_pdb_file('1GAV', file_format="pdb")
This returns:
Downloading PDB structure '1GAV'...
Desired structure doesn't exists
Desired behavior is download of the PDB file to the working directory.
Possibly useful info:
Using python 3
Do not want to download whole PDB, just pick and choose files
Using a proxy, but I don't think that's the problem because Biopython uses urllib to make requests and I tried using urllib with my proxy settings and it worked fine.
I've tried a few different PDB codes/IDs and other file types ("mmCif", "bundle") and it returns the same thing
No error is being hit, it just can't find the file in PDB apparently?
The folder where the file should appear does get made in the working directory, but the folder is empty
We think the problem has to do with our corporate VPN, because it works when the VPN is off (although the proxy is still on).
So, as sammam said, there are no problems in the code.
Don't know the specifics of why this occurs with our VPN, will update if I find out.
I'm getting the same error message. I looked at the source code and found that the "Desired structure doesn't exists" message is printed whenever an IOError is hit.
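One way to make the failure visible is to check the path that retrieve_pdb_file returns (in recent Biopython versions it returns the filename it tried to write; the pdb_files directory name below is arbitrary):
import os
from Bio.PDB import PDBList

pdbl = PDBList()
path = pdbl.retrieve_pdb_file('1GAV', pdir='pdb_files', file_format='pdb')

if os.path.isfile(path):
    print("downloaded to", path)
else:
    # "Desired structure doesn't exists" means an IOError was swallowed,
    # typically a network/proxy/VPN problem rather than a bad PDB ID.
    print("download failed; expected file at", path)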

Creating Executable Zip Archives With Packages in the Project

I feel this question needs a better title and I will amend it if someone suggests something better. The problem is I'm not sure of the terminology of the feature that I'm using here.
The best way to describe my problem is to show what I've done. The project is here: https://github.com/jeffnyman/quendor
This project is set up so it can be executed as a module. For example, from the project root someone could do this:
python3 -m quendor
I also have a build script to generate an in-memory zip (if I'm using that terminology correctly):
https://github.com/jeffnyman/quendor/blob/master/build.py
That works: if you run build.py, it generates a quendor.py file that executes the entire project. That worked fine up until I included other directories (like my utilities and zinterface).
With the project as it is in the repo right now, if you run the build (.\build.py) and then run the generated file:
./quendor.py
You get the following error:
File "./quendor.py/quendor/__main__.py", line 6, in <module>
ModuleNotFoundError: No module named 'quendor.zinterface'
So a key point: if all of my files are in the same directory (i.e., in quendor) this build script works fine in terms of producing an executable script file.
But once I include the subdirectories and files in those directories, things go south on me with the above error.
I'm sure all the files are being gathered. I handle that starting on line 18 (https://github.com/jeffnyman/quendor/blob/master/build.py#L18). And if you were to add to line 24 this statement:
print(f"* {file_path}")
You would see it outputs the following:
* quendor/__init__.py
* quendor/__version__.py
* quendor/zinterface/fileio.py
* quendor/utilities/messages.py
* quendor/__main__.py
So I'm suspecting it might have to do with the code where I write the string at line 28 (https://github.com/jeffnyman/quendor/blob/master/build.py#L28). I feel I have to do more to let the executable zipped script file know about the modules.
But I'm not sure if (1) I'm accurate and (2) even if I'm accurate, if that's possible. I'm finding I'm in a bit over my head here.
Any thoughts would be appreciated, and I'm happy to update with any necessary clarifications or terminology.
It won't let me comment unless I have more reputation, but I can post an answer, even though what I have is really a comment rather than an answer. I think the above comment was not meant for your actual __main__.py file but rather for the one that gets generated inside your quendor.py file. You might want to try adding the import statements to the packed string that you write.
For example, see what happens if on line 32 you add this: import quendor.zinterface.fileio as zio. (Don't replace the line that's there; just add my line and keep your others.) I'm not sure how the zip process works, but if it mirrors the normal import machinery, that should work; if it doesn't, it won't. You might also just want to try doing import quendor.zinterface. By itself that won't work, but it would be interesting to see if it gives you a different error.
Actually, it turns out I found a way to do this! It required using os.walk rather than os.listdir, and it took a few ideas that people here discussed. Here is the script that does the trick:
https://github.com/jeffnyman/quendor/blob/master/build.py
You can compare that with my previous commit that was trying to handle this a different way.
Eldritch was right that I couldn't just flatten the directory nor could I just add imports to the string I was writing to the final zip file. Jean-François was correct that I had to focus on the __main__.py that was being generated. My contribution was figuring out os.walk() and then parameterizing the written string to handle the different directories.
Finally, this solution does require, as per HTF's suggestion, that I put an empty __init__.py file in each package.
With my solution in place, you can run build.py which then generates the quendor.py script. That script then executes correctly, in terms of recognizing the imports to various packages.
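For anyone who wants the shape of the fix without reading the repo, here is a rough sketch of that os.walk approach (the package and entry-point names follow the quendor layout, and the details may differ from the actual build.py):
import os
import stat
import zipfile

def build(package="quendor", target="quendor.py"):
    with open(target, "wb") as out:
        out.write(b"#!/usr/bin/env python3\n")  # shebang so ./quendor.py works
        with zipfile.ZipFile(out, "w") as zf:
            # Entry point at the zip root; runpy mimics `python3 -m quendor`.
            zf.writestr("__main__.py",
                        "import runpy; runpy.run_module('quendor', run_name='__main__')\n")
            # os.walk picks up the subpackages (zinterface, utilities, ...)
            # that os.listdir missed; each needs an __init__.py to be importable.
            for root, _dirs, files in os.walk(package):
                for name in files:
                    if name.endswith(".py"):
                        path = os.path.join(root, name)
                        zf.write(path, arcname=path)
    os.chmod(target, os.stat(target).st_mode | stat.S_IEXEC)

if __name__ == "__main__":
    build()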
After playing around with just about every variation of import and file gathering that I can think of with your repo, there's a good news / bad news thing.
The bad news is that the answer is this: it isn't possible.
The good news is this: you do have a working implementation if you just keep all files in the quendor directory rather than having subdirectories.
The other good news is you stumbled on something, and posed a problem, that Python gurus aren't able to answer. And there's a certain pleasure to be found in that! I guarantee you will not get an answer to this that works (except for the "all files in one directory" solution).
A refinement to the answer is that if you're setting up the program to run as a module anyway, just use a pip configuration. That basically does the same thing that you want but without having to go through the contortions. (Unless there's a reason you were doing the build the way you were rather than using pip.)

linux server hosting .py files that are reading .txt files but can't store in variable

I have a linux server.
It is reading files in a directory and doing things with the full text of each file.
I've got some code that retrieves the file paths.
And then I'm doing this:
for file in files:
    with open(file, 'r') as f:
        raw_data = f.read()
It's reading the file just fine, and I've used this exact code outside of the server and it worked as expected.
In this case, when run on the server, the above code spits out all the text to the terminal, but then raw_data == None.
That is not the behavior I'm used to. I imagine it's something very simple, as I am new to Linux in general.
I want the text in the file to be stored in the 'raw_data' variable as a string.
Is there a special way I'm supposed to do this on Linux? Googling so far has not helped much, and I feel this is likely a VERY simple problem.
User error.
I thought, due to my noob status in Linux, that perhaps the environment was causing weird behavior. But buried deep in the functions that use the data from the files was a print statement I had used a while back for testing. That was causing the output to the screen.
As for the None being returned: it was coming from another subfunction that had a try/except block in it and was failing. The variable referenced there had the same name (raw_data), so I thought it came from the file read, but it was actually from elsewhere.
Thanks to all who stopped by. User error for this one.
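For reference, a minimal version of the intended pattern, keeping each file's text keyed by its path so it can't be confused with a variable of the same name set elsewhere (the paths here are placeholders):
files = ["example1.txt", "example2.txt"]  # whatever paths you collected earlier

contents = {}
for file_path in files:
    with open(file_path, "r") as f:
        contents[file_path] = f.read()  # full text of the file as a string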

How do I get hgweb to actually display the repository I want?

I am having an infuriating experience with IIS7, Python 2.6, Mercurial 1.7.2, and hgweb.cgi.
After battling for an afternoon getting hgweb.cgi to work, I finally got it to render in the browser using hgweb.cgi and IIS7. I can now see a blank rendering of the web server, that is, a header with no repositories listed.
Now, according to the multiple sites I've read after scouring through Google results, I know that I have to update my hgweb.config file to point to some repositories.
However, for the life of me, I can't get it to list my repository using either the [paths] or [collections] entries.
I have the following directory structure (simplified but illustrative):
c:\code
c:\code\htmlwriter
c:\code\CommandLineProjects\Clean
The latter two directories have mercurial repositories in them.
I am trying to publish the repository in c:\code\htmlwriter
Now, if I make this entry in hgweb.config
[paths]
htmlwriter = c:\code\htmlwriter
I get nothing listed in my output.
If I put
[paths]
htmlwriter = c:\code\*
I get something, but not what I want, i.e. this:
htmlwriter/CommandLineProjects/Clean
(Note that the above drills down one directory level farther than I want it to.)
I can't seem to find any combination of paths, asterisks, or anything else that will serve up the repository in c:\code\htmlwriter. It appears to always want to go one level deeper than I want it to, or to show nothing.
I know that my hgweb.config file is being read because I can change the style tag in it and it changes what is rendered.
I have read and re-read a number of resources on the web multiple times, but they all say that what I'm trying should be working. For instance, I followed these instructions to the letter with no good results:
http://www.jeremyskinner.co.uk/mercurial-on-iis7/
Anyone have any suggestions?
I had about the same luck with hgweb.cgi, and ended up going a different route with wsgi and a "pure python" mercurial install.
I wrote a pretty comprehensive answer here.
I'll answer my own question:
The solution is that the path listed in the [paths] section is relative to the directory where the hgweb.config file is residing.
So, if you have your repository in:
c:\code\myrepo
and your hgweb.config file is in:
C:\inetpub\hgcgi
then the entry in your hgweb.config file needs to be:
/myrepo = ../../code/myrepo
That was the trick -- to put the correct relative path.
I was never able to get hgweb.cgi to work with a repo on a different drive.
