I am working on an API, and I just want to know the best place for a config file that the user will provide for the API to work.
Basically I am working on getting tweets, and I want the user to provide the configuration (API credentials).
A common location for config files is a folder in the user's home directory, e.g. ~/.config/my_api_config.cfg, or ~/.my_api/config.cfg, etc.
There is nothing stopping you from requiring the user to put the file in a very specific location, e.g., /must/be/this/exact/folder/config.cfg
Often, programs will search a few places to see if a config file exists, e.g.:

same/directory/as/script/using/api/my_api_config.cfg (per-script config)
~/.config/my_api_config.cfg (per-user config; could be anywhere else in the user's home directory)
/etc/my_api/config.cfg (system-wide config)
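For example, a minimal lookup following that search order might look like this; a sketch only, with the candidate paths mirroring the examples above:

import os

# Candidate locations, checked in order; names are illustrative.
CANDIDATES = [
    os.path.join(os.path.dirname(os.path.abspath(__file__)), "my_api_config.cfg"),
    os.path.expanduser("~/.config/my_api_config.cfg"),
    "/etc/my_api/config.cfg",
]

def find_config():
    for path in CANDIDATES:
        if os.path.isfile(path):
            return path  # first existing file wins
    return None  # no config found; fall back to defaults or raise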
I'm trying to set up a cloud function that performs user authentication. I need it to open an object stored in a Cloud Storage bucket, read its content and verify if username and password match those coming from the HTTP request.
In some cases the function needs to add a user: it should retrieve the content of the .json file stored in the bucket, add a username:password pair and save the content in the same object.
Basically, it has to modify the content of the object.
I can't find a way to do it using the Cloud Storage Python client library. None of the tutorials listed on the GitHub pages mentions anything like "modify a file" or similar concepts (at least in their short descriptions).
I also looked for a method to perform this operation in the Blob class source code, but I couldn't find it.
Am I missing something? This looks to me as a very common operation, one that should have a very straightforward method, like blob.modify(new_content).
I have to confess that I am completely new to GCP, so there is probably an obvious reason behind this (or maybe I just missed it).
Thank you in advance!
Cloud Storage is blob storage: you can only read, write, and delete objects. You can't update the content (only the metadata), and you can't move/rename a file (the move and rename operations perform a copy (create a new object) followed by a delete (of the old object)).
In addition, directories don't exist; all the files are stored at the root level of the bucket. The file name contains the full path from the root to the leaf. The / is only a human representation of folders (and the UI uses that representation), but the directories are only virtual.
Finally, you can't search by file suffix, only by prefix of the file name (including the full path from the root).
In summary, it's not a file system, it's blob storage. Change your design or your file storage option.
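That said, you can emulate a modification with a read-modify-write cycle: download the object, change it in memory, and upload it again under the same name. A minimal sketch, assuming the google-cloud-storage client library (download_as_text needs a reasonably recent version); the bucket and object names are placeholders:

import json
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-auth-bucket")  # placeholder bucket name
blob = bucket.blob("users.json")          # placeholder object name

# 1. Read: download the whole object as text.
users = json.loads(blob.download_as_text())

# 2. Modify: add the username:password pair in memory.
users["new_user"] = "hashed_password"

# 3. Write: upload a new object under the same name, replacing the old one.
blob.upload_from_string(json.dumps(users), content_type="application/json")

Beware that two concurrent writers can silently overwrite each other's changes; that race is one more reason to consider a different storage option for this use case.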
I've built an application following the file upload process (more or less) along the lines of the Flask file upload documentation found here, https://flask.palletsprojects.com/en/1.1.x/patterns/fileuploads/.
In this portion of the code, UPLOAD_FOLDER = '/path/to/the/uploads' points to one single directory where file uploads will live. The problem I'm trying to solve is that when I deploy my app to a server there will be multiple, simultaneous users. With a single upload directory, users will collide when they upload files with the same names, a situation that will occur in my app.
What I want to do is create a unique temp directory that is unique to each browser session. So, user 1 would have their own unique temp directory and user 2 would have their own unique temp directory and so on.
In this case, I think there would not be any user collision. Can anyone please suggest how I would create such unique temp directories associated with each browser session in the file upload process? Something along the lines of UPLOAD_FOLDER = '/path/to/the/uploads/user1_session', etc for each unique user?
OK, so lacking further information and any sort of view of what your code/program looks like, this is what I would recommend at the moment.
I am relatively new to programming as well, so this might not be the best answer. But in my experience you really, really do not want to be creating multiple directories per user/per session. That is a bad idea. This is where databases come in handy.
Now, in regards to your problem, the easiest/fastest way to resolve this issue is to look into how password salting and hashing is done.
Just hash and salt your filenames.
Here is a link that provides a simple yet thorough explanation of how it is done.
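Applied to filenames, the idea looks roughly like this; a minimal sketch, where salted_name is a hypothetical helper (SHA-256 is used here as the hash):

import hashlib
import os
import uuid

def salted_name(filename):
    # Hypothetical helper: salt the original name with a random UUID and
    # hash the result, keeping the extension so the file type is preserved.
    ext = os.path.splitext(filename)[1]
    salt = uuid.uuid4().hex
    digest = hashlib.sha256((salt + filename).encode()).hexdigest()
    return digest + ext

# e.g. in the upload view:
# file.save(os.path.join(UPLOAD_FOLDER, salted_name(file.filename)))

Store the mapping from the hashed name back to the original filename (and its owner) in your database, which is where that bookkeeping belongs anyway.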
Using the Python Drive API, I am attempting to remove any permissions a user has on a drive, folder, or file given their email. However, to do this it seems as though I must query all drives, then all files from all drives, then all permissions from all files. Only then can I comb every file's permissions to see if the ID of the user on the permission matches the ID of the user I want to remove permissions from. Is there an easier way to do this?
It's easy if you only want to deal with the files owned by a user, but finding all the objects that a user has permission to access is not an easy thing to do: presumably you want specific writer/editor permissions, not "anyone in the organisation can edit" permissions. In our GSuite domain there are tens of millions of Drive files, so this is an infeasible task.
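For that easy case (My Drive files the user has write access to), the pattern looks roughly like this; a sketch, assuming google-api-python-client with an already-authorized Drive v3 service object, and a placeholder email:

user_email = "user@example.com"  # placeholder

page_token = None
while True:
    # List files the user can write to, together with their permissions.
    # Note: the permissions field is not populated for shared-drive items,
    # which need a separate flow.
    resp = service.files().list(
        q=f"'{user_email}' in writers",
        fields="nextPageToken, files(id, permissions(id, emailAddress))",
        pageToken=page_token,
    ).execute()
    for f in resp.get("files", []):
        for perm in f.get("permissions", []):
            if perm.get("emailAddress") == user_email:
                service.permissions().delete(
                    fileId=f["id"], permissionId=perm["id"]
                ).execute()
    page_token = resp.get("nextPageToken")
    if page_token is None:
        break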
A workaround for you is to move the user into an OU that does not have the Drive App enabled. That removes all drive access for the user, though it's not really what you asked for.
I'm faced with the following problem:
The users have some files that need syncing, so I'm writing a script that copies the encrypted files from a user's directory to a temporary directory on the server before they get distributed to the other 5 servers.
The initial copy is done by creating a folder with the user's name and putting the files there.
The users are free to change usernames, so if someone changes his username to something nasty, the server(s) is/are owned.
I have to use the usernames for folder names because the script that does the syncing uses the folder username for metadata of some sort.
So, is there any way to escape the usernames and make sure that everything is created under the master folder?
As nrathaus suggested, you could use os.path.normpath to get a "normalized" path and check it for security issues.
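A minimal sketch of that check; MASTER is a placeholder for your master folder, and the rule is that the normalized path must land directly under it:

import os

MASTER = "/srv/sync/users"  # placeholder master folder, no trailing slash

def safe_user_dir(username):
    candidate = os.path.normpath(os.path.join(MASTER, username))
    # Rejects names like "..", "a/../../etc", and absolute paths, because
    # after normalization they no longer sit directly under MASTER.
    if os.path.dirname(candidate) != MASTER:
        raise ValueError("unsafe username: %r" % username)
    return candidate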
I want to make my own static file view that returns the file defined in the GET request. The file must be in an extra directory. The URL must look like /e?s=NAME_OF_FILE. My problem is that hackers can use this like /e?s=/PATH/TO/DATABASE to get any file from the server. I already have a workaround, but I think there are better solutions.
My code:
import os

# script_path and filename come from the request handler.
path = os.path.abspath(os.path.join(script_path, filename))
if path.startswith(script_path):
    pass  # Good: the resolved path stays inside script_path
else:
    pass  # Bad: likely a path traversal attempt
This is for "hidden static files" that should not be handled by the web server.
What you are doing is not of much help. Some things you could do:
In the web server, turn off directory listing so that the 'hacker' does not get a list of all files in that directory.
Instead of exposing the actual filename to the outside world, you could take the filename, generate an MD5 hash of it, store this mapping somewhere on your servers, and expose the MD5 hash as the filename. So it becomes /e?s=MD5_HASH_OF_FILENAME. What this does is make it extremely difficult for the 'hacker' to guess the filename. Brute force does not help, as MD5 hashes are not easy to guess. So in effect, only people who have somehow been sent this URL will have access to it (a minimal sketch of this mapping appears below).
You can expose this static-file-viewing API only to authenticated users rather than making it a public API. You can use the @login_required decorator.
Finally, enable HTTPS on your webserver.
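Here is the promised sketch of the hash-to-filename mapping; FILE_MAP is a placeholder for whatever persistent store you actually use (a database table, for instance):

import hashlib

FILE_MAP = {}  # placeholder; persist this in a real store

def register_file(filename):
    # Map an MD5 hash of the filename to the filename itself.
    key = hashlib.md5(filename.encode()).hexdigest()
    FILE_MAP[key] = filename
    return key  # expose this as /e?s=<key>

def lookup_file(key):
    # Returns None for unknown hashes, so nothing outside the map is served.
    return FILE_MAP.get(key)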