Get MD5 from ownCloud with WebDAV - Python

I'm using WebDAV to synchronize files into my ownCloud, and everything works fine.
But I need to get the MD5 of the files in my result list. I'm not having any success doing this, and I found nothing in ownCloud's documentation. Is there a way to retrieve the MD5 of a file stored on ownCloud?
I imagine it is some setting in ownCloud, or a header that should be sent with the request, but I really couldn't find anything on how to achieve this.

This is a method to get the hash on the server side (I'm not sure it is the most correct way):
\OC\Files\Filesystem::hash('md5',$path_to_file);
(e.g. 8aed7f13a298b27cd2f9dba91eb0698a)
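From the client side, newer ownCloud servers can also report checksums over WebDAV itself. A minimal sketch, assuming your server exposes the oc:checksums property (whether it does, and whether MD5 is among the reported algorithms, depends on the server version and configuration; the URL and credentials are placeholders):
import requests

url = 'https://example.com/remote.php/webdav/path/to/file.txt'  # placeholder
propfind = '''<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:prop><oc:checksums/></d:prop>
</d:propfind>'''

resp = requests.request('PROPFIND', url, data=propfind,
                        headers={'Depth': '0'},
                        auth=('user', 'password'))  # placeholder credentials
# On success, look for <oc:checksum>...</oc:checksum> entries in resp.text,
# e.g. 'SHA1:... MD5:... ADLER32:...' depending on server configuration.
print(resp.status_code)
print(resp.text)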

Related

How to save program settings to the computer?

I'm looking to store some individual settings to each user's computer. Things like preferences and a license key. From what I know, saving to the registry could be one possibility. However, that won't work on Mac.
One of the easy but not so proper techniques is just saving them to a settings.txt file and reading that on load.
Is there a proper way to save this kind of data? I'm hoping to use my wx app on Windows and Mac.
There is no proper way. Use whatever works best for your particular scenario. Some common ways for storing user data include:
Text files (e.g. Windows INI, cfg files)
binary files (sometimes compressed)
Windows registry
system environment variables
online profiles
There's nothing wrong with using text files. A lot of proper applications use them, precisely because they are easy to implement and human-readable. The only thing you need to worry about is having some form of error handling in place, in case the user decides to replace your config file's contents with rubbish.
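For example, a minimal sketch of the text-file approach with that error handling in place (the file location, format, and default keys are all made up for illustration):
import json
import os

DEFAULTS = {'theme': 'light', 'license_key': ''}
SETTINGS_PATH = os.path.expanduser('~/.myapp/settings.txt')  # hypothetical

def load_settings():
    try:
        with open(SETTINGS_PATH) as f:
            loaded = json.load(f)
    except (IOError, ValueError):
        # Missing file, or the user replaced it with rubbish: use defaults.
        return dict(DEFAULTS)
    settings = dict(DEFAULTS)
    settings.update(loaded)  # missing keys fall back to defaults
    return settings

def save_settings(settings):
    try:
        os.makedirs(os.path.dirname(SETTINGS_PATH))
    except OSError:
        pass  # directory already exists
    with open(SETTINGS_PATH, 'w') as f:
        json.dump(settings, f, indent=2)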
Take a look at Data Persistence in the Python docs. One option, as you said, is to persist them to a simple text file. You can also save your data in a serialization format such as pickle (see the previous link) or JSON, though this gets inefficient or overly complex once you have many keys and values.
Also, you could save user preferences in an .ini file using Python's ConfigParser module, as shown in this SO answer.
Finally, you can use a database like sqlite3, which is simple to work with from your code for saving and retrieving preferences.
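As a rough sketch of the ConfigParser route (the file location and keys are placeholders):
import os
import ConfigParser  # 'configparser' on Python 3

path = os.path.expanduser('~/.myapp.ini')  # hypothetical per-user location

# Write preferences once.
config = ConfigParser.SafeConfigParser()
config.add_section('preferences')
config.set('preferences', 'theme', 'dark')
config.set('preferences', 'license_key', 'XXXX-YYYY')
with open(path, 'w') as f:
    config.write(f)

# Read them back on the next start; a missing file is silently ignored.
config = ConfigParser.SafeConfigParser()
config.read(path)
print(config.get('preferences', 'theme'))  # -> dark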

Guarantee of data integrity with Python Boto and Amazon Glacier concurrent.ConcurrentUploader?

I am using Python and Boto in a script to copy several files from my local disks, turn them into .tar files, and upload them to AWS Glacier.
I based my script on:
http://www.withoutthesarcasm.com/using-amazon-glacier-for-personal-backups/#highlighter_243847
which uses concurrent.ConcurrentUploader.
I am just curious how sure I can be that the data is all in Glacier after successfully getting an archive ID back. Does ConcurrentUploader do any kind of hash checking to ensure all the bits arrived?
I want to remove the files from my local disk but fear I should be implementing some kind of hash check... I am hoping this is happening under the hood. I have successfully retrieved a couple of archives and was able to un-tar them. Just trying to be very cautious.
Does anyone know if there is checking under the hood that all pieces of the transfer were successfully uploaded? If not, does anyone have example Python code of an upload with hash checking?
Many thanks!
Boto Concurrent Uploader Docs:
http://docs.pythonboto.org/en/latest/ref/glacier.html#boto.glacier.concurrent.ConcurrentUploader
UPDATE:
Looking at the actual Boto code (https://github.com/boto/boto/blob/develop/boto/glacier/concurrent.py), line 132 appears to show that the hashes are computed automatically, but I am unclear what the
[None] * total_parts
means. Does this mean that the hashes are indeed calculated or is this left to the user to implement?
Glacier itself is designed to try to make it impossible for any application to complete a multipart upload without an assurance of data integrity.
http://docs.aws.amazon.com/amazonglacier/latest/dev/api-multipart-complete-upload.html
The API call that returns the archive id is sent with the "tree hash" -- a SHA-256 of the SHA-256 hashes of each MiB of the uploaded content, calculated as a tree coalescing up to a single hash -- and the total bytes uploaded. If these don't match what was actually uploaded in each part (each of which was, meanwhile, also validated against SHA-256 hashes and sub-tree hashes as it was uploaded), then the "complete multipart upload" operation will fail.
It should be virtually impossible by the design of the Glacier API for an application to "successfully" upload a file that isn't intact and yet return an archive id.
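If you want to double-check locally before deleting anything, the tree hash described above is easy to reproduce. A minimal sketch (boto also ships helpers for this in boto.glacier.utils, if I recall correctly):
import binascii
import hashlib

MIB = 1024 * 1024

def glacier_tree_hash(path):
    # Leaf level: SHA-256 of each 1 MiB chunk of the file.
    hashes = []
    with open(path, 'rb') as f:
        chunk = f.read(MIB)
        while chunk:
            hashes.append(hashlib.sha256(chunk).digest())
            chunk = f.read(MIB)
    if not hashes:
        hashes = [hashlib.sha256(b'').digest()]
    # Coalesce pairwise until a single root hash remains.
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes) - 1, 2):
            paired.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
        if len(hashes) % 2:
            paired.append(hashes[-1])  # odd hash out is carried up unchanged
        hashes = paired
    return binascii.hexlify(hashes[0])

# Compare against the x-amz-sha256-tree-hash Glacier returned for the archive.
print(glacier_tree_hash('backup.tar'))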
Yes, the concurrent uploader does compute a tree hash of each part and a linear hash of the entire uploaded payload. The line:
[None] * total_parts
just creates a list containing total_parts None values. Later, each None value is replaced by the tree hash of the corresponding part. Then, finally, the list of tree hash values is used to compute the final linear hash of the entire upload.
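In other words, it is just a pre-sized placeholder list:
part_hashes = [None] * 4    # -> [None, None, None, None]
part_hashes[2] = 'hash-2'   # filled in as part 2 finishes uploading
# once no None entries remain, the part hashes are combined into one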
So, there are a lot of integrity checks happening as required by the Glacier API.

GAE Python: how to check file type on upload

So, I'm trying to create a Google App Engine (Python) app that allows people to share files. I have file uploads working well, but my concern is checking the file extension and making sure, primarily, that the files are read-only, and secondly, that they are of the file type specified. These will not be image files, as I know there are a lot of image resources already. Specifically, .stl mesh files, but I'd like to be able to do this more generally.
I know there are modules that can do this; python-magic seems able to, for example, but I can't seem to find any that I'm able to import without hitting LoadModuleRestricted. I'm considering writing my own parser, but that would be a lot of work for such a common (I'm assuming) issue.
Anyway, I'm totally stumped. This is my first Stack Overflow question, so I hope I'm doing well etiquette-wise. Let me know, and thanks!
It sounds like you want to read the first few bytes of the uploaded file to verify that its signature matches the purported MIME type. Assuming that you're uploading to Blobstore (i.e., via a URL obtained from blobstore.get_upload_url()), then once you're redirected to the upload handler whose path you gave to get_upload_url, you can open the blob using a BlobReader, then read and verify the signature.
The Blobstore sample app lays out the framework. You'd glue in code in UploadHandler once you have blob_info (using blob_info.key() to open the blob).
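As a rough sketch of what that handler could look like for .stl files (the form field name, redirect URL, and the STL heuristics are my assumptions, not part of the sample app; binary STL has an arbitrary 80-byte header plus a 4-byte triangle count, while ASCII STL conventionally starts with "solid"):
import struct

from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

def looks_like_stl(head, total_size):
    """Heuristic signature check for STL uploads; a sketch, not bulletproof."""
    if head.lstrip().startswith('solid'):  # ASCII STL convention
        return True
    if len(head) >= 84:  # binary STL: 80-byte header + uint32 triangle count
        (count,) = struct.unpack('<I', head[80:84])
        return total_size == 84 + 50 * count  # 50 bytes per triangle
    return False

class UploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        blob_info = self.get_uploads('file')[0]  # 'file' is the form field
        head = blobstore.BlobReader(blob_info.key()).read(84)
        if not looks_like_stl(head, blob_info.size):
            blob_info.delete()  # reject uploads that fail the signature check
            self.error(400)
            return
        self.redirect('/serve/%s' % blob_info.key())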

How to use python to read an encrypted folder

I want to design an application that reads a folder of text files and shows the user its contents. Three problems arise: first, I need the folder containing the text files to be encrypted, which I don't know how to do; second, I need a way to read the encrypted files without revealing the key in the Python code, so I guess C would be the best way to do that, even if I don't like it (any suggestions are welcome, using Python if possible); and third, I need a way to add files to the folder and then send the encrypted folder along with the program.
Is there any way to do those things without ever revealing the key or giving the user the possibility to read the folder except using my program?
Thanks in advance for any help!
EDIT: Also, is there a way to use C to encrypt and decrypt files so that I can put the key in the compiled file and distribute that with my program?
I think the best thing to do would be to encrypt the individual text files using GPG, one of the strongest encryption systems available (and free!). You can get several Python libraries for this; I recommend python-gnupg. You could also just reference the file where the key is located and distribute it along with the application. But if you want to include a preset key and not have your users be able to find it, you are going to have a very hard time. How about keeping the key on a server you control that somehow only accepts requests from copies of your application? I don't know how you'd make that secure from Python, though.
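As a minimal sketch of the python-gnupg route, using symmetric encryption so no key pair is needed (note the passphrase still ends up visible in your source, which is exactly the hard problem above; symmetric='AES256' may need to be just symmetric=True on older versions of the library):
import gnupg

gpg = gnupg.GPG()  # uses the default GnuPG home directory

# Encrypt a file with a passphrase.
with open('notes.txt', 'rb') as f:
    gpg.encrypt_file(f, recipients=None, symmetric='AES256',
                     passphrase='my-secret', output='notes.txt.gpg')

# Decrypt it again from inside the application.
with open('notes.txt.gpg', 'rb') as f:
    result = gpg.decrypt_file(f, passphrase='my-secret')
print(result.ok)         # True on success
plaintext = str(result)  # the decrypted contents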
About adding files to the folder and sending it along with the program: perhaps you aren't thinking of the best solution? There are plenty of Python data structures that can be serialized and would accomplish most of what you describe in your post.

How do I use python-magic to get the file type of a file over the Internet?

Usually I would download it to a StringIO object, then run this:
m = magic.Magic()
m.from_buffer(thefile.read(1024))
But this time I can't download the file, because the image might be 20 megabytes. I want to use python-magic to find the file type without downloading the entire file.
If python-magic can't do it, is the next best way to look at the MIME type in the response headers? And how accurate is that?
I need accuracy.
You can call read(1024) without downloading the whole file:
thefile = urllib2.urlopen(someURL)
Then, just use your existing code. urlopen returns a file-like object, so this works naturally.
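Putting the two together, a minimal sketch (someURL is a placeholder; mime=True asks python-magic for a MIME type rather than a text description):
import urllib2
import magic

thefile = urllib2.urlopen(someURL)  # placeholder URL
m = magic.Magic(mime=True)
print(m.from_buffer(thefile.read(1024)))  # e.g. 'image/jpeg'
thefile.close()  # only ~1 KB was read; the 20 MB never comes down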
If it is one of the common image formats like PNG or JPG, and the server is a reliable one, then you can use the 'Content-Type' header to get what you are looking for.
But this is not as reliable as passing a portion of the file to python-magic, because if the server has not identified the proper format, it may have set the header to application/octet-stream. That is more common with video formats, but for pictures I think Content-Type is usually okay.
Sorry, I can't find any statistics or research on Content-Type's accuracy. The suggested answer of downloading only part of the file is a good option too.
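If you want to combine the two, a rough sketch: trust Content-Type unless the server punts to application/octet-stream, and only then fall back to sniffing the first bytes (someURL as above):
import urllib2
import magic

resp = urllib2.urlopen(someURL)  # placeholder URL
ctype = resp.info().gettype()    # e.g. 'image/jpeg'
if ctype == 'application/octet-stream':
    # Server didn't identify the format; sniff the first bytes instead.
    ctype = magic.Magic(mime=True).from_buffer(resp.read(1024))
resp.close()
print(ctype)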
