I'm new to Python and have been using it for a few months. One of my morning tasks is to encrypt a set of files and upload them to an FTP server. It's a very tedious task I was hoping to automate.
The current keys came from the program PGP Desktop, which I used to create a public/private key pair for encrypting files before uploading them to the FTP server. The same keys are used every day.
Is there a way to encrypt a file from Python using the already existing key(s) on my machine (I know where they're kept)? The only articles I've found on SO involve creating a new key in Python. I just want to bypass the morning ritual of right-click > 'Secure file with key…'. We currently use no signing key, so it's just the encryption key I need to use.
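Since the keys already live in the local keyring, one low-friction option is to shell out to the gpg command line from Python and then upload with the standard-library ftplib. This is only a sketch: the recipient key ID, host, and credentials below are placeholders, and it assumes gpg is on the PATH with your keys imported.

```python
import subprocess
from ftplib import FTP

def build_gpg_command(recipient, infile, outfile):
    """Build the argv for encrypting infile to outfile for an existing key."""
    return [
        "gpg", "--batch", "--yes",   # no interactive prompts
        "--recipient", recipient,    # key ID already in your keyring
        "--output", outfile,
        "--encrypt", infile,
    ]

def encrypt_and_upload(files, recipient, host, user, password):
    """Encrypt each file with the existing public key, then upload the .gpg files."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        for path in files:
            out = path + ".gpg"
            subprocess.run(build_gpg_command(recipient, path, out), check=True)
            with open(out, "rb") as fh:
                ftp.storbinary("STOR " + out, fh)
```

The python-gnupg wrapper offers the same thing (`gpg.encrypt_file(..., recipients=[...])`) without subprocess, if you'd rather not build commands by hand. Either way, a scheduled task can then replace the morning ritual entirely.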
I recently managed to restore an old COLX wallet by importing a mobile backup file into the colx-qt core %appdata% folder.
After resyncing the blockchain I changed the rpcpassword and the wallet's encryption password, tested both of these and everything was in good order.
For testing purposes, I decided to dump another encrypted backup file (wallet.dat) into the %appdata% folder without removing the restored one first, resynced the blockchain, tried the passwords, and found that both were now incorrect.
I removed both backups, resynced the blockchain anew, set the rpcpassword and wallet unlock password to those associated with the first backup, and imported that backup again, but both passwords were still incorrect, so I lost my access to that mobile backup as well.
Since I happen to have an older wallet.json file containing multiple addresses, several pubkeys and ckeys as well as the mkey (in the format {nID=1; encrypted_key:"the encrypted key"; nDerivationIterations:"62116 or some similar number"; version:""; etc etc}), could I use this information to restore my access to the wallet?
If YES, how exactly must I go about doing it?
Thank you in advance for your assistance!
I haven't tried anything else so far because I am trying to understand how this happened, what caused the change, and how to fix it before I go ahead and add even more muddy details to the issue.
I want to build an upload tool in Plotly Dash that will take any file, encrypt it using Python's cryptography.fernet, and upload it. To circumvent the limitations of the dcc.Upload component, I am using the dash-uploader module.
The standard behavior of this module, however, is the following: the upload component takes the file, uploads it to the server directly, and only lets me encrypt it on the server side. I want to modify this behavior so that the encryption is performed locally and the file is transferred already encrypted.
Please suggest the best way to modify the standard behavior to allow modifying the file prior to its actual upload.
thanks
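For what it's worth, the Fernet step itself is the easy part; the hard part is only where it runs. A minimal sketch of the encryption (the dash-uploader callback wiring is omitted, and the output-path convention is just an assumption):

```python
from cryptography.fernet import Fernet

def encrypt_file(path, key):
    """Encrypt the file at `path`, writing the token to `path + '.enc'`."""
    f = Fernet(key)
    with open(path, "rb") as fh:
        token = f.encrypt(fh.read())
    out = path + ".enc"
    with open(out, "wb") as fh:
        fh.write(token)
    return out
```

Note that Dash callbacks written in Python always execute on the server, so encrypting "locally" in the user's browser before the bytes leave the machine would need JavaScript (e.g., a clientside callback or a custom component) rather than a change to the Python side.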
I am working for a company that currently stores PDF files on a remote drive and then manually inserts values found within these files into an Excel document. I would like to automate the process using Zapier and make it scalable (we receive a large number of PDF files). Would anyone know of any applications, ideally free, for converting PDFs into Excel documents that integrate with Zapier? Alternatively, would it be possible to write a Python script in Zapier to extract the information and store it in an Excel file?
This option came to mind. I'm using Google Drive as an example; you didn't say what you're using for storage, but Zapier should have an option for it.
Use CloudConvert or a doc parser (depends on what you want to pay; CloudConvert at least gives you some free conversions per month, so that may be the closest you can get).
Create a Zap with these steps:
Trigger on new file in drive (Name: Convert new Google Drive files with CloudConvert)
Convert file with CloudConvert
Those are the two Zapier options I can find. But you could also do it in Python from your desktop by following something like this idea, then set up a scheduled task (e.g., in Windows Task Scheduler) to trigger the upload/download.
Unfortunately, it doesn't seem that you can import JS/Python libraries into Zapier, though I may be wrong about that. If you can, or find a way to do so, then just use PDFMiner with "Code by Zapier". A technician might have to confirm this, though; I've never gotten libraries to work in Zaps.
Hope that helps!
I am using Python and Boto in a script to copy several files from my local disks, turn them into .tar files, and upload them to AWS Glacier.
I based my script on:
http://www.withoutthesarcasm.com/using-amazon-glacier-for-personal-backups/#highlighter_243847
Which uses the concurrent.ConcurrentUploader
I am just curious how sure I can be that the data is all in Glacier after successfully getting an archive ID back. Does the ConcurrentUploader do any kind of hash checking to ensure all the bits arrived?
I want to remove the files from my local disk but fear I should be implementing some kind of hash check... I am hoping this is happening under the hood. I have tried and successfully retrieved a couple of archives and was able to un-tar them. Just trying to be very cautious.
Does anyone know if there is checking under the hood that all pieces of the transfer were successfully uploaded? If not, does anyone have any python example code of how to implement an upload with hash checking?
Many thanks!
Boto Concurrent Uploader Docs:
http://docs.pythonboto.org/en/latest/ref/glacier.html#boto.glacier.concurrent.ConcurrentUploader
UPDATE:
Looking at the actual Boto code (https://github.com/boto/boto/blob/develop/boto/glacier/concurrent.py), line 132 appears to show that the hashes are computed automatically, but I am unclear what
[None] * total_parts
means. Does this mean that the hashes are indeed calculated or is this left to the user to implement?
Glacier itself is designed to try to make it impossible for any application to complete a multipart upload without an assurance of data integrity.
http://docs.aws.amazon.com/amazonglacier/latest/dev/api-multipart-complete-upload.html
The API call that returns the archive id is sent with the "tree hash" -- a SHA-256 of the SHA-256 hashes of each MiB of the uploaded content, coalesced as a tree up to a single hash -- and the total bytes uploaded. If these don't match what was actually uploaded in each part (each of which was, meanwhile, also validated against SHA-256 hashes and sub-tree hashes as it was uploaded), then the "complete multipart upload" operation fails.
It should be virtually impossible by the design of the Glacier API for an application to "successfully" upload a file that isn't intact and yet return an archive id.
Yes, the concurrent uploader does compute a tree hash of each part and a linear hash of the entire uploaded payload. The line:
[None] * total_parts
just creates a list containing total_parts None values. Later, the None values are replaced by the appropriate tree hash for each part. Then, finally, the list of tree hash values is used to compute the final linear hash of the entire upload.
So, there are a lot of integrity checks happening as required by the Glacier API.
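That coalescing can be sketched with nothing but hashlib. This is a simplified, single-threaded version of the idea (1 MiB leaf chunks, pairwise SHA-256 until one hash remains), not boto's actual implementation:

```python
import hashlib

MIB = 1024 * 1024  # Glacier tree hashes use 1 MiB leaf chunks

def tree_hash(data):
    """Glacier-style tree hash: sha256 each 1 MiB chunk, combine pairwise."""
    chunks = [data[i:i + MIB] for i in range(0, len(data), MIB)] or [b""]
    # This is the role of [None] * total_parts in boto: one slot per part,
    # filled in (possibly out of order) as each part's hash is computed.
    level = [hashlib.sha256(c).digest() for c in chunks]
    while len(level) > 1:
        paired = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                paired.append(hashlib.sha256(level[i] + level[i + 1]).digest())
            else:
                paired.append(level[i])  # odd hash out is promoted as-is
        level = paired
    return level[0].hex()
```

For anything under 1 MiB the tree hash is just the plain SHA-256 of the data, which makes it easy to sanity-check against `hashlib.sha256` directly.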
I want to design an application that reads a folder of text files and shows the user their contents. Three problems arise: one, I need the folder containing the text files to be encrypted, which I don't know how to do; two, I need a way to read the encrypted files without revealing the key in the Python code, so I guess C would be the best way to do that, even if I don't like that approach (any suggestions are welcome, using Python if possible); and three, I need a way to add files to the folder and then send the encrypted folder along with the program.
Is there any way to do those things without ever revealing the key or giving the user any way to read the folder except through my program?
Thanks in advance for any help!
EDIT: Also, is there a way to use C to encrypt and decrypt files so that I can put the key in the compiled file and distribute that with my program?
I think the best thing to do would be to encrypt the individual text files using GPG, one of the strongest encryption systems available (and free!). You can get several Python libraries for this; I recommend python-gnupg. Also, you could probably just reference the file where the key is located and distribute it along with the application. If you want to include a preset key and not have your users be able to find it, you are going to have a very hard time. How about keeping the key on a server you control that somehow only accepts requests from copies of your application? I don't know how you'd make that secure from Python, though.
About adding files to the folder and sending it along with the program: perhaps you aren't considering the most optimal solution? There are plenty of Python data structures that can be serialized and would accomplish most of what you describe in your post.