GPG decryption in AWS Lambda error code 2 - python

I'm trying to create an AWS Lambda (in python, although my problems are probably not python-related) that will, among other things, decrypt a PGP file stored in S3.
I have a script that runs fine locally (on an ubuntu machine). I have adapted the relevant parts of that script into the lambda script. I'm using python-gnupg, and have created a layer to get to that functionality.
I created a CentOS VM on that ubuntu machine, and put gpg on that.
I have a deployment zip that I think is correct (contents are the script, bin/gpg, lib/{libgpg-error.so.0,libreadline.so.7,libcrypt.so.20,libassuan.so.0}; the gpg executable and libraries are all from that CentOS VM). If I, for instance, remove libassuan from that, I do get an error about that being a missing dependency, hence my believing that the zip is correctly created.
When I deploy the lambda, the code shows up correctly and seems to run (I did have to set it to use the python-gnupg layer, of course).
This is still in basic testing, so the file I'm trying to decrypt is the same one I used on the ubuntu box test, and is being retrieved from S3. The decryption key and passphrase are being retrieved from AWS Parameter Store and are, as near as I can tell, being retrieved correctly (the latter is definitely correct; the former is the correct length with the correct start and correct end). And I do not get an error adding the key (not sure if I would, I guess).
So, everything looks right, coming in. Getting to the decryption itself, we have:
gpg = gnupg.GPG(gnupghome=f"{targetDir}/..", gpgbinary="./bin/gpg")
key_data = open(keyFileName, 'rb').read()
priv_key = gpg.import_keys(key_data)
decrData = gpg.decrypt_file(contents, passphrase=pgpPassphrase, always_trust=True, extra_args=[ '--yes' ])
if not decrData.ok:
    logger.error(f"decryption failed: {decrData.status}")
As I implied earlier, this fails with error code 2, and the printed status message is 'decryption failed'. Totally unhelpful.
Unsurprisingly, the decrData object has zero data.
FWIW, always_trust and extra_args, as shown, did not change anything (nor did passing both as extra_args=['--yes', '--always-trust']). I was getting exactly the same results before adding those.
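One way to get more than the bare 'decryption failed' is to dump the `stderr` attribute that python-gnupg attaches to its result objects; both the `decrypt_file()` and `import_keys()` results carry `.status` and `.stderr`. A small helper (the name is made up) along these lines:

```python
def describe_gpg_failure(result):
    """Summarize a failed python-gnupg operation.

    Result objects from decrypt_file()/import_keys() expose .status (a short
    summary) and .stderr (gpg's full diagnostic output), which usually names
    the real problem (missing secret key, bad passphrase, wrong homedir, ...).
    """
    status = getattr(result, 'status', None)
    stderr = getattr(result, 'stderr', '')
    return f"status={status!r}\n--- gpg stderr ---\n{stderr}"
```

Logging `decrData.stderr` from the Lambda would likely have shown gpg's actual complaint directly, rather than the generic status string.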
So, all that being said, the question is, does anyone have any suggestions on something I might have done wrong, or what else I can check to see why I'm getting this error?
Thanks.
Update:
Ok, I made a mistake here. I did not have this working locally; I had a different version (using PGPy) working locally.
Testing locally yesterday, I figured out that my problem was that the key wasn't being imported successfully. The root of that seems to have been that the key was in the wrong format (binary, sent in as uuencoded in the Parameter Store). So I re-exported the key, adding the argument '--armor' to the 'gpg --export-secret-keys' command, then added 'passphrase=...' to the gpg.import_keys() call, and that worked locally (at least to get the key imported; I haven't actually checked the gpg.decrypt() or gpg.decrypt_file() command).
However, taking that exported key and putting it into the Parameter Store... It looks like I am getting it back from the parameter store correctly (I've checked the beginning, the ending, and spots in the middle - including where I joined the two parameters from the Parameter Store), but the key is still not being imported correctly when I run my lambda. FWIW, I did try adding "extra_args=[ '--yes', '--always-trust']" to the gpg.import_keys(), which did nothing. I also tried uploading the key file to S3, and getting it back from there, which also did nothing.
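Since the failure here turned out to be at the import step, it may help to fail fast when `import_keys()` silently imports nothing. Its `ImportResult` exposes `.count`, `.fingerprints`, and `.stderr`; a sketch of a guard (the helper name is invented):

```python
def require_imported_key(import_result):
    """Raise immediately if python-gnupg's import_keys() imported no keys.

    ImportResult.count is the number of keys imported; .fingerprints lists
    what actually landed in the keyring; .stderr holds gpg's own complaint
    (e.g. 'no valid OpenPGP data found' for a mangled or binary key).
    """
    if not getattr(import_result, 'count', 0):
        raise RuntimeError(
            f"key import failed:\n{getattr(import_result, 'stderr', '')}")
    return import_result.fingerprints
```

Checking the import result right after `gpg.import_keys()` would surface a bad key format before decryption is ever attempted.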
Again, I welcome any suggestions for other things to try.
Thanks again.
Update 2: I also tried supplying the key file (ascii-armored) as part of the distribution zip. That kind of worked (though I did have to put the key file into a subdirectory; leaving it at the top level somehow led to a permissions problem on the main python file), in that the file was there and seemed to be read correctly. However, the key still failed to import.
I seem to be stuck where I was with the PGPy solution, where the code worked perfectly, when run locally, but not when I ran it in AWS.
Update 3: Finally did run my local test to completion and, as expected, the file decrypted perfectly. Wish I knew why I can't import the key on AWS.

Several things to add here.
Part of why I had gotten confused about testing locally was that I had gotten it running under the Python 3.7 runtime (which has GPG as part of the VM). So that's a partial answer.
I was trying to get it running under 3.9, though, to be current. I had tried to solve this via a Docker image. I created the image, but, for terrible reasons, was unable to deploy it. So that's why I was trying via a deployment zip.
As I mentioned, I was using a version of GPG (and attendant libraries) that was pulled off a CentOS VM. This did get closer than my previous attempt, which had used GPG/libraries from Ubuntu. That had fallen apart on libc version.
What finally occurred to me today, though, was that maybe I could pull apart my Docker image, and grab GPG/libraries from that. Using 'docker save --output="filename.tar" image-name', I was able to dump the files out, so I sifted through that to get the relevant files and create a new deployment zip.
The short of it is that that worked perfectly.
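The "sift through the docker save tarball" step can be scripted. This is only a sketch: it assumes the classic `docker save` layout where each layer is a nested `layer.tar` (newer OCI-style layouts store layers under `blobs/sha256/` instead), and the wanted file list would be `gpg` plus whatever libraries `ldd` reports for it:

```python
import os
import tarfile

def extract_from_docker_save(image_tar, wanted, out_dir):
    """Pull specific files (e.g. usr/bin/gpg and its shared libraries)
    out of a 'docker save' archive into out_dir.

    'wanted' is a set of paths as they appear inside the image, without a
    leading slash. Later layers overwrite earlier ones, matching how Docker
    itself resolves the final filesystem.
    """
    os.makedirs(out_dir, exist_ok=True)
    found = {}
    with tarfile.open(image_tar) as image:
        for member in image.getmembers():
            if not member.name.endswith("layer.tar"):
                continue  # skip manifest/config entries; only layers hold files
            with tarfile.open(fileobj=image.extractfile(member)) as layer:
                for entry in layer.getmembers():
                    name = entry.name.lstrip("./")
                    if entry.isfile() and name in wanted:
                        dest = os.path.join(out_dir, os.path.basename(name))
                        with open(dest, "wb") as out:
                            out.write(layer.extractfile(entry).read())
                        found[name] = dest
    return found
```

The extracted binaries can then be zipped under `bin/` and `lib/` exactly as in the deployment zip described above.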

Related

Getting compiler error when trying to verify a contract importing from @uniswap/v3-periphery

I'm trying to perform a simple Swap from DAI to WETH with Uniswap in my own SmartContract on the Kovan Testnet. Unfortunately my transaction keeps getting reverted even after setting the gas limit manually.
I also discovered that I can not verify the contract on Kovan via etherscan-API nor manually. Instead I keep getting this error for every library I import:
Source "@uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol" not found: File import callback not supported
Accordingly, I have the feeling something is going wrong during compilation, and I'm stuck without any further ideas for working out my problem.
Here are a couple of notes on what I've tried so far and how to reproduce:
Brownie Version 1.16.4, Tested on Windows 10 and Ubuntu 21.04
I've tried:
Importing libraries with Brownie package manager
Importing libraries with npm and using relative paths
All kinds of different compiler remappings in the brownie-config.yaml
Adding all dependency files to project folders manually
Here's a link to my code for reproducing my error:
https://github.com/MjCage/swap-demo
It'd be fantastic if someone could help.
It's very unlikely that something is "going wrong during compilation". If your contract compiles but what it does does not match the sources, you have found a very serious codegen bug in the compiler and you should report it so that it can be fixed quickly. From experience I'd say that it's much more likely that you have a bug in your contract though.
As for the error during verification - the problem is that to properly compile a multi-file project, you have to provide all the source files and have them in the right directories. This applies to library code as well, so if your contract imports ISwapRouter.sol, you need to also submit that file and all the files it in turn imports.
The next hurdle is that as far as I can tell, the multi-file verification option at Etherscan only allows you to submit files from a single directory so it only gets their names, not the whole paths (not sure if it's different via the API). You need Etherscan to see the file as @uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol but it sees just ISwapRouter.sol instead and the compiler will not treat them as the same (both could exist after all).
The right solution is to use the Standard JSON verification option - this way you submit the whole JSON input that your framework passes to the compiler and that includes all files in the project (including libraries) and relevant compiler options. The issue is that Brownie does not give you this input directly. You might be able to recreate it from the JSON it stores on disk (Standard JSON input format is documented at Compiler Input and Output JSON Description) but that's a bit of manual work. Unfortunately Brownie does not provide any way to request this on the command line. The only other way to get it that I know of is to use Brownie's API and call compiler.generate_input_json().
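If you do end up recreating the Standard JSON input by hand, the skeleton is small. This is only a sketch: the `settings` block (optimizer, evmVersion, remappings) must match what Brownie actually compiled with, or the resulting bytecode will not verify:

```python
from pathlib import Path

def build_standard_json(project_root, settings):
    """Assemble a solc Standard JSON input from every .sol file under
    project_root, keyed by its project-relative path - which is how the
    compiler (and Etherscan) will resolve imports like
    @uniswap/v3-periphery/contracts/interfaces/ISwapRouter.sol.
    """
    root = Path(project_root)
    sources = {
        p.relative_to(root).as_posix(): {"content": p.read_text()}
        for p in root.rglob("*.sol")
    }
    return {"language": "Solidity", "sources": sources, "settings": settings}
```

The dictionary serializes directly to the JSON that the Standard JSON verification form expects; dependency files under `@uniswap/...` need to be present on disk under those same relative paths.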
Since this is a simple project with just one contract and does not have deep dependencies, it might be easier for you to follow Jacopo Mosconi's answer and just "flatten" the contract by replacing all imports with sources pasted directly into the main contract. You might also try copying the file to your project dir and altering the import so that it only contains the file name, without any path component - this might pass the multi-file verification. Flattening is ultimately how Brownie and many other frameworks currently do verification and Etherscan's check is lax enough to allow sources modified in such a way - it only checks bytecode so you can still verify even if you completely change the import structure, names, comments or even any code that gets removed by the optimizer.
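A naive flattener for the single-contract case can be sketched like this; it handles only the plain `import "...";` form (not `import {X} from "...";`) and does no SPDX/pragma deduplication, which real flattening tools take care of:

```python
import re
from pathlib import Path

IMPORT_RE = re.compile(r'\s*import\s+"([^"]+)"\s*;')

def flatten(entry, root, seen=None):
    """Inline each imported file's source in place of its import statement.

    'root' is the directory that import paths are resolved against;
    'seen' guards against pasting the same file twice.
    """
    seen = set() if seen is None else seen
    lines = []
    for line in Path(entry).read_text().splitlines():
        m = IMPORT_RE.match(line)
        if m:
            target = (Path(root) / m.group(1)).resolve()
            if target not in seen:
                seen.add(target)
                lines.append(flatten(target, root, seen))
        else:
            lines.append(line)
    return "\n".join(lines)
```

The output is one self-contained .sol file that can be pasted into Etherscan's single-file verification form.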
The compiler can't find ISwapRouter.sol.
You can add the code of ISwapRouter.sol directly in your swap.sol and delete that import line from your code; the code is here: https://github.com/Uniswap/v3-periphery/blob/main/contracts/interfaces/ISwapRouter.sol

linux server hosting .py files that are reading .txt files but cant store in variable

I have a linux server.
It is reading files in a directory and doing things with the full text of the file.
I've got some code. it retrieves the file path.
And then I'm doing this:
for file in files:
    with open(file, 'r') as f:
        raw_data = f.read()
It's reading the file just fine, and I've used this exact code outside of the server and it worked as expected.
In this case, when run on the server, the above code is spitting out all the text to the terminal. But then raw_data == None.
Not the behavior I'm used to. I imagine it's something very simple, as I am new to linux in general.
But I want the text in the file to be stored in the 'raw_data' variable as a string.
Is there a special way I am to do this on linux? Googling so far has not helped much and I feel this is likely a VERY simple problem.
User error.
I thought, due to my noob status in linux, that perhaps the environment was causing weird behavior. But buried deep in the functions that use the data from the files was a print statement I had used a while back for testing. That was causing the output to screen.
As for the None being returned: it was being returned by another subfunction that had a try/except block in it and was failing. The variable being referenced had the same name (raw_data), so I thought it came from the file read, but it was actually from elsewhere.
Thanks, all who stopped by. User error for this one.
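The pitfall described is easy to reproduce in miniature; every name below is invented purely for illustration:

```python
def read_file(path):
    # The file read itself works fine and returns the full text.
    with open(path) as f:
        return f.read()

def parse(raw_data):
    # Stand-in for a subfunction that fails for an unrelated reason.
    raise ValueError("simulated failure in a subfunction")

def process(path):
    raw_data = read_file(path)      # raw_data now holds the file contents
    try:
        raw_data = parse(raw_data)  # this raises...
    except ValueError:
        raw_data = None             # ...and silently replaces the good value
    return raw_data                 # None, even though the read succeeded
```

Because both functions use the name `raw_data`, the `None` looks like it came from the file read when it actually came from the swallowed exception - logging inside the `except` block would have exposed it immediately.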

gnupg.GPG: Sorry, no terminal at all requested - can't get input

I am trying to encrypt a file using the Python gnupg library and it doesn't work!
Here is my code snippet:
import gnupg
gpg = gnupg.GPG(homedir='/home/datadev/')
recipients = ['realname@email.com']
f = open('/home/datadev/filename', 'rb')
status = gpg.encrypt(f, recipients)
/home/datadev is the folder where I have my .asc file and all .pubring files related to gpg.
After reading the post gpg: Sorry, no terminal at all requested - can't get input
I tried out the following option:
gpg=gnupg.GPG(options='')
but the error is always "gpg: Sorry, no terminal at all requested - can't get input"
It would be helpful if someone could advise me as to what is wrong.
The message about the terminal indicates that there is something wrong, but gnupg cannot say what, as there is no terminal. The python wrapper calls the gnupg executable, and apparently gnupg's output is not consistent enough for everything to come back with a clean return code.
So the problem is a different one. I faced the same issue. After passing in the recipients not as a list but just straight, it worked. The code documentation of encrypt gives an example for multiple recipients - just pass them as single parameters. Converting the list to a string would do the trick in the above example.
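In call-shape terms, the fix the answer describes looks roughly like this (`gpg` stands for a `gnupg.GPG` instance; the wrapper name is made up):

```python
def encrypt_for(gpg, data, recipients):
    """Pass each recipient as its own positional argument rather than as
    one list, by unpacking the list with *.

    gpg.encrypt(data, recipients)    # one argument: the whole list
    gpg.encrypt(data, *recipients)   # one argument per recipient
    """
    return gpg.encrypt(data, *recipients)
```

With a two-element list, `encrypt_for(gpg, data, ['alice@example.com', 'bob@example.com'])` ends up calling `gpg.encrypt(data, 'alice@example.com', 'bob@example.com')`.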
The correct answer is that the PGP/GPG feature needs to be activated in WHM. I had this feature working; then, after altering the features for an account's Package as defined in WHM, it stopped working. I went back and noticed this option had been unchecked. I checked it, ran my code again, and: success.

Extremely new user to Python. "No module named request" error while trying code to detect image subdomains in a website to extract them to a folder

I may sound rather uninformed writing this, and unfortunately, my current issue may require a very articulate answer to fix. Therefore, I will try to be as specific as possible to ensure that my problem can be concisely understood.
My apologies for that - as this Python code was merely obtained from a friend of mine, who wrote it for me in order to complete a certain task. I myself have extremely minimal programming knowledge.
Essentially, I am running Python 3.6 on a Mac. I am trying to work out a code that allows Python to scan through a bulk of a particular website's potentially existent subdomains in order to find possibly-existent JPG images files contained within said subdomains, and download any and all of the resulting found files to a distinct folder on my Desktop.
The Setup-
The code itself, named "download.py" on my computer, is written as follows:
import urllib.request
start = int(input("Start range:100000"))
stop = int(input("End range:199999"))
for i in range(start, stop + 1):
    filename = str(i).rjust(6, '0') + ".jpg"
    url = "http://website.com/Image_" + filename
    urllib.request.urlretrieve(url, filename)
    print(url)
(Note that the words "website" and "Image" have been substituted for the actual text included in my code).
Before I proceed, perhaps some explanation would be necessary.
Basically, the website in question contains several subdomains that include .JPG images, however, the majority of the exact URLs that allow the user to access these sub-domains are unknown and are a hidden component of the internal website itself. The format is "website.com/Image_xxxxxx.jpg", wherein x indicates a particular digit, and there are 6 total numerical digits by which only when combined to make a valid code pertain to each of the existent images on the site.
So as you can see, I have calibrated the code so that Python will initially search through number values in the aforementioned URL format from 100000 to 199999, and upon discovering any .JPG images attributed to any of the thousands of link combinations, will directly download all existent uncovered images to a specific folder that resides within my Desktop. The aim would be to start from that specific portion of number values, and upon running the code and fetching any images (or not), continually renumbering the code to work my way through all of the possible 6-digit combos until the operation is ultimately a success.
(Possible Side-Issue- Although I am fairly confident that my friend's code is written in a manner so that Python will only download .JPG files to my computer from images that actually do exist on that particular URL, rather than swarming my folder with blank/bare files from every single one of URL attempts regardless of whether that URL happens to be successful or not, I am admittedly not completely certain. If the latter is the case, informing me of a more suitable edit to my code would be tremendously appreciated.)
The Execution-
Right off the bat, the code experienced a large error. I'll list through the series of steps that led to the creation of said error.
#1- Of course, I first copy-pasted the code into a text document, and saved it as "download.py". I saved it inside of a folder named "Images" where I sought the images to be directly downloaded to. I used BBEdit.
#2- I proceeded, in Terminal, to input the commands "cd Desktop/Images" (to account for the file being held within the "Images" folder on my Desktop), followed by the command "Python download.py" (to actually run the code).
As you can see, the error which I obtained following my attempt to run the code was the ImportError: No module named request. Despite me guessing that the answer to solving this is simple, I can legitimately say I have got such minimal knowledge regarding Python that I've absolutely no idea how to solve this.
Hint: Prior to making the download.py file, the folder, and typing the Terminal code the only interactions I made with Python were downloading the program (3.6) and placing it in my toolbar. I'm not even quite sure if I am required to create any additional scripts/text files, or make any additional downloads before a script like this would work and successfully download the resulting images into my "Images" folder as is my desired goal. If I sincerely missed something integral at any point during this long read, hopefully, someone in here can provide a thoroughly detailed explanation as to how to solve my issue.
Finishing statements for those who've managed to stick along this far:
Thank you. I know this is one hell of a read, and I'm getting more tired as I go along. What I hope to get out of this question is
1.) Obviously, what would constitute a direct solution to the "No module named request" ImportError in Terminal. In other words, what I did wrong there or am missing.
2.) Any other helpful information that you know would assist this code, for example, if there is any integral step or condition I've missed or failed to meet that would ultimately cause the entirety of my code to cease to work. If you do see a fault in this, I only ask of you to be specific, as I've not got much experience in the programming world. After all, I know there are a lot of developers out here who are far more informed and experienced than I am. Thanks.
urllib.request is in Python 3 only. When running 'python' on a Mac, you're running Python 2 by default. Try executing with python3.
python --version
might need to
brew install python3
urllib.request is a Python 3 construct. Most systems run Python 2 as default and this is what you get when you run simply python.
To install Python 3, go to https://brew.sh/ and follow the instructions to install the Homebrew package manager. Then run
brew install python3
python3 download.py
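Once the script runs under Python 3, the loop from the question can also be made to skip numbers that have no image instead of dying on the first failed URL. As far as I know, `urlretrieve` raises `urllib.error.URLError` (an `HTTPError` for a 404) before writing any local file, so no blank files are left behind for failed attempts; the URL pattern below is the question's placeholder:

```python
import os
import urllib.error
import urllib.request

def download_range(start, stop, url_template, dest_dir="."):
    """Try every zero-padded 6-digit number in [start, stop], saving only
    the images that actually exist.

    url_template should contain one '{}' placeholder for the file name,
    e.g. 'http://website.com/Image_{}'. Missing images raise URLError,
    which we catch and skip, so the loop continues through the range.
    """
    saved = []
    for i in range(start, stop + 1):
        name = str(i).rjust(6, "0") + ".jpg"
        try:
            urllib.request.urlretrieve(url_template.format(name),
                                       os.path.join(dest_dir, name))
            saved.append(name)
        except urllib.error.URLError:
            continue  # no image at this number; move on to the next
    return saved
```

Called as `download_range(100000, 199999, "http://website.com/Image_{}", ".")`, this covers the same range as the original script while surviving the gaps.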

Apache File Does Not Exist workaround

I have an app that calls upon the extension found here. I have the .py file in my /var/www folder so that it can be imported in my python code.
So, I keep getting this error:
File does not exist: /var/www/flask_util.js
in my apache error logs. It looks like, because of the name or something, it wants to find a javascript file. But it's in python. Here's the line of code in python that imports it:
from flask_util_js import FlaskUtilJs
I've tried just changing the name of the file to flask_util.js, but again, nothing. Not entirely sure what is going on here, but I am sure that I have a file in /var/www that it should be reading.
EDIT
I think, actually that the import error is coming from importing it into my HTML when I do this:
{{flask_util_js.js}}
So, what I tried was copying out the JS code from the python and creating a new file with it in the correct path. When I did that, I still got the same error on the webpage, however the apache logs didn't say anything (which is weird, right?). So, it still doesn't work, and I don't know why.
So, it's ugly but what I ended up doing was copying over the generated JS file that I could find on my server (didn't actually exist as a document in my repository). Then, apache could find it.
This part is for anyone who is actually using flask_util_js:
However, it wasn't reading in the correct Javascript for the url_mapping, so I had to go in and hard-code the correct URLs. Not very scalable, but oh well.