I wrote a script and uploaded it to Google Cloud Functions.
One of my functions uses the PyJWT library to generate a JWT for GCP API calls, but I keep getting errors every time I run the function.
When I added pyjwt to requirements.txt I got the error 'Algorithm RS256 could not be found'. I then tried adding cryptography (the encryption library that pyjwt uses), and also pycrypto (to register RS256), but still nothing.
I'd be grateful for some help here! Even suggestions for other authentication methods for GCP API calls would be great!
Thanks in advance!
Edit: BTW, the function is running on Python 3.7.
Here is the content of my requirements.txt file (dependencies):
# Function dependencies, for example:
# package>=version
requests==2.21.0
pycrypto
pyjwt==1.7.1
pyjwt[crypto]
boto3==1.11.13
And this is the exception I get after adding pyjwt[crypto] and running the script again:
[screenshot of the exception]
I found a way to make it work. Posting it here for those who face the same issue in the future...
I eventually decided to upload a zip file containing the code file, requirements.txt, and the service account JSON credentials file, and I added the following libraries as dependencies to requirements.txt: oauth2client and google-api-python-client.
Here's how I did it:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import logging
# set the service with the credentials
credentials = GoogleCredentials.from_stream("my_creds.json")
service = discovery.build('compute', 'v1', credentials=credentials)
# suppress error logging from 'googleapiclient.discovery_cache'
logging.getLogger('googleapiclient.discovery_cache').setLevel(logging.ERROR)
def main(event, context):
    # Project ID for this request.
    project = '<project_id>'
    # The name of the zone for this request.
    zone = '<zone>'
    # Name of the instance resource to return.
    instance = '<instance-id>'
    request = service.instances().get(project=project, zone=zone, instance=instance)
    response = request.execute()
    # print only the network details of the instance
    print("'{}' Network Details: {}".format(response['name'], response['networkInterfaces'][0]['accessConfigs'][0]))
As specified in the PyJWT installation documentation, if you plan to encode or decode JWTs you should install the crypto extra along with pyjwt, using the following pip command:
pip install pyjwt[crypto]
or add pyjwt[crypto] as a new line in your requirements.txt.
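With the crypto extra in place, a minimal sketch of signing a service-account JWT with RS256 could look like this (the key file path, service account e-mail, and audience below are placeholders, not values from your setup):

import time
import jwt  # PyJWT

# Placeholder values - replace with your service account's details.
SA_EMAIL = 'my-sa@my-project.iam.gserviceaccount.com'
with open('sa_private_key.pem') as key_file:
    private_key = key_file.read()

now = int(time.time())
claims = {
    'iss': SA_EMAIL,
    'sub': SA_EMAIL,
    'aud': 'https://compute.googleapis.com/',
    'iat': now,
    'exp': now + 3600,
}

# RS256 signing only works once pyjwt[crypto] (i.e. the cryptography backend) is installed.
signed_jwt = jwt.encode(claims, private_key, algorithm='RS256')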
As you want to use the GCP APIs, I would recommend using the client libraries: with them, authentication is managed by the library itself, so there is no need to handle the security yourself.
Google recommends here avoiding the explicit authorization process and using the proper client libraries instead.
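For example, a rough sketch using google-auth together with the Compute client library; it assumes the Cloud Function's runtime service account already has the required permissions, and that google-auth and google-api-python-client are listed in requirements.txt:

import google.auth
from googleapiclient import discovery

# Application Default Credentials: inside Cloud Functions this resolves to the
# function's runtime service account, so no key file or manual JWT is needed.
credentials, project_id = google.auth.default()
service = discovery.build('compute', 'v1', credentials=credentials)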
I'm trying to automate report downloading from Google Play (through Cloud Storage) using the Google Cloud Python client library. From the docs I found that it's possible to do this using gsutil, and that this question has been answered here. However, I also found that the Client infers credentials from the environment, and I plan to run this on an automation platform with (presumably) no gcloud credentials set.
I've found that you can generate a gsutil .boto file and then use it as a credential, but how can I load this into the client library?
This is not exactly a direct answer to your question, but the best way would be to create a service account in GCP, and then use the service account's JSON keyfile to interact with GCS. See this documentation on how to generate said keyfile.
NOTE: You should treat this keyfile like a password, as it grants whatever access you give the service account in the step below. So don't upload it to public GitHub repos, for example.
You'll also have to give the service account the Storage Object Viewer role, or one with more permissions.
NOTE: Always grant the least permissions needed, for security reasons.
The code for this is extremely simple. Note that it is very similar to the methods mentioned in the link about generating the keyfile; the only difference is how the client is instantiated.
requirements.txt
google-cloud-storage
code
from google.cloud import storage
cred_json_file_path = 'path/to/file/credentials.json'
client = storage.Client.from_service_account_json(cred_json_file_path)
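From there, downloading a report is just a couple of calls on the client; a quick sketch (the bucket and object names are made up):

bucket = client.bucket('my-play-reports-bucket')
blob = bucket.blob('reports/installs_overview.csv')
blob.download_to_filename('installs_overview.csv')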
If you want to use the general Google API Python client library, you can do a similar instantiation of a credentials object from the JSON keyfile. For GCS, however, the google-cloud-storage library is very much preferred, as it does some magic behind the scenes; the API Python client library is a very generic one that can (theoretically) be used with all Google APIs.
gsutil will look for a .boto file in the home directory of the user invoking it: ~/.boto on Linux and macOS, and %HOMEDRIVE%%HOMEPATH% on Windows.
Alternately, you can set the BOTO_CONFIG environment variable to the path of the .boto file you want to use. Here's an example:
BOTO_CONFIG=/path/to/your_generated_boto_file.boto gsutil -m cp files gs://bucket
You can generate a .boto file with a service account by using the "-e" flag with the config command: gsutil config -e.
Also note that if gsutil is installed with the gcloud command, gcloud will share its authentication config with gsutil unless you disable that behavior with this command: gcloud config set pass_credentials_to_gsutil false.
https://cloud.google.com/storage/docs/boto-gsutil
I have been able to set up an Azure pipeline that publishes a Python package to our internal Azure feed. Now I am trying to have it publish directly to PyPI. This is what I have done already:
I have set up a PyPI "Service Connection" in the Azure project with the following configuration
Authentication Method = Username and Password
Python Repository Url for upload = https://upload.pypi.org/legacy
EndpointName: I wasn't too sure about this, but I set it to the package name on PyPI
And I named this Service Connection PyPi.
In the pipeline I will run the following authentication task:
- task: TwineAuthenticate@1
  inputs:
    pythonUploadServiceConnection: 'PyPi'
Then I build the wheel for publishing.
Whenever I try to publish to the internal Azure feed it works, but when I try to upload that same package to pypi it gets stuck on this:
Uploading distributions to https://upload.pypi.org/legacy/
Are there any obvious issues anyone can see that would cause it to get stuck uploading to PyPI?
Twine authenticate probably isn't actually providing credentials to the twine upload command, so it's hanging while waiting for user input. Try adding --non-interactive to your twine command, like this: twine upload --non-interactive dist/*. It will probably end up showing an error instead of hanging.
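For reference, a sketch of what the relevant pipeline steps could look like; the --config-file value relies on the PYPIRC_PATH variable that TwineAuthenticate sets, and the repository name passed to -r is assumed to match the service connection name:

- task: TwineAuthenticate@1
  inputs:
    pythonUploadServiceConnection: 'PyPi'
- script: |
    python -m pip install --upgrade twine
    python -m twine upload --non-interactive -r 'PyPi' --config-file $(PYPIRC_PATH) dist/*
  displayName: 'Upload package to PyPI'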
I am using poetry version 1.1.6 to build and publish my project to an internal artifactory.
I have provided the below command and configured the repository.
poetry config repositories.myrepo https://my-internal-artifactory/api/pypi/python/simple
How do I configure API token for an internal repository?
I tried this
poetry config http-basic.myrepo mytoken
It's still prompting for a password, assuming that what I provided is a username. However, all I have is a token; I don't have a username and password.
The docs don't seem to provide sufficient information about private repositories that use tokens.
Note: before Poetry, we were using curl to upload to Artifactory with the token.
How do we publish to private repositories with token in poetry? Is it even possible to do this? Any help would be greatly appreciated.
The http-basic config is for a user + password combination; you're only providing one of them.
There's another configuration setting called pypi-token, you probably want to use this instead (more information in the credentials section of poetry). In your case it should be poetry config pypi-token.myrepo mytoken
Make sure that you haven't specified both http-basic and pypi-token, as only one of them will be used - I believe Poetry checks for pypi-token first and, if it is present, uses it. Just use poetry config --unset to remove the other config option.
My problem was that I wanted to publish to Artifactory, where you have a token but also a user; in that case you need to use the http-basic option and specify both your user and your token (as the password), as shown below.
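In that case the configuration could look roughly like this (repo name, user, and token are placeholders):

poetry config http-basic.myrepo my-artifactory-user my-artifactory-token
poetry publish --build -r myrepo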
I've got a Python script for an AWS Lambda function that does HTTP POST requests to another endpoint. Since Python's urllib2.request, https://docs.python.org/2/library/urllib2.html, can only handle data in the standard application/x-www-form-urlencoded format and I want to post JSON data, I used the Requests library, https://pypi.org/project/requests/2.7.0/.
That Requests library wasn't available in the AWS Lambda Python runtime environment, so it had to be imported via from botocore.vendored import requests. So far, so good.
Today, I get a deprecation warning on that:
DeprecationWarning: You are using the post() function from 'botocore.vendored.requests'.
This is not a public API in botocore and will be removed in the future.
Additionally, this version of requests is out of date. We recommend you install the
requests package, 'import requests' directly, and use the requests.post() function instead.
This was mentioned in this blog post from AWS too: https://aws.amazon.com/blogs/developer/removing-the-vendored-version-of-requests-from-botocore/.
Unfortunately, changing from botocore.vendored import requests into import requests results in the following error:
No module named 'requests'
Why is requests not available for the Python runtime at AWS Lambda? And how can I use / import it?
I succeeded in sending HTTP POST requests using the urllib3 library, which is available in AWS Lambda without any additional installation steps.
import json
import urllib3

http = urllib3.PoolManager()

# 'url' and 'some_data_structure' are placeholders for your endpoint and payload
response = http.request('POST',
                        url,
                        body=json.dumps(some_data_structure),
                        headers={'Content-Type': 'application/json'},
                        retries=False)
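Reading the result back is straightforward; a short sketch, assuming the endpoint responds with JSON:

print(response.status)                              # HTTP status code
result = json.loads(response.data.decode('utf-8'))  # parsed JSON body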
Check out the instructions here: https://docs.aws.amazon.com/lambda/latest/dg/python-package.html#python-package-dependencies
All you need to do is download the requests module locally, then include it in your Lambda function deployment package (ZIP archive).
Example (if your entire Lambda function consisted of a single Python module plus the requests module):
$ pip install --target ./package requests
$ cd package
$ zip -r9 ${OLDPWD}/function.zip .
$ cd $OLDPWD
$ zip -g function.zip lambda_function.py
$ aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip
Answer 2020-06-18
I found a nice and easy way to use requests inside AWS Lambda functions!
Open this link and find the region that your function is using:
https://github.com/keithrozario/Klayers/tree/master/deployments/python3.8/arns
Open the .csv related to your region and search for the requests row.
This is the ARN related to requests library:
arn:aws:lambda:us-east-1:770693421928:layer:Klayers-python38-requests:6
So now in your lambda function, add a layer using the ARN found.
Note: make sure your Lambda function's Python runtime is python3.8.
If you are using the Serverless Framework
Specify the plugin in serverless.yml:
plugins:
- serverless-python-requirements
At the project root, create a requirements.txt file:
requirements.txt
requests==2.22.0
This will install requests and any other packages listed in requirements.txt when the function is deployed.
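If the plugin isn't part of the project yet, it can typically be added with the Serverless CLI:

serverless plugin install -n serverless-python-requirements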
Note that requests is NOT part of the Python standard library.
See https://docs.aws.amazon.com/en_pv/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html about packaging a Lambda that has external dependencies (in your case, the requests library).
Amazon's Serverless Application Model (SAM) provides a build command that can bundle arbitrary python dependencies into the deployment artifact.
To be able to use the requests package in your code, add the dependency to your requirements.txt file:
requests==2.22.0
then run sam build to get an artifact that vendors requests. By default, your artifacts will be saved to the .aws-sam/build directory but another destination directory can be specified with the --build-dir option.
Consult SAM's documentation for more info.
Here's my redneck solution that works with any library, using an AWS Lambda Layer:
This has the advantage that you don't have to trust any 3rd party layers, because you can easily make it yourself.
Go to your local python's Lib/site-packages (python install location or your venv)
Copy whichever libraries you need (e.g. "requests") into a folder named "python"
Zip this folder
Create an AWS Lambda Layer, and upload the zip into it
Add this layer in your lambda function
Import your libraries as usual, and keep coding as if nothing happened
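For reference, steps 3 and 4 can also be done with the AWS CLI; a sketch with placeholder names (the zip must contain the top-level "python" folder from step 2):

aws lambda publish-layer-version \
    --layer-name requests-layer \
    --zip-file fileb://python.zip \
    --compatible-runtimes python3.7 python3.8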
Run pip install requests and then import requests in your code to use it.
I have a web page with Google Sign-In, and I want to access the user's data on their behalf even when they are offline.
I am looking for a suggestion for a library I can use to obtain an access token and a refresh token from the authorization code that the client sent to the server.
I followed the official guide here.
In Step 7: Exchange the authorization code for an access token, the author uses the oauth2client library, which appears to be deprecated:
Note: oauth2client is now deprecated. No more features will be added to the libraries and the core team is turning down support. We recommend you use google-auth and oauthlib. For more details on the deprecation, see oauth2client deprecation.
So I looked at google-auth
This library provides no support for obtaining user credentials, but does provide limited support for using user credentials.
I also took a look at oauthlib, but many parts of it are undocumented.
I am using Python 3.x with Flask.
I ended up using Requests-OAuthlib.
pip install requests requests_oauthlib
from requests_oauthlib import OAuth2Session

# GOOGLE_CLIENT_ID, GOOGLE_SCOPE, GOOGLE_TOKEN_URL, GOOGLE_CLIENT_SECRET and
# authorization_code are defined elsewhere in the application
oauth = OAuth2Session(client_id=GOOGLE_CLIENT_ID,
                      scope=GOOGLE_SCOPE,
                      redirect_uri='http://localhost:8000')
token = oauth.fetch_token(token_url=GOOGLE_TOKEN_URL,
                          code=authorization_code,
                          client_secret=GOOGLE_CLIENT_SECRET)
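One thing to watch out for, given the offline requirement: Google only returns a refresh token if the authorization URL was built with access_type='offline' (and usually prompt='consent'). With the same session object that could look like the sketch below; GOOGLE_AUTH_URL is assumed to be Google's authorization endpoint:

# Sketch: build the consent URL the browser client is redirected to.
authorization_url, state = oauth.authorization_url(
    GOOGLE_AUTH_URL,        # e.g. 'https://accounts.google.com/o/oauth2/v2/auth'
    access_type='offline',  # ask Google for a refresh token
    prompt='consent')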