I've been trying to get the shared access signature working for Azure Blob Storage, as I want to provide time-limited downloads. For that I currently use a PHP script that calls a Python script on the Azure Websites service.
This is the way I call it to test things:
<?php echo "https://container.blob.core.windows.net/bla/random.zip?"; system('azureapp\Scripts\python azureapp\generate-sas.py "folder/file.zip"'); ?>
And this is the Python script I use to generate the parameters:
from azure.storage import *
import sys, datetime

# blob path passed in from the PHP script
file = sys.argv[1]

# access policy: start 2 minutes in the past, expire 15 seconds from now
accss_plcy = AccessPolicy()
accss_plcy.start = (datetime.datetime.utcnow() + datetime.timedelta(seconds=-120)).strftime('%Y-%m-%dT%H:%M:%SZ')
accss_plcy.expiry = (datetime.datetime.utcnow() + datetime.timedelta(seconds=15)).strftime('%Y-%m-%dT%H:%M:%SZ')
accss_plcy.permission = 'r'
sap = SharedAccessPolicy(accss_plcy)

# first argument is the storage account name, second is the account key (values redacted here)
sas = SharedAccessSignature('containername', 'accountkey')
qry_str = sas.generate_signed_query_string(file, 'b', sap)
print(sas._convert_query_string(qry_str))
I managed to get this construct running for the most part, but my current issue is that whenever I use the generated link, I am faced with this error:
<Message>
Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
</Message>
<AuthenticationErrorDetail>
Signature did not match. String to sign used was r 2015-01-27T22:52:17Z 2015-01-27T22:54:32Z /filepath/random.zip 2012-02-12
</AuthenticationErrorDetail>
I double-checked everything and tried to find something on Google, but sadly this aspect isn't really THAT well documented, and the error message isn't really helping me out either.
EDIT: forgot to leave an example link that was generated: https://container.blob.core.windows.net/filepath/random.zip?st=2015-01-27T23%3A23%3A18Z&se=2015-01-27T23%3A26%3A18Z&sp=r&sr=b&sv=2012-02-12&sig=eqCirXRjUbGGVVAYwWwARreTPr4j8wXubGx1q51AUHU%3D&
Assuming your container name is bla and file name is random.zip, please change the following code from:
<?php echo "https://container.blob.core.windows.net/bla/random.zip?"; system('azureapp\Scripts\python azureapp\generate-sas.py "folder/file.zip"'); ?>
to:
<?php echo "https://container.blob.core.windows.net/bla/random.zip?"; system('azureapp\Scripts\python azureapp\generate-sas.py "bla/random.zip"'); ?>
The reason you were running into this error is that you were not including the blob container name in your SAS calculation routine, so the storage service computed the SAS against the $root blob container. Since the SAS was computed for one blob container but used against another, you get this authorization error.
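For reference, here is how that maps onto the Python script from the question; a minimal sketch where only the blob path argument changes (the account name and key remain placeholders):

# the path passed to the SAS routine must be '<container>/<blob>'
blob_path = 'bla/random.zip'
sas = SharedAccessSignature('accountname', 'accountkey')  # storage account name and key (placeholders)
qry_str = sas.generate_signed_query_string(blob_path, 'b', sap)
print(sas._convert_query_string(qry_str))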
I'm writing this Google Cloud Function (Python):
import os
import subprocess

def create_kubeconfig(request):
    # install the gcloud SDK on every invocation
    subprocess.check_output('curl https://sdk.cloud.google.com | bash | echo "" ', stdin=subprocess.PIPE, shell=True)
    os.system("./google-cloud-sdk/install.sh")
    os.system("gcloud init")
    # download kubectl
    os.system("curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl")
    # fetch the cluster credentials into the kubeconfig
    os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**")
    os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**")
    # KubeConfig import is not shown in the original snippet
    conf = KubeConfig()
    conf.use_context('**cluster name**')
When I run the code, it gives me this error:
'Invalid kube-config file. ' kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
Please help me solve it.
You have to reach the K8S API programmatically. You have the description of the API in the documentation.
It's not easy or simple to do, but here are some pointers for achieving what you want.
First, get the GKE master IP
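If you want to fetch it programmatically instead of from the console, here is a minimal sketch using the google-cloud-container client library (the call signature shown matches older library versions, and the project/zone/cluster values are placeholders):

from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
cluster = client.get_cluster('YOUR_PROJECT_ID', 'us-west2-a', 'YOUR_CLUSTER_NAME')

# public endpoint (IP) of the GKE master
print(cluster.endpoint)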
Then you can access the cluster easily. Here is an example for reading the deployments:
import google.auth
from google.auth.transport import requests

# use the runtime's default service account credentials
credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)

# replace 34.76.28.194 with your GKE master IP (from the previous step);
# verify=False skips TLS certificate verification of the master
response = session.get('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', verify=False)
response.raise_for_status()
print(response.json())
For creating one, you can do this
import google.auth
from google.auth.transport import requests

credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)

# read the deployment definition from a local YAML file
with open("deployment.yaml", "r") as f:
    data = f.read()

# POST the YAML definition to the deployments endpoint of the GKE master
response = session.post('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', data=data,
                        headers={'content-type': 'application/yaml'}, verify=False)
response.raise_for_status()
print(response.json())
Depending on the object that you want to create, you have to use the correct file definition and the correct API endpoint. I don't know of a way to apply a whole YAML file with several definitions in a single API call.
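For illustration, a few common kind-to-endpoint mappings (a non-exhaustive sketch; the default namespace is assumed):

# Kubernetes API endpoints per object kind (namespace 'default' assumed)
endpoints = {
    "Deployment": "/apis/apps/v1/namespaces/default/deployments",
    "Service": "/api/v1/namespaces/default/services",
    "ConfigMap": "/api/v1/namespaces/default/configmaps",
}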
One last thing: be sure to grant the correct GKE roles to the Cloud Function's service account.
UPDATE
Another solution is to use Cloud Run. Indeed, with Cloud Run and thanks to the container capability, you have the ability to install and call system processes (it's not totally open, because your container runs inside a gVisor sandbox, but most common usages are allowed).
The idea is the following: use a gcloud SDK base image and deploy your application on it. Then, code your app to perform system calls.
Here is a working example in Go.
Dockerfile:
FROM golang:1.13 as builder
# Copy local code to the container image.
WORKDIR /app/
COPY go.mod .
ENV GO111MODULE=on
RUN go mod download
COPY . .
# Perform test for building a clean package
RUN go test -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -v -o server
# Gcloud capable image
FROM google/cloud-sdk
COPY --from=builder /app/server /server
CMD ["/server"]
Note: the cloud-sdk base image is heavy: ~700 MB.
The code example (happy path only; I removed error management and the stderr/stdout feedback to simplify the code):
.......
// Example here: recover the yaml file from a bucket
client, _ := storage.NewClient(ctx)
reader, _ := client.Bucket("my_bucket").Object("deployment.yaml").NewReader(ctx)
content, _ := ioutil.ReadAll(reader)

// You can store the file locally in the /tmp directory. It's an in-memory file system.
// Don't forget to purge it to avoid any out-of-memory crash.
ioutil.WriteFile("/tmp/file.yaml", content, 0644)

// Execute external commands
// 1st, recover the kube authentication
exec.Command("gcloud", "container", "clusters", "get-credentials", "cluster-1", "--zone=us-central1-c").Run()

// Then interact with the cluster with the kubectl tool and simply apply your description file
exec.Command("kubectl", "apply", "-f", "/tmp/file.yaml").Run()
.......
Instead of using gcloud inside the Cloud Function (and attempting to install it on every request, which will significantly increase the runtime of your function), you should use the google-cloud-container client library to make the same API calls directly from Python, for example:
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
project_id = 'YOUR_PROJECT_ID'
zone = 'YOUR_PROJECT_ZONE'
response = client.list_clusters(project_id, zone)
I'm trying to automate some data cleaning tasks by uploading the files to Cloud Storage, running them through a pipeline, and downloading the results.
I have created the template for my pipeline to execute using the GUI in Dataprep, and am attempting to automate the upload and execution of the template using the Google Client Libraries, specifically in Python.
However, I have found that when running the job with the Python script, the full template is not executed; sometimes some of the steps aren't completed, and sometimes the output file, which should be megabytes in size, is less than 500 bytes. This depends on the template I use; each template has its own issue.
I've tried breaking the large template into smaller templates to apply consecutively so I can see where the issue is, but that is where I discovered that each template has its own issue. I have also tried creating the job from the Dataflow Monitoring Interface, and have found that anything created with that runs perfectly, meaning that there must be some issue with the script I've created.
from oauth2client.client import GoogleCredentials
from googleapiclient.discovery import build

def runJob(bucket, template, fileName):
    #open connection with the needed credentials
    credentials = GoogleCredentials.get_application_default()
    service = build('dataflow', 'v1b3', credentials=credentials)
    #name job after file being processed
    jobName = fileName.replace('.csv', '')
    projectId = 'my-project'
    #find the template to run on the dataset
    templatePath = "gs://{bucket}/me#myemail.com/temp/{template}".format(bucket=bucket, template=template)
    #construct job JSON
    body = {
        "jobName": "{jobName}".format(jobName=jobName),
        "parameters": {
            "inputLocations": "{\"location1\":\"gs://" + bucket + "/me#myemail.com/RawUpload/" + fileName + "\"}",
            "outputLocations": "{\"location1\":\"gs://" + bucket + "/me#myemail.com/CleanData/" + fileName.replace('.csv', '_auto_delete_2') + "\"}",
        },
        "environment": {
            "tempLocation": "gs://{bucket}/me#myemail.com/temp".format(bucket=bucket),
            "zone": "us-central1-f"
        }
    }
    #create and execute HTTPRequest
    request = service.projects().templates().launch(projectId=projectId, gcsPath=templatePath, body=body)
    response = request.execute()
    #notify user
    print(response)
Using the JSON format, my input to the parameters is the same as when I use the Monitoring Interface. This tells me that there is either something going on in the background of the Monitoring Interface that I am unaware of and thus am not including, or there is an issue with the code that I have created.
As I said above, the issue varies depending on the template I try to run, but the most common is the extremely small output file. The output file will be orders of magnitude smaller than it should be, because it contains only the CSV headers and some random samples from the first row of the data, and it is also formatted incorrectly for a CSV file in the first place.
Does anyone know what I'm missing or recognize what I'm doing wrong?
So I'm trying to produce temporary, globally readable URLs for my Google Cloud Storage objects using the google-cloud-storage Python library (https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html), more specifically the Blob.generate_signed_url() method. I'm doing this from within a Compute Engine instance in a command line Python script, and I keep getting the following error:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'oauth2client.service_account.ServiceAccountCredentials'> just contains a token. see https://google-cloud-python.readthedocs.io/en/latest/core/auth.html?highlight=authentication#setting-up-a-service-account for more details.
I am aware that there are issues with doing this from within GCE (https://github.com/GoogleCloudPlatform/google-auth-library-python/issues/50), but I have created new Service Account credentials following the instructions here: https://cloud.google.com/storage/docs/access-control/create-signed-urls-program and my key.json file most certainly includes a private key. Still, I am seeing that error.
This is my code:
from datetime import timedelta
from oauth2client.service_account import ServiceAccountCredentials

keyfile = "/path/to/my/key.json"
credentials = ServiceAccountCredentials.from_json_keyfile_name(keyfile)
expiration = timedelta(3)  # valid for 3 days
# blob is an existing google.cloud.storage.Blob instance (setup not shown)
url = blob.generate_signed_url(expiration, method="GET",
                               credentials=credentials)
I've read through the issue tracker here https://github.com/GoogleCloudPlatform/google-cloud-python/issues?page=2&q=is%3Aissue+is%3Aopen and nothing related jumps out, so I am assuming this should work. I can't see what's going wrong here.
I was having the same issue. I ended up fixing it by creating the storage client directly from the service account JSON:
storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')
I know I'm late to the party but hopefully this helps!
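For completeness, a minimal sketch of that fix end to end (the bucket and object names are placeholders):

from datetime import timedelta
from google.cloud import storage

# a client created from the key file carries a private key, so it can sign URLs
storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')

blob = storage_client.bucket('my-bucket').blob('path/to/object')
url = blob.generate_signed_url(expiration=timedelta(days=3), method='GET')
print(url)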
Currently, it's not possible to use blob.generate_signed_url without explicitly referencing credentials (source: the google-cloud-python documentation). However, there is a workaround, as seen here, which consists of:
signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)
signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
A more complete snippet, for those asking where the other elements come from. cc @AlbertoVitoriano
from google.auth.transport import requests
from google.auth import default, compute_engine

credentials, _ = default()

# then within your abstraction
auth_request = requests.Request()
credentials.refresh(auth_request)

signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)

signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
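The snippet above still assumes that signed_blob_path (a Blob object) and expires_at_ms (an expiration value) exist; a minimal sketch of how they might be set up (bucket and object names are placeholders):

from datetime import timedelta
from google.cloud import storage

# reuse the default credentials obtained above
client = storage.Client(credentials=credentials)
signed_blob_path = client.bucket('my-bucket').blob('path/to/object')
expires_at_ms = timedelta(minutes=15)  # generate_signed_url also accepts a datetime or an int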
First of all, I'm not a Python guru as you can probably tell... So here we go.
I'm trying to use Asana's API to pull data with Python requests (projects, tasks, etc.), doing the authentication with OAuth 2.0. I've been trying to find a simple Python script to have something to begin with, but I haven't had any luck; I can't find a decent and simple example!
I already created the app and got my client_id and client_secret. But I don't really know where or how to start... Could anybody help me, please?
import sys, os, requests
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
import asana
import json
from six import print_
import requests_oauthlib
from requests_oauthlib import OAuth2Session

client_id = os.environ['ASANA_CLIENT_ID']
client_secret = os.environ['ASANA_CLIENT_SECRET']
# this special redirect URI will prompt the user to copy/paste the code.
# useful for command line scripts and other non-web apps
redirect_uri = 'urn:ietf:wg:oauth:2.0:oob'

if 'ASANA_CLIENT_ID' in os.environ:
    # Creates a client with previously obtained OAuth credentials
    client = asana.Client.oauth(
        # Asana client ID and secret, set as Windows environment variables
        # to avoid hardcoding credentials in the script
        client_id=os.environ['ASANA_CLIENT_ID'],
        client_secret=os.environ['ASANA_CLIENT_SECRET'],
        # this special redirect URI will prompt the user to copy/paste the code.
        # useful for command line scripts and other non-web apps
        redirect_uri='urn:ietf:wg:oauth:2.0:oob'
    )
    print_("authorized=", client.session.authorized)

    # get an authorization URL:
    (url, state) = client.session.authorization_url()
    try:
        # in a web app you'd redirect the user to this URL when they take action to
        # login with Asana or connect their account to Asana
        import webbrowser
        webbrowser.open(url)
    except Exception as e:
        print_("Open the following URL in a browser to authorize:")
        print_(url)

    print_("Copy and paste the returned code from the browser and press enter:")
    code = sys.stdin.readline().strip()
    # exchange the code for a bearer token
    token = client.session.fetch_token(code=code)
    #print_("token=", json.dumps(token))
    print_("authorized=", client.session.authorized)

    me = client.users.me()
    print_("Hello " + me['name'] + "\n")

    params = {'client_id': client_id, 'redirect_uri': redirect_uri, 'response_type': token}
    print_("*************** Request begins *******************" + "\n")
    print_("r = requests.get('https://app.asana.com/api/1.0/users/me')" + "\n")
    r = requests.get('https://app.asana.com/api/1.0/users/me', params)
    print_(r)
    print_(r.json)
    print_(r.encoding)

    workspace_id = me['workspaces'][0]['id']
    print_("My workspace ID is" + "\n")
    print_(workspace_id)
    print_(client.options)
I'm not sure how to use the Requests lib with Asana; their Python doc did not help me. I'm trying to pull the available projects and their colour codes so I can later plot them in a web browser (for a high-level view of the different projects and their respective colours: green, yellow or red).
When I enter the URL (https://app.asana.com/api/1.0/users/me) into a browser, it gives me back a JSON response with the data, but when I try to do the same with the script, it gives me back a 401 (Unauthorized) response.
Does anybody know what I'm missing / doing wrong?
Thank you!!!
I believe the issue is that the Requests library is a lower-level library: you would need to pass all of the required parameters (including the Authorization header with your bearer token) to your requests yourself.
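For instance, with raw Requests you would have to send the bearer token yourself; a sketch, assuming token is the dict returned by fetch_token in your script:

headers = {'Authorization': 'Bearer ' + token['access_token']}
r = requests.get('https://app.asana.com/api/1.0/users/me', headers=headers)
print_(r.status_code, r.json())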
Is there a reason you are not exclusively using the Asana Python client library to make requests? All of the data you are looking to fetch from Asana (projects, tasks, etc.) is accessible using the Asana Python library. You will want to look in the library to find the methods you need; for example, the methods for the tasks resource can be found here. I think this approach will be easier (and less error-prone) than switching between the Asana lib and the Requests lib. The Asana lib is actually built on top of Requests (as seen here).
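As a sketch of that approach for your projects-and-colours use case (the method names follow the older python-asana client used in your script, so double-check them against your installed version):

# reuse the OAuth client and `me` from your script
workspace_id = me['workspaces'][0]['id']
for project in client.projects.find_by_workspace(workspace_id):
    detail = client.projects.find_by_id(project['id'])
    print_(detail['name'], detail.get('color'))  # e.g. 'dark-green', 'dark-red', ...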
I am working with the Bugzilla XML-RPC interface using the "Bugzilla XMLRPC access module" developed in Python.
How can I attach/download a Bugzilla file using this module?
According to the API guideline, get_attachments_by_bug($bug_id) retrieves and returns the attachments.
But this function didn't work for me; I got the following error message:
<type 'exceptions.AttributeError'>: 'Bugzilla4' object has no attribute 'get_attachments_by_bug'
Any help would be appreciated.
FYI:
I am in contact with the maintainers of the python-bugzilla tool, and here is the response I got from them:
"Not all bugzilla XMLRPC APIs are wrapped by python-bugzilla, that's one of them.
The 'bugzilla' command line tool that python-bugzilla ships has commands for
attaching files and downloading attachments, take a look at the code there for
guidance."
I've figured out how to download/upload an attachment using the "Bugzilla XMLRPC access module".
You need to pass the ID of the attached file as a parameter to the functions below.
Download:
downloaded_file = bz.download_attachment(attachment_id)
file_name = str(downloaded_file.name)
Upload:
kwards = {
    'contenttype': 'application/octet-stream',
    # 'filename': file_path  # there could be more parameters if needed
}
# attachfile method will return the id of the attached file
bz.attachfile(bug_id, file_path, file_name, **kwards)
However, the attached file got corrupted due to some XML-RPC API internal methods described here, here and here, but that's another issue :)