I am trying to deploy a Python app on Google Cloud Run to perform some tasks automatically, and these tasks require access to my BigQuery data.
I have tested the implementation on localhost through Cloud Shell, and it worked just as expected. Then I created a Cloud Run service; all functions that do not require access to BigQuery work normally, but those that do fail with the following error:
google.auth.exceptions.DefaultCredentialsError: File /XXXXXX/gbq.json was not found.
However, the file is there (the folders are correct, and I also tested adding copies of the file in other folders).
Any suggestions to solve the problem or a workaround I could use?
Thanks in advance
ADDITIONAL INFO:
main.py function:
(the bottom part of the code is used to test the app on localhost, which works perfectly)
from flask import Flask, request
from test_py import test as t

app = Flask(__name__)

@app.get("/")
def hello():
    """Return a friendly HTTP greeting."""
    chamado = request.args.get("chamado", default="test")
    print(chamado)
    if chamado == 'test':
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_data(chamado)}'
    elif chamado == 'bigquery':
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_bq_data()}'
    else:
        dados = f'chamado = test?\n{chamado == "test"}\n{t.show_not_data(chamado)}'
    print(dados)
    return dados

if __name__ == "__main__":
    # Development only: run "python main.py" and open http://localhost:8080
    # When deploying to Cloud Run, a production-grade WSGI HTTP server,
    # such as Gunicorn, will serve the app.
    app.run(host="localhost", port=8080, debug=True)
BigQuery class:
import os
from google.cloud import bigquery as bq

class GoogleBigQuery:
    def __init__(self):
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/XXXXXX/gbq.json'
        self.client = bq.Client()

    def executar_query(self, query):
        client_query = self.client.query(query)
        result = client_query.result()
        return result
Cloud Run deploy:
gcloud run deploy pythontest \
--source . \
--platform managed \
--region $REGION \
--allow-unauthenticated
YOU DO NOT NEED THAT
Excuse the blunt opening, but what you are doing is extremely dangerous. Let me explain.
You have put a secret in your container in plain text. Keep in mind that a container is like a zip archive: nothing in it is secret or encrypted. You can convince yourself by using dive to explore your container's layers and data.
Therefore: DO NOT DO THAT!
So now, what to do?
On Google Cloud, all services can use the metadata server to get credentials. The client libraries leverage it, so you can rely on the default credentials when you initialize your code. That mechanism is named Application Default Credentials (ADC).
In your code, simply remove this line: os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/XXXXXX/gbq.json'
Then, when you deploy your Cloud Run service, specify the runtime service account that you want to use. That's all! The Google Cloud environment and the client libraries will do the rest.
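For illustration, here is a minimal sketch of the fixed class, assuming ADC on Cloud Run (no change is needed beyond deleting that line):

from google.cloud import bigquery as bq

class GoogleBigQuery:
    def __init__(self):
        # No GOOGLE_APPLICATION_CREDENTIALS here: on Cloud Run the client
        # picks up the runtime service account through ADC automatically.
        self.client = bq.Client()

    def executar_query(self, query):
        # Run the query and block until the result is available.
        return self.client.query(query).result()

And on deploy, something like this (the service account name is a placeholder; grant it the BigQuery roles it needs):

gcloud run deploy pythontest \
  --source . \
  --region $REGION \
  --service-account bq-runner@$PROJECT_ID.iam.gserviceaccount.com \
  --allow-unauthenticated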
I'm writing this Google Cloud Function (Python):
import os
import subprocess

def create_kubeconfig(request):
    subprocess.check_output('curl https://sdk.cloud.google.com | bash | echo "" ', stdin=subprocess.PIPE, shell=True)
    os.system("./google-cloud-sdk/install.sh")
    os.system("gcloud init")
    os.system("curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl")
    os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**")
    os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**")
    conf = KubeConfig()
    conf.use_context('**cluster name**')
When I run the code, it gives me the error:
'Invalid kube-config file. ' kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
Please help me solve it.
You have to reach the Kubernetes API programmatically. You can find the description of the API in the documentation.
It's not easy or simple to do, but here are some pointers for achieving what you want.
First, get the GKE master IP.
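One possible way to look it up from Python, assuming a 2.x version of the google-cloud-container client library (the project, zone, and cluster names below are placeholders):

from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
# Placeholders: replace with your own project, zone, and cluster.
cluster = client.get_cluster(
    name='projects/my-project/locations/us-west2-a/clusters/my-cluster')
print(cluster.endpoint)  # the master IP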
Then you can access the cluster easily. Here is an example that reads the deployments:
import google.auth
from google.auth.transport import requests
credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)
response = session.get('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', verify=False)
response.raise_for_status()
print(response.json())
For creating one, you can do this:
import google.auth
from google.auth.transport import requests
credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)
with open("deployment.yaml", "r") as f:
    data = f.read()
response = session.post('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', data=data,
                        headers={'content-type': 'application/yaml'}, verify=False)
response.raise_for_status()
print(response.json())
Depending on the object that you want to create, you have to use the correct file definition and the correct API endpoint. I don't know of a way to apply a whole YAML file with several definitions in only one API call.
Last thing: be sure to grant the correct GKE roles to the Cloud Function's service account, as in the example below.
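For example, a hypothetical grant of the GKE developer role to the function's default runtime service account (the project id and member are placeholders to adapt):

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-project@appspot.gserviceaccount.com" \
  --role="roles/container.developer"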
UPDATE
Another solution is to use Cloud Run. Indeed, with Cloud Run and thanks to its container capability, you have the ability to install and call system processes (it's not totally open, because your container runs in a gVisor sandbox, but most common usages are allowed).
The idea is the following: use a gcloud SDK base image and deploy your application on it. Then, code your app to perform system calls.
Here is a working example in Go.
Dockerfile:
FROM golang:1.13 as builder
# Copy local code to the container image.
WORKDIR /app/
COPY go.mod .
ENV GO111MODULE=on
RUN go mod download
COPY . .
# Perform test for building a clean package
RUN go test -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -v -o server
# Gcloud capable image
FROM google/cloud-sdk
COPY --from=builder /app/server /server
CMD ["/server"]
Note: the cloud-sdk base image is heavy: about 700MB.
An example of the app's content (only the happy path; I removed error handling and the stderr/stdout feedback to simplify the code):
.......
// Example here: recover the yaml file from a bucket
client, _ := storage.NewClient(ctx)
reader, _ := client.Bucket("my_bucket").Object("deployment.yaml").NewReader(ctx)
content, _ := ioutil.ReadAll(reader)
// You can store the file locally in the /tmp directory. It's an in-memory file
// system; don't forget to purge it to avoid an out-of-memory crash.
ioutil.WriteFile("/tmp/file.yaml", content, 0644)
// Execute the external commands.
// First, recover the kube authentication:
exec.Command("gcloud", "container", "clusters", "get-credentials", "cluster-1", "--zone=us-central1-c").Run()
// Then interact with the cluster with the kubectl tool and simply apply your description file:
exec.Command("kubectl", "apply", "-f", "/tmp/file.yaml").Run()
.......
Instead of using gcloud inside the Cloud Function (and attempting to install it on every request, which will significantly increase the runtime of your function), you should use the google-cloud-container client library to make the same API calls directly from Python, for example:
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
project_id = 'YOUR_PROJECT_ID'
zone = 'YOUR_PROJECT_ZONE'
response = client.list_clusters(project_id, zone)
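Note that the positional signature above is from the pre-2.0 releases of google-cloud-container; if you are on a 2.x version (an assumption to check against your installed release), the call takes a parent resource path instead:

response = client.list_clusters(parent=f'projects/{project_id}/locations/{zone}')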
I simply need an efficient way to debug a GAE application, and to do so I need to connect to the production GAE infrastructure from localhost when running dev_appserver.py.
The following code works well if I run it as a separate script:
import argparse

try:
    import dev_appserver
    dev_appserver.fix_sys_path()
except ImportError:
    print('Please make sure the App Engine SDK is in your PYTHONPATH.')
    raise

from google.appengine.ext import ndb
from google.appengine.ext.remote_api import remote_api_stub

def main(project_id):
    server_name = '{}.appspot.com'.format(project_id)
    remote_api_stub.ConfigureRemoteApiForOAuth(
        app_id='s~' + project_id,
        path='/_ah/remote_api',
        servername=server_name)

    # List the first 10 keys in the datastore.
    keys = ndb.Query().fetch(10, keys_only=True)
    for key in keys:
        print(key)

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawDescriptionHelpFormatter)
    parser.add_argument('project_id', help='Your Project ID.')
    args = parser.parse_args()
    main(args.project_id)
With this script, I was able to get data from the remote Datastore. But where do I need to put the same code in my application (which is obviously not a single script) to make it work?
I've tried to put the remote_api_stub.ConfigureRemoteApiForOAuth() call in appengine_config.py, but I got a recursion error.
I'm running app like this:
dev_appserver.py app.yaml --admin_port=8001 --enable_console --support_datastore_emulator=no --log_level=info
The application uses NDB to access Google Datastore.
The application contains many modules and files, and I simply don't know where to put the remote_api_stub auth code.
I hope somebody from the Google team will see this topic, because I've searched all over the internet without any results. It's unbelievable how many people are developing apps for the GAE platform, yet it looks like nobody is developing/debugging apps locally.
I have built an SMS service (using Twilio) that users text to get real-time bus information. At the moment I have been hosting this on my personal computer using ngrok. Now I want to use AWS to host this service, but I am not sure how to go about it. I have tried running a Flask web server and getting ngrok to run on AWS, but no luck.
Here is my code concerning Flask and Twilio's REST Api:
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route("/sms", methods=['GET', 'POST'])
def hello_monkey():
    resp = MessagingResponse()
    response = request.form['Body']
    if " " in response:
        response = response.split(" ")
        result = look_up(response[0], response[1])  # look_up is defined elsewhere
    else:
        result = look_up(response, False)
    resp.message(result)
    return str(resp)

if __name__ == "__main__":
    app.run(debug=True)
There is a blog post on the Twilio blog on How to Send SMS Text Messages with AWS Lambda and Python 3.6. It does not use Flask, but it can definitely be modified to achieve your goal. Alternatively, you could read about using Flask with AWS Elastic Beanstalk here.
Running ngrok on AWS is not the correct approach to this. If you wanted to host your own Flask server, you could use something like Lightsail, but that's overkill for this usage.
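To give a sense of the Lambda route, here is a minimal, untested sketch of the same logic as a Lambda handler behind API Gateway, without Flask. look_up is your existing function, and the form-encoded body parsing is an assumption about how Twilio's webhook arrives through the API Gateway proxy integration:

from urllib.parse import parse_qs
from twilio.twiml.messaging_response import MessagingResponse

def handler(event, context):
    # Twilio posts form-encoded data; API Gateway passes it in event['body'].
    body = parse_qs(event.get('body', ''))
    text = body.get('Body', [''])[0]
    if ' ' in text:
        stop, route = text.split(' ', 1)
        result = look_up(stop, route)
    else:
        result = look_up(text, False)
    resp = MessagingResponse()
    resp.message(result)
    # Return TwiML so Twilio can reply to the sender.
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/xml'},
        'body': str(resp),
    }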
I am trying to implement Google Cloud Datastore in my Python Django project, which does not run on Google App Engine.
Is it possible to use Google Datastore without having the project run on Google App Engine? If yes, can you please tell me how to retrieve the complete entity object or execute the query successfully?
The code snippet below prints the query object but throws an error after that.
Code Snippet:
from gcloud import datastore
entity_kind = 'EntityKind'
numeric_id = 1234
client = datastore.Client()
key = client.key(entity_kind, numeric_id)
query = client.query(kind=entity_kind)
print(query)
results = list(query.fetch())
print(results)
Error:
NotFound: 404 The project gproj does not exist or it does not contain an active App Engine application. Please visit http://console.developers.google.com to create a project or https://console.developers.google.com/appengine?project=gproj to add an App Engine application. Note that the app must not be disabled.
This guide will probably be helpful. You can see an example of it in action here.
You just need to pass a project id to the .Client() method:
datastore.Client("YOUR_PROJECT_ID")
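Applied to the snippet above, that might look like this (the project id is a placeholder for your own):

from gcloud import datastore

client = datastore.Client('your-project-id')  # pass your real project id here
query = client.query(kind='EntityKind')
results = list(query.fetch())
print(results)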
You can also skip this part by running this command before running your app:
$ gcloud beta auth application-default login
If you run that, it will authenticate all of your requests locally without you having to inject the project id :)
Hope this helps!
I'm trying to programmatically spin up an Azure VM using the Python REST API wrapper. All I want is a simple VM, not part of a deployment or anything like that. I've followed the example here: http://www.windowsazure.com/en-us/develop/python/how-to-guides/service-management/#CreateVM
I've gotten the code to run, but I am not seeing any new VM in the portal; all it does is create a new cloud service that says "You have nothing deployed to the production environment." What am I doing wrong?
You've created a hosted service (cloud service) but haven't deployed anything in that service. You need to do a few more things, so I'll continue from where you left off, where name is the name of the VM:
# Where should the OS VHD be created:
media_link = 'http://portalvhdsexample.blob.core.windows.net/vhds/%s.vhd' % name
# Linux username/password details:
linux_config = azure.servicemanagement.LinuxConfigurationSet(name, 'username', 'password', True)
# Endpoint (port) configuration example, since documentation on this is lacking:
endpoint_config = azure.servicemanagement.ConfigurationSet()
endpoint_config.configuration_set_type = 'NetworkConfiguration'
endpoint1 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='HTTP', protocol='tcp', port='80', local_port='80', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint2 = azure.servicemanagement.ConfigurationSetInputEndpoint(name='SSH', protocol='tcp', port='22', local_port='22', load_balanced_endpoint_set_name=None, enable_direct_server_return=False)
endpoint_config.input_endpoints.input_endpoints.append(endpoint1)
endpoint_config.input_endpoints.input_endpoints.append(endpoint2)
# Set the OS HD up for the API:
os_hd = azure.servicemanagement.OSVirtualHardDisk(image_name, media_link)
# Actually create the machine and start it up:
try:
    sms.create_virtual_machine_deployment(service_name=name, deployment_name=name,
                                          deployment_slot='production', label=name, role_name=name,
                                          system_config=linux_config, network_config=endpoint_config,
                                          os_virtual_hard_disk=os_hd, role_size='Small')
except Exception as e:
    logging.error('AZURE ERROR: %s' % str(e))
    return False
return True
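For completeness, the fragment above assumes sms and image_name were set up earlier, roughly like this (a sketch following the linked guide; the subscription id, certificate path, and image name are placeholders):

import logging
import azure.servicemanagement
from azure.servicemanagement import ServiceManagementService

subscription_id = 'YOUR_SUBSCRIPTION_ID'
certificate_path = 'path/to/management_certificate.pem'
sms = ServiceManagementService(subscription_id, certificate_path)
# Pick a Linux OS image, e.g. by browsing sms.list_os_images():
image_name = 'SOME_LINUX_IMAGE_NAME'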
Maybe I'm not understanding your problem, but a VM is essentially a deployment within a cloud service (think of it like a logical container for machines).