I am new to GCP. I created a cloud function and tried to deploy it. I encountered the following error while deploying. Can anyone help me solve this issue? Thank you!
Command:
gcloud functions deploy first_cloud_function_http --runtime python37 --trigger-http --allow-unauthenticated --verbosity debug
Error Logs:
DEBUG: Running [gcloud.functions.deploy] with arguments: [--allow-unauthenticated: "True", --runtime: "python37", --trigger-http: "True", --verbosity: "debug", NAME: "first_cloud_function_http"]
INFO: Not using ignore file.
INFO: Not using ignore file.
Deploying function (may take a while - up to 2 minutes)...failed.
DEBUG: (gcloud.functions.deploy) OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
Traceback (most recent call last):
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 808, in Run
resources = command_instance.Run(args)
File "/home/hasher/GN/google-cloud-sdk/lib/surface/functions/deploy.py", line 351, in Run
return _Run(args, track=self.ReleaseTrack())
File "/home/hasher/GN/google-cloud-sdk/lib/surface/functions/deploy.py", line 305, in _Run
on_every_poll=[TryToLogStackdriverURL])
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 318, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 369, in WaitForFunctionUpdateOperation
on_every_poll=on_every_poll)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 151, in Wait
on_every_poll)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 121, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 73, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
googlecloudsdk.api_lib.functions.exceptions.FunctionsError: OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
Notice that, as stated in the documentation of the gcloud functions deploy command, you need to specify a region, either with the --region flag or through a configured default.
To check the regions where Cloud Functions is available, refer to the corresponding section of the documentation.
For example, if you'd like to deploy the function in the europe-west1 region, running the following command would suffice:
gcloud functions deploy first_cloud_function_http --region europe-west1 --runtime python37 --trigger-http --allow-unauthenticated --verbosity debug
Additionally, if you'd like to avoid using the --region flag you can set a default region for Cloud Functions by running:
gcloud config set functions/region REGION
where you could change the REGION field to any of the locations mentioned above.
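For example, to make europe-west1 the default and then deploy without the --region flag:
gcloud config set functions/region europe-west1
gcloud functions deploy first_cloud_function_http --runtime python37 --trigger-http --allow-unauthenticated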
Related
I am developing a Python application that will be deployed to a Google Kubernetes Engine (GKE) pod. The application writes and reads .csv files to/from a private Google Cloud Storage bucket. I am facing an error while trying to read/write files to the bucket, although reading and writing to the private bucket works when I run the application on my local system.
The operation fails when the application is deployed to the GKE pod.
The GKE pod in the cluster is not able to access the private GCS bucket even though I am providing the same credentials as on my local system. Following are some details about the application and the error I am facing:
Dockerfile: The Dockerfile contains a reference to the cred.json file, which holds the credentials of the Google Cloud service account.
The service account has Google Cloud Storage admin permissions.
FROM python:3.9.10-slim-buster
WORKDIR /pipeline
COPY . .
RUN pip3 install --no-cache-dir -r requirements.txt
EXPOSE 3000
ENV GOOGLE_APPLICATION_CREDENTIALS=/pipeline/cred.json
ENV GIT_PYTHON_REFRESH=quiet
requirements.txt: Following is the requirements.txt content (I have included only the Google Cloud related packages, as they are the ones relevant to the error):
google-api-core==2.8.2
google-auth==2.9.0
google-auth-oauthlib==0.5.2
google-cloud-bigquery==3.2.0
google-cloud-bigquery-storage==2.11.0
google-cloud-core==2.3.1
google-cloud-storage==2.4.0
google-crc32c==1.3.0
google-resumable-media==2.3.3
googleapis-common-protos==1.56.3
fsspec==2022.8.2
gcsfs==2022.8.2
gevent==21.12.0
Error details: Following is the traceback:
Anonymous caller does not have storage.objects.create access to the Google Cloud Storage bucket., 401
ERROR:gcsfs:_request non-retriable exception: Anonymous caller does not have storage.objects.create access to the Google Cloud Storage bucket., 401
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 115, in retry_request
return await func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 384, in _request
validate_response(status, contents, path, args)
File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 102, in validate_response
raise HttpError(error)
gcsfs.retry.HttpError: Anonymous caller does not have storage.objects.create access to the Google Cloud Storage bucket., 401
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/usr/local/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/pipeline/training/train.py", line 133, in training
X.to_csv(file_name, index=False)
File "/usr/local/lib/python3.9/site-packages/pandas/core/generic.py", line 3563, in to_csv
return DataFrameRenderer(formatter).to_csv(
File "/usr/local/lib/python3.9/site-packages/pandas/io/formats/format.py", line 1180, in to_csv
csv_formatter.save()
File "/usr/local/lib/python3.9/site-packages/pandas/io/formats/csvs.py", line 261, in save
self._save()
File "/usr/local/lib/python3.9/site-packages/pandas/io/formats/csvs.py", line 266, in _save
self._save_body()
File "/usr/local/lib/python3.9/site-packages/pandas/io/formats/csvs.py", line 304, in _save_body
self._save_chunk(start_i, end_i)
File "/usr/local/lib/python3.9/site-packages/pandas/io/formats/csvs.py", line 315, in _save_chunk
libwriters.write_csv_rows(
File "pandas/_libs/writers.pyx", line 72, in pandas._libs.writers.write_csv_rows
File "/usr/local/lib/python3.9/site-packages/fsspec/spec.py", line 1491, in write
self.flush()
File "/usr/local/lib/python3.9/site-packages/fsspec/spec.py", line 1527, in flush
self._initiate_upload()
File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 1443, in _initiate_upload
self.location = sync(
File "/usr/local/lib/python3.9/site-packages/fsspec/asyn.py", line 96, in sync
raise return_result
File "/usr/local/lib/python3.9/site-packages/fsspec/asyn.py", line 53, in _runner
result[0] = await coro
File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 1559, in initiate_upload
headers, _ = await fs._call(
File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 392, in _call
status, headers, info, contents = await self._request(
File "/usr/local/lib/python3.9/site-packages/decorator.py", line 221, in fun
return await caller(func, *(extras + args), **kw)
File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 152, in retry_request
raise e
File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 115, in retry_request
return await func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/gcsfs/core.py", line 384, in _request
validate_response(status, contents, path, args)
File "/usr/local/lib/python3.9/site-packages/gcsfs/retry.py", line 102, in validate_response
raise HttpError(error)
gcsfs.retry.HttpError: Anonymous caller does not have storage.objects.create access to the Google Cloud Storage bucket., 401
I also tried running the application after making the Google Cloud bucket public. With this approach, the read and write operations to the bucket work.
The problem arises when the bucket is private (which is essential for the application deployment).
Any help to resolve this error will be appreciated. Thanks in advance!!
You are able to read/write from your local system because you are most likely using your own credentials or impersonating an SA that has permission to access the private bucket. FYI: if you are accessing a bucket cross-project, then the SA should be granted the required permissions in the project the bucket is in.
One thing you can do is grant the SA that runs the GKE pod the required permissions (instead of explicitly setting credentials via GOOGLE_APPLICATION_CREDENTIALS) and then obtain the credentials with google.auth.default() wherever needed.
PS: If the SA running your GKE pod has storage access permissions in the project the bucket you are trying to access is in, then you should be just fine.
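A minimal sketch of that approach, assuming the pod's service account (for example via Workload Identity) already has access to the bucket; the bucket and object names below are placeholders:

import google.auth
from google.cloud import storage

# Pick up credentials from the environment: the pod's service account on GKE,
# or GOOGLE_APPLICATION_CREDENTIALS / gcloud auth on a local machine.
credentials, project_id = google.auth.default()

client = storage.Client(project=project_id, credentials=credentials)
bucket = client.bucket("my-private-bucket")   # placeholder bucket name
blob = bucket.blob("data/output.csv")         # placeholder object path
blob.upload_from_string("col1,col2\n1,2\n", content_type="text/csv")

If you keep writing through pandas/gcsfs as in the traceback above, the same credentials can usually be forwarded as well, e.g. X.to_csv("gs://my-private-bucket/data/output.csv", index=False, storage_options={"token": credentials}), depending on your pandas/gcsfs versions.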
Hope this helps :)
If I remove the deploy stage, the CodePipeline creates and executes the Source (GitHub) and Build (Docker image to ECR) stages. In this case I also create a new ECS cluster, which gets created along with the expected service.
This is the portion of the code:
fargate_service = ecs_patterns.ApplicationLoadBalancedFargateService(self, parms["ecs_fargate_service"],
cluster = ecs_cluster,
task_definition = task_definition,
public_load_balancer = True,
desired_count = 2,
listener_port = 80,
min_healthy_percent = 100,
max_healthy_percent = 200,
assign_public_ip = False,
)
deploy_action = codepipeline_actions.EcsDeployAction(
action_name = 'DeployAction',
service = fargate_service,
image_file = codepipeline.ArtifactPath(build_output, 'imagedefinitions.json')
)
pipeline = codepipeline.Pipeline(self, "CodePipeline",
pipeline_name=parms["pipeline_name"]+"test",
cross_account_keys = False,
stages=[
codepipeline.StageProps(stage_name="Source", actions=[source_action]),
codepipeline.StageProps(stage_name="Build", actions=[build_action]),
codepipeline.StageProps(stage_name="Deploy-to-ECS", actions=[deploy_action])
]
)
If I remove the line codepipeline.StageProps(stage_name="Deploy-to-ECS", actions=[deploy_action]), there are no errors and the stack works.
I have already tried the following:
Creating the CodePipeline referencing an existing ECS cluster; in that case there are no errors and the deploy stage works as expected.
Checking the documentation; there is no way to specify the region for the CodePipeline project.
This is the error:
jsii.errors.JavaScriptError:
TypeError: Cannot read property 'region' of undefined
at RichAction.get effectiveRegion [as effectiveRegion] (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/private/rich-action.js:61:111)
at RichAction.get isCrossRegion [as isCrossRegion] (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/private/rich-action.js:31:61)
at Pipeline.ensureReplicationResourcesExistFor (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/pipeline.js:377:21)
at Pipeline._attachActionToPipeline (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/pipeline.js:337:38)
at Stage.attachActionToPipeline (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/private/stage.js:124:31)
at Stage.addAction (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/private/stage.js:71:33)
at new Stage (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/private/stage.js:28:18)
at Pipeline.addStage (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/pipeline.js:272:23)
at new Pipeline (/tmp/jsii-kernel-wOdXOf/node_modules/@aws-cdk/aws-codepipeline/lib/pipeline.js:239:18)
at /tmp/tmpg46d0z4s/lib/program.js:8367:58
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "app.py", line 40, in <module>
pipeline = EcsCicdStack(main_stack, "Pipeline-"+parms["stage"], vpc=vpc_stack.vpc, parms=parms)
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/jsii/_runtime.py", line 86, in __call__
inst = super().__call__(*args, **kwargs)
File "/mnt/e/Proyectos/ecs-cicd/ecs_cicd/ecs_cicd_stack.py", line 190, in __init__
pipeline = codepipeline.Pipeline(self, "CodePipeline",
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/jsii/_runtime.py", line 86, in __call__
inst = super().__call__(*args, **kwargs)
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/aws_cdk/aws_codepipeline/__init__.py", line 4030, in __init__
jsii.create(Pipeline, self, [scope, id, props])
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/jsii/_kernel/__init__.py", line 290, in create
response = self.provider.create(
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 344, in create
return self._process.send(request, CreateResponse)
File "/mnt/e/Proyectos/ecs-cicd/.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 326, in send
raise JSIIError(resp.error) from JavaScriptError(resp.stack)
jsii.errors.JSIIError: Cannot read property 'region' of undefined
This is the configuration of my environment:
CDK CLI Version : 1.120.0 (build 6c15150)
Framework Version: 1.120.0 (build 6c15150)
Node.js Version: v16.8.0
OS : Ubuntu 20.04 LTS
Language (Version): Python 3.8.2
I would really appreciate it if you could give me an idea as to where to look.
Thanks in advance!
Regards.
This is because EcsDeployAction accepts aws_ecs.IBaseService in the service parameter, and you're passing an ecs_patterns.ApplicationLoadBalancedFargateService.
To access the actual ECS service construct, pass fargate_service.service.
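A minimal sketch of the corrected action, reusing the variable names from the question:

deploy_action = codepipeline_actions.EcsDeployAction(
    action_name = 'DeployAction',
    # fargate_service.service is the underlying aws_ecs.FargateService,
    # which implements the aws_ecs.IBaseService interface the action expects
    service = fargate_service.service,
    image_file = codepipeline.ArtifactPath(build_output, 'imagedefinitions.json')
)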
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecs-patterns.ApplicationLoadBalancedFargateService.html#service
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-codepipeline-actions.EcsDeployActionProps.html
The problem
I'm writing a GCP cloud function that takes an input id from a pubsub message, process, and output the table to BigQuery.
The code is as followed:
from __future__ import absolute_import
import base64
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from scrapinghub import ScrapinghubClient
import os
def processing_data_function():
    # do stuff and return desired data
    pass  # body omitted by the question author

def create_data_from_id(job_id):
    # take scrapinghub's job id and extract the data through api
    pass  # body omitted by the question author
def run(event, context):
"""Triggered from a message on a Cloud Pub/Sub topic.
Args:
event (dict): Event payload.
context (google.cloud.functions.Context): Metadata for the event.
"""
# Take pubsub message and also Scrapinghub job's input id
pubsub_message = base64.b64decode(event['data']).decode('utf-8')
agrv = ['--project=project-name',
'--region=us-central1',
'--runner=DataflowRunner',
'--temp_location=gs://temp/location/',
'--staging_location=gs://staging/location/']
p = beam.Pipeline(options=PipelineOptions(agrv))
(p
| 'Read from Scrapinghub' >> beam.Create(create_data_from_id(pubsub_message))
| 'Trim b string' >> beam.FlatMap(processing_data_function)
| 'Write Projects to BigQuery' >> beam.io.WriteToBigQuery(
'table_name',
schema=schema,
# Creates the table in BigQuery if it does not yet exist.
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
)
p.run()
if __name__ == '__main__':
run()
Note that the two functions create_data_from_id and processing_data_function process data from Scrapinghub (a scraping platform for Scrapy) and are quite lengthy, so I don't want to include them here. They also have nothing to do with the error, since this code works if I run it from Cloud Shell and pass arguments using argparse.ArgumentParser() instead.
Regarding the error: while there was no problem deploying the code and the Pub/Sub message triggered the function successfully, the Dataflow job failed and reported this error:
"Error message from worker: Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 279, in loads
return dill.loads(s)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
return load(file, ignore, **kwds)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
obj = StockUnpickler.load(self)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 826, in _import_module
return __import__(import_name)
ModuleNotFoundError: No module named 'main'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 179, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 662, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 664, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 665, in apache_beam.runners.worker.operations.DoOperation.start
File "apache_beam/runners/worker/operations.py", line 284, in apache_beam.runners.worker.operations.Operation.start
File "apache_beam/runners/worker/operations.py", line 290, in apache_beam.runners.worker.operations.Operation.start
File "apache_beam/runners/worker/operations.py", line 611, in apache_beam.runners.worker.operations.DoOperation.setup
File "apache_beam/runners/worker/operations.py", line 616, in apache_beam.runners.worker.operations.DoOperation.setup
File "/usr/local/lib/python3.7/site-packages/apache_beam/internal/pickler.py", line 283, in loads
return dill.loads(s)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 275, in loads
return load(file, ignore, **kwds)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 270, in load
return Unpickler(file, ignore=ignore, **kwds).load()
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 472, in load
obj = StockUnpickler.load(self)
File "/usr/local/lib/python3.7/site-packages/dill/_dill.py", line 826, in _import_module
return __import__(import_name)
ModuleNotFoundError: No module named 'main'
What I've tried
Given that I could run the same pipeline from Cloud Shell, but using the argument parser instead of specifying the options directly, I thought that the way the options were stated was the problem. Hence, I tried different combinations of the options, with or without --save_main_session, --staging_location, --requirement_file=requirements.txt, --setup_file=setup.py ... They all reported more or less the same issue: dill doesn't know which module to pick up. With save_main_session specified, the main session couldn't be picked up. With requirement_file and setup_file specified, the job was not even created successfully, so I'll save you the trouble of looking into those errors. My main problem is that I don't know where this issue comes from, since I've never used dill before, and I don't understand why running the pipeline from the shell is so different from running it from Cloud Functions. Does anybody have a clue?
Thanks
You may also try modifying the last part as follows and test whether it works:
if __name__ == "__main__":
...
Additionally, make sure you are executing the script in the correct folder, as it might have to do with the naming or the location of your files in your directories.
Please take into consideration the following sources, which you may find helpful: Source 1, Source 2
I hope this information helps.
You're probably using gunicorn to start the app on Cloud Run (as is standard practice), like:
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
I faced the same problem and found a workaround: start the app without gunicorn:
CMD exec python3 main.py
Probably it's because gunicorn skips the main context and directly launches the main:app object. I don't know how to fix it while still using gunicorn.
=== Additional note ===
I found a way to use gunicorn.
Move a function (that starts a pipeline) to another module such as df_pipeline/pipe.py.
.
├── df_pipeline
│ ├── __init__.py
│ └── pipe.py
├── Dockerfile
├── main.py
├── requirements.txt
└── setup.py
# in main.py
import df_pipeline as pipe
result = pipe.preprocess(....)
Create setup.py in the same directory as main.py
# setup.py
import setuptools
setuptools.setup(
name='df_pipeline',
install_requires=[],
packages=setuptools.find_packages(include=['df_pipeline']),
)
Set the pipeline option setup_file as ./setup.py in df_pipeline/pipe.py.
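For example, a sketch of what df_pipeline/pipe.py could look like (the preprocess name follows the main.py snippet above; the project, bucket and region values are the placeholders from the question):

# df_pipeline/pipe.py
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def preprocess(pubsub_message):
    options = PipelineOptions([
        '--project=project-name',
        '--region=us-central1',
        '--runner=DataflowRunner',
        '--temp_location=gs://temp/location/',
        '--staging_location=gs://staging/location/',
        '--setup_file=./setup.py',  # ships the df_pipeline package to the Dataflow workers
    ])
    with beam.Pipeline(options=options) as p:
        ...  # build the same steps as in the question (Create / FlatMap / WriteToBigQuery)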
I have a problem configuring the Endpoints API. Any code I use, from my own to Google's examples on the site, fails with the same traceback:
WARNING 2016-11-01 06:16:48,279 client.py:229] no scheduler thread, scheduler.run() will be invoked by report(...)
Traceback (most recent call last):
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/client.py", line 225, in start
self._thread.start()
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/api/background_thread/background_thread.py", line 108, in start
start_new_background_thread(self.__bootstrap, ())
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/api/background_thread/background_thread.py", line 87, in start_new_background_thread
raise ERROR_MAP[error.application_error](error.error_detail)
FrontendsNotSupported
INFO 2016-11-01 06:16:48,280 client.py:327] created a scheduler to control flushing
INFO 2016-11-01 06:16:48,280 client.py:330] scheduling initial check and flush
INFO 2016-11-01 06:16:48,288 client.py:804] Refreshing access_token
/home/vladimir/projects/sb_fork/sb/lib/vendor/urllib3/contrib/appengine.py:113: AppEnginePlatformWarning: urllib3 is using URLFetch on Google App Engine sandbox instead of sockets. To use sockets directly instead of URLFetch see https://urllib3.readthedocs.io/en/latest/contrib.html.
AppEnginePlatformWarning)
ERROR 2016-11-01 06:16:49,895 service_config.py:125] Fetching service config failed (status code 403)
ERROR 2016-11-01 06:16:49,896 wsgi.py:263]
Traceback (most recent call last):
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/vladimir/projects/sb_fork/sb/main.py", line 27, in <module>
api_app = endpoints.api_server([SolarisAPI,], restricted=False)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/endpoints/apiserving.py", line 497, in api_server
controller)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/wsgi.py", line 77, in add_all
a_service = loader.load()
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/service.py", line 110, in load
return self._load_func(**kw)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/config/service_config.py", line 78, in fetch_service_config
_log_and_raise(Exception, message_template.format(status_code))
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/config/service_config.py", line 126, in _log_and_raise
raise exception_class(message)
Exception: Fetching service config failed (status code 403)
INFO 2016-11-01 06:16:49,913 module.py:788] default: "GET / HTTP/1.1" 500 -
My app.yaml is configured as the new Endpoints "Migrating to 2.0" document states:
- url: /_ah/api/.*
script: api.solaris.api_app
And main.py imports the API into the app:
api_app = endpoints.api_server([SolarisAPI,], restricted=False)
I use Google Cloud SDK with these versions:
Google Cloud SDK 132.0.0
app-engine-python 1.9.40
bq 2.0.24
bq-nix 2.0.24
core 2016.10.24
core-nix 2016.10.24
gcloud
gsutil 4.22
gsutil-nix 4.22
Have you tried generating and uploading the OpenAPI configuration for the service? See the sections named "Generating the OpenAPI configuration file" and "Deploying the OpenAPI configuration file" in the python library documentation.
Note that in step 2 of the generation process, you may need to prepend python to the command (e.g. python lib/endpoints/endpointscfg.py get_swagger_spec ...), since the PyPI package doesn't preserve executable file permissions right now.
To get rid of the "FrontendsNotSupported" error you need to use a "B*" instance class.
The error "Exception: Fetching service config failed" should be gone if you follow the steps in https://cloud.google.com/endpoints/docs/frameworks/python/quickstart-frameworks-python. As already pointed out by Brad, the section "OpenAPI configuration" and the resulting environment variables are required to make the service configuration work.
Environment: I am running Django==1.5.4 with Python 2.7.2, deploying to Heroku. I am using Haystack with Elasticsearch. On Heroku I'm using the Bonsai Elasticsearch add-on.
Issue: When I run the rebuild_index command, I encounter a "Read timeout error" when destroying the index and "IndexMissingException" when attempting to create the indexes. The log output is this:
> heroku run python manage.py rebuild_index
Running `python manage.py rebuild_index` attached to terminal... up, run.1762
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
Failed to clear Elasticsearch index: Non-OK response returned (404): u'IndexMissingException[[msdc] missing]'
All documents removed.
Indexing 195 schools
ERROR:root:Error updating schools using default
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 221, in handle_label
self.update_backend(label, using)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 267, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 89, in do_update
backend.update(index, current_qs)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/backends/elasticsearch_backend.py", line 183, in update
self.conn.bulk_index(self.index_name, 'modelresult', prepped_docs, id_field=ID)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 96, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 387, in bulk_index
query_params=query_params)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 254, in send_request
self._raise_exception(resp, prepped_response)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 268, in _raise_exception
raise error_class(response.status_code, error_message)
ElasticHttpNotFoundError: (404, u'IndexMissingException[[msdc] missing]')
ElasticHttpNotFoundError: (404, u'IndexMissingException[[msdc] missing]')
Verification: I have created the index explicitly, which I have verified by trying to re-create it and running through the test steps:
> curl -X POST http://index@box.us-east-1.bonsai.io/msdc
{"error":"Index already exists.","status":400}%
> curl -X POST http://index@boc.us-east-1.bonsai.io/msdc/test -d '{"title":"hello, world"}'
{"ok":true,"_index":"msdc","_type":"test","_id":"9q8t4m0sTgy6JeGkueL54Q","_version":1}%
> curl -X POST http://index@box.us-east-1.bonsai.io/msdc/_search -d '{}'
{"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"msdc","_type":"test","_id":"9q8t4m0sTgy6JeGkueL54Q","_score":1.0, "_source" : {"title":"hello, world"}}]}}%
I am fairly new to Elasticsearch and Heroku, so I may be missing a critical step. Any help troubleshooting this error would be greatly appreciated!
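For context, the kind of Haystack connection configuration involved here would look roughly like this (a sketch: the engine path is the standard Haystack 2.x Elasticsearch backend, and the BONSAI_URL variable and msdc index name are assumptions based on the setup and logs above):

# settings.py (sketch)
import os

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': os.environ.get('BONSAI_URL', 'http://127.0.0.1:9200/'),
        'INDEX_NAME': 'msdc',
    },
}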