Environment: I am running Django==1.5.4 with Python 2.7.2, deploying to Heroku. I am using Haystack with Elastic Search. On Heroku I'm using the Bonsai Elastic Search add-on.
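For context, a typical Haystack configuration for this setup looks something like the following minimal sketch (the BONSAI_URL variable name is an assumption about how the add-on exposes the connection URL; the index name matches the one in the logs below):

# settings.py -- minimal sketch of a Haystack/Elasticsearch connection on Bonsai
import os

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': os.environ.get('BONSAI_URL', 'http://127.0.0.1:9200/'),  # Bonsai add-on URL, local ES as fallback
        'INDEX_NAME': 'msdc',
    },
}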
Issue: When I run the rebuild_index command, I get a read timeout error when destroying the index and an IndexMissingException when attempting to create the indexes. The log output is:
> heroku run python manage.py rebuild_index
Running `python manage.py rebuild_index` attached to terminal... up, run.1762
WARNING: This will irreparably remove EVERYTHING from your search index in connection 'default'.
Your choices after this are to restore from backups or rebuild via the `rebuild_index` command.
Are you sure you wish to continue? [y/N] y
Removing all documents from your index because you said so.
Failed to clear Elasticsearch index: Non-OK response returned (404): u'IndexMissingException[[msdc] missing]'
All documents removed.
Indexing 195 schools
ERROR:root:Error updating schools using default
Traceback (most recent call last):
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 221, in handle_label
self.update_backend(label, using)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 267, in update_backend
do_update(backend, index, qs, start, end, total, self.verbosity)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/management/commands/update_index.py", line 89, in do_update
backend.update(index, current_qs)
File "/app/.heroku/python/lib/python2.7/site-packages/haystack/backends/elasticsearch_backend.py", line 183, in update
self.conn.bulk_index(self.index_name, 'modelresult', prepped_docs, id_field=ID)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 96, in decorate
return func(*args, query_params=query_params, **kwargs)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 387, in bulk_index
query_params=query_params)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 254, in send_request
self._raise_exception(resp, prepped_response)
File "/app/.heroku/python/lib/python2.7/site-packages/pyelasticsearch/client.py", line 268, in _raise_exception
raise error_class(response.status_code, error_message)
ElasticHttpNotFoundError: (404, u'IndexMissingException[[msdc] missing]')
ElasticHttpNotFoundError: (404, u'IndexMissingException[[msdc] missing]')
Verification: I have created the index explicitly, which I have verified by trying to re-create it and running through the following test steps:
> curl -X POST http://index#box.us-east-1.bonsai.io/msdc
{"error":"Index already exists.","status":400}%
> curl -X POST http://index#box.us-east-1.bonsai.io/msdc/test -d '{"title":"hello, world"}'
{"ok":true,"_index":"msdc","_type":"test","_id":"9q8t4m0sTgy6JeGkueL54Q","_version":1}%
> curl -X POST http://index#box.us-east-1.bonsai.io/msdc/_search -d '{}'
{"took":2,"timed_out":false,"_shards":{"total":1,"successful":1,"failed":0},"hits":{"total":1,"max_score":1.0,"hits":[{"_index":"msdc","_type":"test","_id":"9q8t4m0sTgy6JeGkueL54Q","_score":1.0, "_source" : {"title":"hello, world"}}]}}%
I am fairly new to Elasticsearch and Heroku, so I may be missing a critical step. Any help troubleshooting this error would be greatly appreciated!
I'm trying to set up an Airflow DAG that is able to send emails through the EmailOperator in Composer 2 (Airflow 2.3.4). I've followed this guide. I tried running the example DAG provided in the guide, but I get an HTTP 400 error.
The log looks like this:
[2023-01-20, 10:46:45 UTC] {taskinstance.py:1904} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/email.py", line 75, in execute
send_email(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/utils/email.py", line 58, in send_email
return backend(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 123, in send_email
_post_sendgrid_mail(mail.get(), conn_id)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/sendgrid/utils/emailer.py", line 142, in _post_sendgrid_mail
response = sendgrid_client.client.mail.send.post(request_body=mail_data)
File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 277, in http_request
self._make_request(opener, request, timeout=timeout)
File "/opt/python3.8/lib/python3.8/site-packages/python_http_client/client.py", line 184, in _make_request
raise exc
python_http_client.exceptions.BadRequestsError: HTTP Error 400: Bad Request
I've looked at similar threads on Stack Overflow, but none of those suggestions worked for me.
I have set up and verified the 'from' email address in SendGrid, and it uses a full email address including the domain.
I also set this email address up in Secret Manager (as well as the API key).
I haven't changed the test DAG from the guide, except for the 'to' address.
In another DAG I've tried enabling 'email_on_retry', and that also didn't trigger any email.
I'm at a loss here; can anyone suggest things to try?
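For reference, the test DAG boils down to a single EmailOperator task, roughly like this (a simplified sketch; the guide's actual example differs in details, and the addresses and IDs here are placeholders):

# Minimal sketch of the email test DAG (addresses and IDs are placeholders)
from datetime import datetime

from airflow import DAG
from airflow.operators.email import EmailOperator

with DAG(
    dag_id="email_test",
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    send_test_email = EmailOperator(
        task_id="send_test_email",
        to="recipient@example.com",  # the only field I changed from the guide
        subject="Test email from Cloud Composer",
        html_content="<p>Hello from Airflow.</p>",
    )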
I am new to GCP. I created a Cloud Function and tried to deploy it, but I encountered the following error while deploying. Can anyone help me solve this issue? Thank you!
Command:
gcloud functions deploy first_cloud_function_http --runtime python37 --trigger-http --allow-unauthenticated --verbosity debug
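For context, main.py contains just a simple HTTP handler along these lines (a minimal sketch; the entry point name matches the deploy target above, and the body is a placeholder):

# main.py -- minimal sketch of the HTTP function being deployed
def first_cloud_function_http(request):
    """HTTP Cloud Function entry point; `request` is a Flask request object."""
    name = request.args.get('name', 'World')
    return 'Hello, {}!'.format(name)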
Error Logs:
DEBUG: Running [gcloud.functions.deploy] with arguments: [--allow-unauthenticated: "True", --runtime: "python37", --trigger-http: "True", --verbosity: "debug", NAME: "first_cloud_function_http"]
INFO: Not using ignore file.
INFO: Not using ignore file.
Deploying function (may take a while - up to 2 minutes)...failed.
DEBUG: (gcloud.functions.deploy) OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
Traceback (most recent call last):
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 808, in Run
resources = command_instance.Run(args)
File "/home/hasher/GN/google-cloud-sdk/lib/surface/functions/deploy.py", line 351, in Run
return _Run(args, track=self.ReleaseTrack())
File "/home/hasher/GN/google-cloud-sdk/lib/surface/functions/deploy.py", line 305, in _Run
on_every_poll=[TryToLogStackdriverURL])
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 318, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 369, in WaitForFunctionUpdateOperation
on_every_poll=on_every_poll)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 151, in Wait
on_every_poll)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 121, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/home/hasher/GN/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 73, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
googlecloudsdk.api_lib.functions.exceptions.FunctionsError: OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
ERROR: (gcloud.functions.deploy) OperationError: code=13, message=Failed to initialize region (action ID: 78ed38913711b6cd)
Notice that, as stated in the documentation of the gcloud functions deploy command, you need to set the --region flag.
To check the regions where Cloud Functions is available, refer to the relevant section of the documentation.
For example, if you'd like to deploy the function in the europe-west1 region, running the following command would suffice:
gcloud functions deploy first_cloud_function_http --region europe-west1 --runtime python37 --trigger-http --allow-unauthenticated --verbosity debug
Additionally, if you'd like to avoid using the --region flag, you can set a default region for Cloud Functions by running:
gcloud config set functions/region REGION
where REGION is any of the locations mentioned above.
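For example, to make europe-west1 the default region and then redeploy without the --region flag:

gcloud config set functions/region europe-west1
gcloud functions deploy first_cloud_function_http --runtime python37 --trigger-http --allow-unauthenticated --verbosity debug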
The PYBOSSA docs don't describe how to configure a webhook.
I ran into some issues while configuring one; these are my steps:
Fork the PYBOSSA webhook example (a simplified sketch of this app is shown after these steps).
Run the webhook with its default settings (I only modified the api_key and endpoint).
In PYBOSSA, edit the project and add a webhook pointing to the URL where the webhook app is running.
Open a command line window and execute the following command:
# rqworker high
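As mentioned above, the webhook app boils down to a small Flask endpoint that receives the JSON payload PYBOSSA posts (a simplified sketch; the real example app does more, and the host/port just match my setup):

# Simplified sketch of the webhook receiver
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/', methods=['POST'])
def receive_webhook():
    # Payload contains project_short_name, project_id, task_id, result_id, event, fired_at
    payload = request.get_json()
    # ... process the completed task here ...
    return jsonify(status='ok'), 200

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)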
Then, when a task is completed, I see logs in the command-line window complaining about the following error:
14:06:11 *** Listening on high...
14:07:42 high: pybossa.jobs.webhook(u'http://192.168.116.135:5001', {'project_short_name': u'tw', 'task_id': 172, 'fired_at': '2017-08-10 06:07:42', 'project_id': 17, 'result_id': 75, 'event': 'task_completed'}) (e435386c-615d-4525-a65d-f08f0afd2351)
14:07:44 UnboundLocalError: local variable 'project' referenced before assignment
Traceback (most recent call last):
File "/home/baib2/Desktop/pybossa_server/env/local/lib/python2.7/site-packages/rq/worker.py", line 479, in perform_job
rv = job.perform()
File "/home/baib2/Desktop/pybossa_server/env/local/lib/python2.7/site-packages/rq/job.py", line 466, in perform
self._result = self.func(*self.args, **self.kwargs)
File "./pybossa/jobs.py", line 525, in webhook
if project.published and webhook.response_status_code != 200 and current_app.config.get('ADMINS'):
UnboundLocalError: local variable 'project' referenced before assignment
I'm not sure whether we should be running the following command:
# rqworker high
but if this rqworker is not running, I don't see any component picking up work from the Redis queue.
You need to run a specific worker, not the default one from PYBOSSA. Just use https://github.com/Scifabric/pybossa/blob/master/app_context_rqworker.py to run it like this:
python app_context_rqworker.py high
This will set up the Flask context, and it will run properly ;-)
We're in the middle of improving our docs, so this should be better in the coming months.
I have a problem configuring the Endpoints API. Any code I use, from my own to Google's examples on the site, fails with the same traceback:
WARNING 2016-11-01 06:16:48,279 client.py:229] no scheduler thread, scheduler.run() will be invoked by report(...)
Traceback (most recent call last):
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/client.py", line 225, in start
self._thread.start()
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/api/background_thread/background_thread.py", line 108, in start
start_new_background_thread(self.__bootstrap, ())
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/api/background_thread/background_thread.py", line 87, in start_new_background_thread
raise ERROR_MAP[error.application_error](error.error_detail)
FrontendsNotSupported
INFO 2016-11-01 06:16:48,280 client.py:327] created a scheduler to control flushing
INFO 2016-11-01 06:16:48,280 client.py:330] scheduling initial check and flush
INFO 2016-11-01 06:16:48,288 client.py:804] Refreshing access_token
/home/vladimir/projects/sb_fork/sb/lib/vendor/urllib3/contrib/appengine.py:113: AppEnginePlatformWarning: urllib3 is using URLFetch on Google App Engine sandbox instead of sockets. To use sockets directly instead of URLFetch see https://urllib3.readthedocs.io/en/latest/contrib.html.
AppEnginePlatformWarning)
ERROR 2016-11-01 06:16:49,895 service_config.py:125] Fetching service config failed (status code 403)
ERROR 2016-11-01 06:16:49,896 wsgi.py:263]
Traceback (most recent call last):
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/vladimir/sdk/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/vladimir/projects/sb_fork/sb/main.py", line 27, in <module>
api_app = endpoints.api_server([SolarisAPI,], restricted=False)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/endpoints/apiserving.py", line 497, in api_server
controller)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/wsgi.py", line 77, in add_all
a_service = loader.load()
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/control/service.py", line 110, in load
return self._load_func(**kw)
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/config/service_config.py", line 78, in fetch_service_config
_log_and_raise(Exception, message_template.format(status_code))
File "/home/vladimir/projects/sb_fork/sb/lib/vendor/google/api/config/service_config.py", line 126, in _log_and_raise
raise exception_class(message)
Exception: Fetching service config failed (status code 403)
INFO 2016-11-01 06:16:49,913 module.py:788] default: "GET / HTTP/1.1" 500 -
My app.yaml is configured as the new Endpoints "Migrating to 2.0" document states:
- url: /_ah/api/.*
script: api.solaris.api_app
And main.py imports the API into the app:
api_app = endpoints.api_server([SolarisAPI,], restricted=False)
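where SolarisAPI is a regular Endpoints service class, roughly like this (a sketch; the API name and omitted methods are illustrative assumptions):

# Sketch of the service class referenced above (API name is an assumption, methods omitted)
import endpoints
from protorpc import remote

@endpoints.api(name='solaris', version='v1')
class SolarisAPI(remote.Service):
    pass  # message definitions and @endpoints.method handlers go here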
I use Google Cloud SDK with these versions:
Google Cloud SDK 132.0.0
app-engine-python 1.9.40
bq 2.0.24
bq-nix 2.0.24
core 2016.10.24
core-nix 2016.10.24
gcloud
gsutil 4.22
gsutil-nix 4.22
Have you tried generating and uploading the OpenAPI configuration for the service? See the sections named "Generating the OpenAPI configuration file" and "Deploying the OpenAPI configuration file" in the Python library documentation.
Note that in step 2 of the generation process, you may need to prepend python to the command (e.g. python lib/endpoints/endpointscfg.py get_swagger_spec ...), since the PyPI package doesn't preserve executable file permissions right now.
To get rid of the "FrontendsNotSupported" error, you need to use a "B*" instance class.
The "Exception: Fetching service config failed" error should be gone if you follow the steps in https://cloud.google.com/endpoints/docs/frameworks/python/quickstart-frameworks-python. As already pointed out by Brad, the "OpenAPI configuration" section and the resulting environment variables are required to make the service configuration work.
I am running Django 1.1.1 on Python 2.6.1, and I started the Django dev server like this:
manage.py runserver 192.0.0.1:8000
Then I tried to connect to the Django dev web server at http://192.0.0.1:8000/, and I keep getting this message on the remote computer:
Traceback (most recent call last):
File "C:\Python26\Lib\site-packages\django\core\servers\basehttp.py", line 279, in run
self.result = application(self.environ, self.start_response)
File "C:\Python26\Lib\site-packages\django\core\servers\basehttp.py", line 651, in __call__
return self.application(environ, start_response)
File "C:\Python26\lib\site-packages\django\core\handlers\wsgi.py", line 241, in __call__
response = self.get_response(request)
File "C:\Python26\lib\site-packages\django\core\handlers\base.py", line 115, in get_response
return debug.technical_404_response(request, e)
File "C:\Python26\Lib\site-packages\django\views\debug.py", line 247, in technical_404_response
tried = exception.args[0]['tried']
KeyError: 'tried'
What am I doing wrong?
It seems to work OK if I open http://192.0.0.1:8000/ on the computer that runs the Django web server and has that IP (192.0.0.1).
If you look at the revision logs of that file, you'll see that Django has recently started catching the KeyError that is raised in that try block.
The log message reads "Ensured generating debug 404 page won't raise a key error. Thanks pigletto."
See the ticket http://code.djangoproject.com/ticket/12083 and the changeset http://code.djangoproject.com/changeset/12679
So, I would check whether you're raising a debug 404 page, or whether any of the other comments in the ticket jump out at you.
Hope that helps!
Update: After looking at the code a little more closely, I'd take a good look at your urls.py file for any mistakes in the URL-resolving regexes. Are you using url('') instead of url(r'^$') for the root/homepage?
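For example, a root URL pattern for that Django version would look something like this (the view path 'myapp.views.home' is illustrative):

# urls.py -- root URL pattern for Django 1.1 (view path is illustrative)
from django.conf.urls.defaults import patterns, url

urlpatterns = patterns('',
    url(r'^$', 'myapp.views.home', name='home'),  # note the '^$' pattern for the homepage
)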