Endpoints quickstart-app-engine for Python deploy error

I'm trying to follow the step-by-step tutorial found here, but the deploy just does not work, always returning Error Response: [13] An internal error occurred.
I did not change anything in the example code itself. As I said, I just followed the linked tutorial carefully. It fails and returns the error above when I run gcloud app deploy.
Running gcloud app deploy --verbosity debug prints a stack trace, but nothing in it looks useful. I'm copying it below for completeness:
Updating service [default] (this may take several minutes)...failed.
DEBUG: (gcloud.app.deploy) Error Response: [13] An internal error occurred
Traceback (most recent call last):
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 791, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 756, in Run
resources = command_instance.Run(args)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/surface/app/deploy.py", line 65, in Run
parallel_build=False)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 587, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 395, in Deploy
extra_config_settings)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/appengine_api_client.py", line 188, in DeployService
message=message)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 244, in WaitForOperation
sleep_ms=retry_interval)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 266, in WaitFor
sleep_ms=sleep_ms)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 222, in RetryOnResult
if not should_retry(result, state):
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 260, in _IsNotDone
return not poller.IsDone(operation)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 169, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [13] An internal error occurred
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred
Below is my app.yaml (exactly the same as the one in the example repository, except for the APPID):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: echo-api.endpoints.MYAPPID.cloud.goog
  config_id: [2018-01-09r1]
# [END configuration]
What I've tried so far:
Changing the Python version to 2 (python_version: 2);
Adding a skip_files section to app.yaml (copied from the Endpoints Frameworks tutorial for the standard environment):
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?setuptools/script \(dev\).tmpl$
Trying the workaround from here:
gcloud config set app/use_deprecated_preparation True
Nothing worked. What am I doing wrong?
Notes:
It works fine locally, following the README instructions;
Everything works in the GAE standard environment, following this other tutorial;
I did not find any problem with the Endpoints service itself (I can see it deployed in the API Explorer), but the app deploy never worked.

If the app.yaml file you are using is exactly the same as the one you copied into your question, then there seems to be an error in the name and config_id you are entering. With the info provided in your question, your app.yaml should look like:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: "echo-api.endpoints.MYAPPID.cloud.goog"
  config_id: "2018-01-09r1"
# [END configuration]
Note that you were not wrapping the values for name and config_id in quotation marks, and you should not put your config_id inside brackets.
I have tried this myself and it works fine. If it still doesn't work for you, maybe your name is not correct, as the format seems to be:
"example-project.appspot.com"
So if your project ID is, say, my-project-id, then your name would look like:
"my-project-id.appspot.com"

Related

FastAPI python app error on Azure App Service

I have a Python web application that uses FastAPI. It works locally, but when I deploy it to a free Linux Azure App Service (using GitHub Actions) and try to load the site, it says "Internal Server Error". When I pull up the application logs, I see the following error message:
2023-02-06T23:44:30.765055894Z [2023-02-06 23:44:30 +0000] [90] [ERROR] Error handling request /
2023-02-06T23:44:30.765101490Z Traceback (most recent call last):
2023-02-06T23:44:30.765109589Z File "/opt/python/3.10.9/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 136, in handle
2023-02-06T23:44:30.765116389Z self.handle_request(listener, req, client, addr)
2023-02-06T23:44:30.765122088Z File "/opt/python/3.10.9/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 179, in handle_request
2023-02-06T23:44:30.765128688Z respiter = self.wsgi(environ, resp.start_response)
2023-02-06T23:44:30.765134688Z TypeError: FastAPI.__call__() missing 1 required positional argument: 'send'
Any suggestions on how to fix this issue?
I was able to resolve this issue by adding the following custom startup command in the Azure App Service Configuration General Settings:
python -m uvicorn app:app --host 0.0.0.0
As h4z3 pointed out, gunicorn is a WSGI server and FastAPI is an ASGI app, so I had to change the startup command to use uvicorn. Additional details can be found in the Azure docs here: https://learn.microsoft.com/en-us/azure/app-service/configure-language-python#example-startup-commands
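If you'd rather keep gunicorn as the process manager, it can host an ASGI app through uvicorn's worker class. A sketch of that alternative startup command, assuming the uvicorn package is installed and your app object lives in app.py:

gunicorn -w 2 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 app:app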

git webhook fails - do you know why?

Background:
I have a bitbucket repo called DOSTUFF that includes a python script do_stuff.py. I edit it using Eclipse PyDev on my local machine and push changes to bitbucket via git push origin master.
I cloned DOSTUFF to a pythonanywhere trial account without any issues.
Now, whenever I edit do_stuff.py locally and then git commit -m 'foo' && git push origin master to bitbucket, I afterwards need to manually git pull from within pythonanywhere in order to see the edits there. This is inefficient.
Objectives:
I want my local (Eclipse) commits to bitbucket to be pulled automatically to pythonanywhere once pushed. Apparently, webhooks are the way to go.
Challenges:
In order to do so, I followed this hint by specifying a webhook within bitbucket pointing to pythonanywhere/user/me/webhook.py. Unfortunately, those instructions are minimal, as they lack proper imports and don't explain why Flask is necessary (I am not an expert).
webhook.py looks like this:
#!/usr/bin/python2.7
# -*- coding: utf-8 -*-
import git
from flask import Flask, request

# Initiate flask instance
app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    if request.method == 'POST':
        repo = git.Repo('./DOSTUFF')
        origin = repo.remotes.origin
        repo.create_head('master',
                         origin.refs.master).set_tracking_branch(origin.refs.master).checkout()
        origin.pull()
        return '', 200
    else:
        return '', 400

if __name__ == '__main__':
    app.run(port=5000, debug=True)
Now, when I git push from Eclipse to bitbucket, the commit(s) arrive at bitbucket but the code on pythonanywhere remains unchanged. In other words, webhook.py fails.
In contrast, when I run webhook.py from within pythonanywhere (bash console), I get the following error:
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: on
Traceback (most recent call last):
File "/home/ME/webhook.py", line 21, in <module>
app.run(port=5000,debug=True)
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 943, in run
run_simple(host, port, self, **options)
File "/usr/local/lib/python2.7/dist-packages/werkzeug/serving.py", line 795, in run_simple
s.bind(get_sockaddr(hostname, port, address_family))
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use
Questions:
What's the root cause of this failure?
How do I properly configure a webhook that is necessary and sufficient to automatically git pull changes to pythonanywhere once they are pushed from local to bitbucket?
You're trying to start a server in a PythonAnywhere console, which will not work since traffic is not routed to console servers. Use a web app to create the server that listens for the webhooks.
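A minimal sketch of what that looks like, assuming the module is wired up through the web app's WSGI configuration file on PythonAnywhere (the repository path is a placeholder):

# webhook_app.py -- served via PythonAnywhere's WSGI config; no app.run() needed
import git
from flask import Flask, request

app = Flask(__name__)

@app.route('/webhook', methods=['POST'])
def webhook():
    # Bitbucket POSTs to this URL on every push; pull the latest changes.
    repo = git.Repo('/home/ME/DOSTUFF')  # absolute path to the clone
    repo.remotes.origin.pull()
    return '', 200

PythonAnywhere's own server imports the app object through the WSGI file, so the development server (and the Errno 98 it triggers) never comes into play.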

How can I run Lektor within a Docker container?

I'm attempting to run Lektor within a Docker container and have hit a problem.
If I 'ADD' (or 'COPY') my source code folder within my Dockerfile, everything works perfectly but, of course, the container is then not dynamic and doesn't respond to changes in the code.
If, instead, I use a volume, the container becomes dynamic and lektor successfully rebuilds and serves my site as I make changes.
However, when I come to publish the site, an error appears in the container's log and it enters a never-ending loop:
Started build
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/admin/utils.py", line 18, in generate
for event in chain(f(*args, **kwargs), (None,)):
File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/admin/modules/api.py", line 309, in generator
for event in event_iter:
File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/publisher.py", line 639, in publish
self.link_artifacts(path)
File "/usr/local/lib/lektor/lib/python2.7/site-packages/lektor/publisher.py", line 602, in link_artifacts
link(full_path, dst)
OSError: [Errno 18] Invalid cross-device link
Minimal Dockerfile:
FROM python:2.7.11
RUN curl -sf https://www.getlektor.com/install.sh | \
sed '/stdin/d;s/input = .*/return/' | \
sh
I'm actually using docker-compose.
Minimal docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/project
    working_dir: /project/source
    command: ['lektor', 'server', '--host', '0.0.0.0']
(My project folder is structured such that the lektor project file and all the expected lektor folders are in the 'source' sub-folder).
The Lektor build process builds into a temporary folder and then hard-links the built files into place. If the source code is on a mounted volume (which it is here, as a Docker volume), the two locations are on different filesystems and the linking fails as above.
Building and deploying via the command line and specifying the output path can work around the problem (described here: https://www.getlektor.com/docs/deployment/), but it's not a great solution within a Docker container, where the aim is to make life as simple as possible.
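A sketch of that workaround, assuming the -O/--output-path flag described in the linked deployment docs and an output directory kept on the container's own filesystem rather than the mounted volume (the path is a placeholder):

lektor build -O /tmp/lektor-build
lektor deploy -O /tmp/lektor-build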
The method that does the linking within Lektor actually falls back to copying instead in some circumstances. I've created an issue (https://github.com/lektor/lektor/issues/315) suggesting that the fallback should also occur when the project and output folders are on different volumes. I suspect that would solve the problem properly.
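For reference, the suggested fallback is roughly this pattern (a sketch, not Lektor's actual code):

import errno
import os
import shutil

def link_or_copy(src, dst):
    """Hard-link src to dst, copying instead when the two paths live on
    different filesystems (EXDEV, 'Invalid cross-device link')."""
    try:
        os.link(src, dst)
    except OSError as exc:
        if exc.errno == errno.EXDEV:
            shutil.copy2(src, dst)
        else:
            raise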

Heroku app - log folder

Since Heroku has a read-only filesystem except for two directories (log and tmp), I wanted to write the logs from my Python app to one of them.
The git repository pushed to the Heroku app contains both of the folders I created (checked twice; I even downloaded the app after pushing to confirm both directories are there).
When running "heroku run bash" I can only see the "tmp" folder. "log" is not visible via "ls -la", nor to the app, as I receive errors about a missing location for the .log files.
2013-08-05T13:10:41.170434+00:00 heroku[web.1]: Starting process with command `python runserver.py`
2013-08-05T13:10:43.132418+00:00 app[web.1]: Traceback (most recent call last):
2013-08-05T13:10:43.609980+00:00 app[web.1]: File "runserver.py", line 2, in <module>
2013-08-05T13:10:43.725134+00:00 app[web.1]: from app_name import app
2013-08-05T13:10:43.850738+00:00 app[web.1]: File "/app/app_name/__init__.py", line 13, in <module>
2013-08-05T13:10:43.968714+00:00 app[web.1]: from logger import flask_debug
2013-08-05T13:10:44.081900+00:00 app[web.1]: File "/app/logger.py", line 8, in <module>
2013-08-05T13:10:44.194540+00:00 app[web.1]: logging.config.dictConfig(CONFIG)
2013-08-05T13:10:44.306174+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/logging/config.py", line 797, in dictConfig
2013-08-05T13:10:44.425589+00:00 app[web.1]: dictConfigClass(config).configure()
2013-08-05T13:10:44.535392+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/logging/config.py", line 579, in configure
2013-08-05T13:10:44.274067+00:00 heroku[web.1]: Process exited with status 1
2013-08-05T13:10:44.287252+00:00 heroku[web.1]: State changed from starting to crashed
So, quick thinking, I wanted to check whether I could place the logs in the tmp folder instead. The app starts and everything looks fine... but nothing is written there by the app.
I'm at a loss, as I've been looking for a solution for quite some time.
Maybe someone can tell me:
Why is the "log" directory under the /app_name folder not visible?
Why is the "tmp" directory not receiving logs?
http://speedy.sh/VaR2w/logger.conf - here's the *.conf file for my loggers
http://speedy.sh/c3QNs/logger.py - here are loggers
PS. Logging to the console works with the "tmp" folder configuration.
Two better ways:
1) Using Heroku's log drain feature: https://devcenter.heroku.com/articles/logging#syslog-drains
2) Using one of the addons: Papertrail and Loggly come to mind; both have free plans.
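Both options assume your app writes its logs to stdout/stderr, where Heroku's log router picks them up; dyno-local files are lost on restart. A minimal sketch of pointing Python's logging there (the format string is just an example):

import logging
import sys

# Send logs to stdout so Heroku's log router (and any drain or addon) sees them.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s %(name)s %(levelname)s %(message)s',
)

logging.getLogger(__name__).info('hello from the dyno')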
I use #1 above: I have an EC2 instance set up that aggregates all the logs from all of my dynos into a specific set of files, which I manage with logrotate. I then use plain grep to search through them, and tail -f to follow, if I feel like it. My rsyslogd configuration on said EC2 instance is:
----- 8< ----- cut here ----- 8< ----- cut here ----- 8< ----- cut here -----
$ModLoad imtcp
$InputTCPServerRun 5514
# vi /etc/rsyslog.d/01-heroku.conf
if $syslogtag startswith 'app[postgres]' then /matchspot-logs/postgres
& ~
if $syslogtag startswith 'app[pgbackups]' then /matchspot-logs/postgres
& ~
if $syslogtag startswith 'heroku[' then /matchspot-logs/heroku
& ~
if $syslogtag startswith 'app[' then /matchspot-logs/app
& ~
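To point Heroku at that listener, a drain would be added with something like this (the hostname is a placeholder; the port matches the $InputTCPServerRun line above):

heroku drains:add syslog://my-ec2-host.example.com:5514 -a my-app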
When you heroku run, it spins up a new dyno so as not to degrade your web server's performance... think about it like this: if you had 2 web dynos, which one would you get when you heroku run? :)
The approaches suggested by other people here should work much better for you.
Also, you should be able to add free addons without a confirmed account.

Error when running "python manage.py syncdb" locally, but no error when running the same command via Heroku

I am new to Heroku and Django/Python. I was hoping to find an answer for an issue I'm experiencing. I have been following the Getting Started tutorial in Heroku's Dev Center: https://devcenter.heroku.com/articles/django
Everything works properly when running commands and pushing app code to Heroku. For example, when I run the CLI command "heroku run python manage.py syncdb", everything works as expected with no errors. However, when I run the same command locally, "python manage.py syncdb", I get the following output and error:
Scotts-MacBook-Pro:bonavina scottklieberman$ python manage.py syncdb
Traceback (most recent call last):
File "manage.py", line 10, in <module>
...
File "/Library/Python/2.7/site- packages/django/db/backends/postgresql_psycopg2/base.py", line 162, in _cursor
raise ImproperlyConfigured("You need to specify NAME in your Django settings file.")
django.core.exceptions.ImproperlyConfigured: You need to specify NAME in your Django settings file.
I then went back and checked my settings.py file. I am not specifying NAME in the settings file because I am using dj_database_url, as per the Heroku tutorial. I am curious as to why this fails locally (why it requires NAME), whereas it runs successfully on Heroku. Any help would be greatly appreciated. Let me know if there is any additional information you need to diagnose the issue.
Best,
Scott
dj_database_url uses the value of the environment variable DATABASE_URL, which is set on Heroku but is probably not set in your local environment (unless you set it yourself).
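A common pattern is to give dj_database_url a local fallback so the same settings.py works in both places. A sketch, assuming the dj-database-url package from the tutorial (the sqlite URL is just an illustrative default):

# settings.py
import dj_database_url

DATABASES = {
    # Uses DATABASE_URL when set (as on Heroku); otherwise falls back
    # to the local database URL given here.
    'default': dj_database_url.config(default='sqlite:///local.db'),
}

Alternatively, export DATABASE_URL locally before running manage.py, e.g. export DATABASE_URL=postgres://user:pass@localhost:5432/mydb.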
