The old deploy had this code at line 34:
for _ in range(HEX[0][0]):
but HEX[0][0] was a string. Realizing this mistake, I changed it to
for _ in range(len(HEX[0][0])):
and redeployed.
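For context, range() needs an integer, so the old line fails exactly as the traceback below shows. A minimal sketch with a made-up HEX value (not my actual data):

HEX = [["a1b2c3"]]  # HEX[0][0] is a string, not an integer

# Old version -- raises TypeError: 'str' object cannot be interpreted as an integer:
# for _ in range(HEX[0][0]):
#     pass

# Fixed version -- iterates once per character of the string:
for _ in range(len(HEX[0][0])):
    pass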
However, Heroku is still running the old code and giving this error:
2020-07-12T09:02:32.927382+00:00 app[web.1]: Traceback (most recent call last):
2020-07-12T09:02:32.927384+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/apscheduler/executors/base.py", line 125, in run_job
2020-07-12T09:02:32.927384+00:00 app[web.1]: retval = job.func(*job.args, **job.kwargs)
2020-07-12T09:02:32.927385+00:00 app[web.1]: File "/app/finalDeploy.py", line 34, in timed_job_garbage
2020-07-12T09:02:32.927385+00:00 app[web.1]: for _ in range(HEX[0][0]):
2020-07-12T09:02:32.940876+00:00 app[web.1]: TypeError: 'str' object cannot be interpreted as an integer
The code lives in clock.py, which runs a custom clock for scheduling tasks, yet the error points at finalDeploy.py (which is my app file), even though that code doesn't even exist in finalDeploy.py.
This was a problem on Heroku's side. I switched off the dynos from the dashboard and ran heroku ps:scale web=0 and heroku ps:scale clock=0; the app only stopped about 15 minutes after doing this. Then I simply changed the app name, reinitialised the git repository and Heroku remote using git init . and heroku git:remote -a <new app name>, and redeployed. The app worked fine after that.
I deployed a web app on a GPU-enabled ACI (Azure Container Instance) using Gunicorn + Flask + Docker. The app runs a couple of PyTorch models (one of them being EasyOCR and the other YOLOv5).
The app was working fine, but then started to throw an exception for all incoming requests, similar to the following:
File "/usr/local/lib/python3.7/site-packages/werkzeug/wrappers/request.py", line 540, in json
return self.get_json()
File "/usr/local/lib/python3.7/site-packages/werkzeug/wrappers/request.py", line 575, in get_json
data = self.get_data(cache=cache)
File "/usr/local/lib/python3.7/site-packages/werkzeug/wrappers/request.py", line 405, in get_data
rv = self.stream.read()
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/body.py", line 215, in read
data = self.reader.read(1024)
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/body.py", line 130, in read
data = self.unreader.read()
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/unreader.py", line 37, in read
d = self.chunk()
File "/usr/local/lib/python3.7/site-packages/gunicorn/http/unreader.py", line 64, in chunk
return self.sock.recv(self.mxchunk)
File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 203, in handle_abort
sys.exit(1)
SystemExit: 1
gunicorn parameters
gunicorn wsgi:app --bind 0.0.0.0:443 --log-level=info --workers=3 --reload --timeout 120
ACI specs
4 Cores
8 GB RAM
1 GPU Tesla K80
Linux environment
I followed this blog to create the app.
I tried adjusting the timeout parameter following many other posts such as this one, but it didn't resolve the issue.
What caused this error, and how do I fix it?
Thank you @amro_ghoneim for sharing the resolution in the comments. Posting it as an answer to help other community members.
To get rid of those exceptions, change the worker type to gevent.
If your application code needs to pause or block for extended periods of time, use the gevent worker (-k gevent on the command line).
To switch to the gevent worker, install gevent and set the worker class:
pip install gevent
gunicorn .... --worker-class gevent
Reference: Gunicorn worker timeout error - Stack Overflow
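As a rough sketch, the same settings can also live in a Gunicorn config file; the values below mirror the command line from the question, and the file name gunicorn.conf.py is just a convention, not something from the original post:

# gunicorn.conf.py -- minimal sketch assuming the question's bind/worker/timeout values
bind = "0.0.0.0:443"
workers = 3
worker_class = "gevent"  # requires: pip install gevent
timeout = 120
loglevel = "info"
reload = True  # matches the --reload flag from the question

It would then be started with: gunicorn wsgi:app -c gunicorn.conf.py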
I've written a Flask app that's served with Gunicorn and runs on Raspberry Pi OS (Buster). The app is supposed to start automatically as a service on system boot. The issue is that the app fails when run as a service... but only when run as a service...
It used to work until I introduced server hooks into my Gunicorn configuration file. There are a few of them, but the first to be called, and thus the first to fail, is:
gunicorn.conf.py:
def on_starting(server):
    import wsgi
    wsgi.on_starting(server)
wsgi.py:
from multiprocessing import Manager  # needed for Manager() below

def on_starting(server):
    api_instance = server.app.wsgi()
    shared_memory_manager = Manager()
    api_instance.requestless_variables = shared_memory_manager.dict()
    api_instance.log = server.log
    server.log.info("Loading API...")
With the following traceback:
Traceback (most recent call last):
File "/home/pi/.local/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 67, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 198, in run
self.start()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 138, in start
self.cfg.on_starting(self)
File "/home/pi/nano/manager/src/api/gunicorn.conf.py", line 56, in on_starting
api_instance = server.app.wsgi()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/home/pi/.local/lib/python3.8/site-packages/gunicorn/util.py", line 430, in import_app
raise AppImportError("Application object must be callable.")
gunicorn.errors.AppImportError: Application object must be callable.
As you can see, the error seems to be with api_instance = server.app.wsgi(), which appears in each of my server hooks and is likewise my point of failure in each.
The absolute weirdest thing about this is that the app/Gunicorn works perfectly if instantiated directly from the terminal:
/home/pi/.local/bin/gunicorn -c /home/pi/nano/manager/src/api/gunicorn.conf.py --bind unix:nano_api.sock --umask 007
But produces the above error if instantiated from the following service:
[Unit]
Description=Gunicorn instance serving the Nano API
After=network.target
[Service]
User=pi
Group=www-data
WorkingDirectory=/home/pi/nano/manager/src/api
ExecStart=/home/pi/.local/bin/gunicorn -c /home/pi/nano/manager/src/api/gunicorn.conf.py --bind unix:nano_api.sock --umask 007
[Install]
WantedBy=multi-user.target
Anyone have any ideas as to what might be causing this issue and how to fix it? Many thanks!
Well, after a solid 10 hours of debugging, I finally figured out the problem...
I'm ashamed to say it was just a naming conflict. I had a package named "api" containing a module named "api.py" that housed a Flask instance named, you guessed it, "api."
Since giving these three items distinct names and fixing my references, everything has run smoothly.
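For illustration, here's a sketch of the kind of rename that removes the ambiguity; the names below are simplified stand-ins, not my actual ones:

# Before: package "api" + module api/api.py + Flask instance "api" all shared one name,
# so resolving the application object could pick up the wrong "api".
# After (hypothetical layout): api_pkg/routes.py
from flask import Flask

api_app = Flask(__name__)  # Flask instance with its own distinct name

@api_app.route("/")
def index():
    return "ok"

With distinct names, an app reference such as api_pkg.routes:api_app resolves unambiguously.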
I'm trying to follow the step-by-step tutorial found here, but the deploy just does not work, always returning Error Response: [13] An internal error occurred.
I did not change anything in the example code itself. As I said, I just followed the linked tutorial carefully. It fails and returns the error above when I try gcloud app deploy.
Running gcloud app deploy --verbosity debug produces a stack trace, but without any useful detail. I'm copying it below for completeness:
Updating service [default] (this may take several minutes)...failed.
DEBUG: (gcloud.app.deploy) Error Response: [13] An internal error occurred
Traceback (most recent call last):
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 791, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 756, in Run
resources = command_instance.Run(args)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/surface/app/deploy.py", line 65, in Run
parallel_build=False)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 587, in RunDeploy
flex_image_build_option=flex_image_build_option)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 395, in Deploy
extra_config_settings)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/appengine_api_client.py", line 188, in DeployService
message=message)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 244, in WaitForOperation
sleep_ms=retry_interval)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 266, in WaitFor
sleep_ms=sleep_ms)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 222, in RetryOnResult
if not should_retry(result, state):
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/util/waiter.py", line 260, in _IsNotDone
return not poller.IsDone(operation)
File "/Users/jamesmiranda/Documents/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 169, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [13] An internal error occurred
ERROR: (gcloud.app.deploy) Error Response: [13] An internal error occurred
Below is the app.yaml (exactly the same as the example repo, except for the APPID):
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: echo-api.endpoints.MYAPPID.cloud.goog
  config_id: [2018-01-09r1]
# [END configuration]
What I have tried so far:
Changing the Python version to Python 2 (python_version: 2);
Including some files to skip in app.yaml (copied from the Endpoints Frameworks standard env tutorial):
skip_files:
- ^(.*/)?#.*#$
- ^(.*/)?.*~$
- ^(.*/)?.*\.py[co]$
- ^(.*/)?.*/RCS/.*$
- ^(.*/)?\..*$
- ^(.*/)?setuptools/script \(dev\).tmpl$
Trying the workaround from here:
gcloud config set app/use_deprecated_preparation True
Nothing worked. What am I doing wrong?
Notes:
It works fine locally following the README info;
Everything works in the GAE standard env following this other tutorial;
I did not find any problem with the Endpoint itself (I can see it deployed in the API Explorer), but the app deploy did not work in any way.
If the app.yaml file you are using is exactly the same as the one you copied into your question, then there seems to be an error in the name and config_id you are entering. With the info provided in your question, your app.yaml should look like:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
  python_version: 3
# [START configuration]
endpoints_api_service:
  # The following values are to be replaced by information from the output of
  # 'gcloud endpoints services deploy openapi-appengine.yaml' command.
  name: "echo-api.endpoints.MYAPPID.cloud.goog"
  config_id: "2018-01-09r1"
# [END configuration]
Note that you were not entering the values for name and config_id inside quotation marks, and also that you should not put your config_id inside brackets.
I have tried this myself and it works fine. If it still doesn't work for you, maybe your name is not correct, as the format seems to be:
"example-project.appspot.com"
So if your project id is i.e. my-project-id, then your name would look like:
"my-project-id.appspot.com"
So I'm trying to create an app with Flask and Heroku. I can run it with Foreman just fine, but after deploying to Heroku, the application error page comes up and the Heroku logs show:
heroku[web.1]: State changed from crashed to starting
heroku[web.1]: Starting process with command `python app.py`
app[web.1]: File "app.py", line 2, in <module>
app[web.1]: from flask import Flask, send_from_directory
app[web.1]: ImportError: No module named flask
Any idea on how this could happen? Thanks!
EDIT: Flask is in the requirements file and I see that it gets installed during the push to Heroku.
You probably need to add Flask (and any other external dependencies) to a requirements.txt and include it in your repo.
You can use pip freeze > requirements.txt to create it with whatever packages you have installed in your environment at the moment.
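For reference, a minimal requirements.txt for an app like this might contain little more than a pinned Flask dependency (the version below is just an example, not from the question):

Flask==1.1.2

Commit it at the repo root so Heroku detects a Python app and installs the listed packages during the push.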
As Heroku has a read-only file system except for two directories (log and tmp), I wanted to dump the logs from my Python app into one of them.
The git repository pushed to the Heroku app contains both of the folders I created (checked twice; I even downloaded the app after the push to check that both dirs are there).
While running "heroku run bash" I can see only the "tmp" folder; "log" is not visible with "ls -la", nor to the app, as I receive errors about a missing location for the .log files.
2013-08-05T13:10:41.170434+00:00 heroku[web.1]: Starting process with command `python runserver.py`
2013-08-05T13:10:43.132418+00:00 app[web.1]: Traceback (most recent call last):
2013-08-05T13:10:43.609980+00:00 app[web.1]: File "runserver.py", line 2, in <module>
2013-08-05T13:10:43.725134+00:00 app[web.1]: from app_name import app
2013-08-05T13:10:43.850738+00:00 app[web.1]: File "/app/app_name/__init__.py", line 13, in <module>
2013-08-05T13:10:43.968714+00:00 app[web.1]: from logger import flask_debug
2013-08-05T13:10:44.081900+00:00 app[web.1]: File "/app/logger.py", line 8, in <module>
2013-08-05T13:10:44.194540+00:00 app[web.1]: logging.config.dictConfig(CONFIG)
2013-08-05T13:10:44.306174+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/logging/config.py", line 797, in dictConfig
2013-08-05T13:10:44.425589+00:00 app[web.1]: dictConfigClass(config).configure()
2013-08-05T13:10:44.535392+00:00 app[web.1]: File "/app/.heroku/python/lib/python2.7/logging/config.py", line 579, in configure
2013-08-05T13:10:44.274067+00:00 heroku[web.1]: Process exited with status 1
2013-08-05T13:10:44.287252+00:00 heroku[web.1]: State changed from starting to crashed
So, thinking quickly, I wanted to check whether I could place the logs in the tmp folder instead. The app starts and everything looks fine... but nothing is dumped from the app.
I am actually lost, as I've been looking for a solution for quite some time.
Maybe someone can tell me:
Why is the "log" directory under the /app_name folder not visible?
Why is the "tmp" directory not receiving logs?
http://speedy.sh/VaR2w/logger.conf - here's the *.conf file for my loggers
http://speedy.sh/c3QNs/logger.py - here are loggers
PS: With the "tmp" folder configuration, logging to the console works.
Two better ways:
1) Using Heroku's logdrain feature: https://devcenter.heroku.com/articles/logging#syslog-drains
2) Using one of the add-ons: Papertrail and Loggly come to mind; both have free plans.
I use #1 above: I have an EC2 instance I set up that aggregates all the logs from all of my dynos into a specific set of files, which I manage with logrotate. I then use simple grep to search through them, and tail -f to follow, if I feel like it. My rsyslogd configuration on said EC2 instance is:
----- 8< ----- cut here ----- 8< ----- cut here ----- 8< ----- cut here -----
$ModLoad imtcp
$InputTCPServerRun 5514
# vi /etc/rsyslog.d/01-heroku.conf
if $syslogtag startswith 'app[postgres]' then /matchspot-logs/postgres
& ~
if $syslogtag startswith 'app[pgbackups]' then /matchspot-logs/postgres
& ~
if $syslogtag startswith 'heroku[' then /matchspot-logs/heroku
& ~
if $syslogtag startswith 'app[' then /matchspot-logs/app
& ~
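Both options assume the app's log lines reach Heroku's log stream in the first place; anything a dyno writes to stdout/stderr is collected by logplex and can then be drained as above. A minimal dictConfig sketch that logs to stderr instead of a log/ directory (the format and logger setup are my assumptions, not taken from the linked logger.conf):

import logging
import logging.config

# Write log records to stderr so Heroku's logplex picks them up;
# no files are created on the dyno's ephemeral filesystem.
CONFIG = {
    "version": 1,
    "formatters": {
        "simple": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "stream": "ext://sys.stderr",
        },
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}

logging.config.dictConfig(CONFIG)
logging.getLogger(__name__).info("hello from the dyno")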
When you heroku run, it spins up a new dyno so as not to degrade your web server's performance. Think about it like this: if you had 2 web dynos, which one would you get when you heroku run? :)
The approaches suggested by other people here should work much better for you.
Also, you should be able to add free add-ons without a confirmed account.