This Discord bot works perfectly fine when I run it locally on my PC, but when I push it to Heroku I get the following error log.
When I delete the "Styles" part (the one mentioned in the log below) from render.py, the bot goes online but of course no longer works.
heroku[worker.1]: Starting process with command python src/main.py
heroku[worker.1]: State changed from starting to up
heroku[worker.1]: Process exited with status 1
heroku[worker.1]: State changed from up to crashed
app[worker.1]: Traceback (most recent call last):
app[worker.1]: File "src/main.py", line 9, in <module>
app[worker.1]: from render import RenderStats
app[worker.1]: File "/app/src/render.py", line 6, in <module>
app[worker.1]: class RenderStats():
app[worker.1]: File "/app/src/render.py", line 29, in RenderStats
app[worker.1]: 'titles': ImageFont.truetype("fonts/MyriadPro-Bold.otf", 10),
app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/PIL/ImageFont.py", line 546, in truetype
app[worker.1]: return freetype(font)
app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/PIL/ImageFont.py", line 543, in freetype
app[worker.1]: return FreeTypeFont(font, size, index, encoding, layout_engine)
app[worker.1]: File "/app/.heroku/python/lib/python3.6/site-packages/PIL/ImageFont.py", line 161, in init
app[worker.1]: font, size, index, encoding, layout_engine=layout_engine
app[worker.1]: OSError: cannot open resource
Is the folder structure the problem?
Heroku isn't really made for hosting Discord bots, but to answer your question: the issue might be Heroku's ephemeral filesystem. As Heroku explains here, static assets (such as fonts) should be hosted on a dedicated service, because files on the dyno's filesystem can be deleted when Heroku cycles your dynos daily.
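Whatever the root cause turns out to be, it also helps to make the font path independent of the process's working directory, since the relative path "fonts/MyriadPro-Bold.otf" only resolves if the dyno happens to start from the right directory. A minimal sketch, assuming the fonts/ folder sits one level above src/ in your repo (adjust to your actual layout):

import os
from PIL import ImageFont

# Build an absolute path from the location of this file (render.py), so the
# font lookup no longer depends on the current working directory of the dyno.
# NOTE: this assumes fonts/ lives one level above src/; change the join if not.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
FONT_PATH = os.path.join(BASE_DIR, "..", "fonts", "MyriadPro-Bold.otf")

styles = {
    'titles': ImageFont.truetype(FONT_PATH, 10),
}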
We recently switched from faust 1.10.4 to faust-streaming (0.6.9). Since then we have seen the applications crash with the exception below. The application has multiple layers, with aggregation and filtering of data at each stage: at each stage the processor sends the message to a Kafka topic, and the corresponding faust agent consumes it (a minimal sketch of this layout follows the error log below). We have kept the partition count of the Kafka topic the same at each layer.
Cluster Size = 12
Topic & Table Parition count = 36
faust-streaming version = 0.6.9
kafka-python version = 2.0.2
[2021-07-29 10:05:23,761] [18808] [ERROR] [^---Fetcher]: Crashed reason=AssertionError('Partition is not assigned')
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/mode/services.py", line 802, in _execute_task
await task
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 176, in _fetcher
await consumer._drain_messages(self)
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 1104, in _drain_messages
async for tp, message in ait:
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 714, in getmany
highwater_mark = self.highwater(tp)
File "/usr/local/lib/python3.8/site-packages/faust/transport/consumer.py", line 1367, in highwater
return self._thread.highwater(tp)
File "/usr/local/lib/python3.8/site-packages/faust/transport/drivers/aiokafka.py", line 923, in highwater
return self._ensure_consumer().highwater(tp)
File "/usr/local/lib/python3.8/site-packages/aiokafka/consumer/consumer.py", line 673, in highwater
assert self._subscription.is_assigned(partition), \
AssertionError: Partition is not assigned
[2021-07-29 10:05:23,764] [18808] [INFO] [^Worker]: Stopping...
[2021-07-29 10:05:23,765] [18808] [INFO] [^-App]: Stopping...
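For reference, the layered layout described above boils down to the following faust pattern. This is only a minimal sketch: the topic names, the pipeline contents, and the aggregation/filtering logic are assumptions, not our actual application code.

import faust

app = faust.App(
    'stats-pipeline',
    broker='kafka://localhost:9092',
    topic_partitions=36,  # same partition count at every layer
)

# One topic per layer; the agent of layer N consumes topic N and produces to topic N+1.
stage1_topic = app.topic('stage1-events')
stage2_topic = app.topic('stage2-events')

@app.agent(stage1_topic)
async def aggregate_stage1(stream):
    async for event in stream:
        # ... filter / aggregate the event here ...
        await stage2_topic.send(value=event)  # hand the result to the next layer

if __name__ == '__main__':
    app.main()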
Please help us here.
I deployed the quickstart tutorial based on the "daml-on-fabric" example https://github.com/hacera/daml-on-fabric, and after that I tried to deploy the ping-pong example from dazl https://github.com/digital-asset/dazl-client/tree/master/samples/ping-pong. The bots from the example work fine on the standalone DAML ledger. However, when I try to run the example against Fabric, the bots are unable to send transactions. Everything works fine when following the README at https://github.com/hacera/daml-on-fabric/blob/master/README.md, and the smart contract appears to be deployed on Fabric. The error occurs when I run the bots from the ping-pong Python files https://github.com/digital-asset/dazl-client/blob/master/samples/ping-pong/README.md
I receive this error:
[ ERROR] 2020-03-10 15:40:57,475 | dazl | A command submission failed!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
>
[ ERROR] 2020-03-10 15:40:57,476 | dazl | An event handler in a bot has thrown an exception!
Traceback (most recent call last):
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/bots.py", line 157, in _handle_event
await handler.callback(new_event)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/client/_party_client_impl.py", line 415, in main_writer
await submit_command_async(client, p, commands)
File "/home/vasisiop/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/dazl/protocols/v1/grpc.py", line 42, in <lambda>
lambda: self.connection.command_service.SubmitAndWait(request))
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/home/vasisiop/.local/share/virtualenvs/ping-pong-sDNeps76/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Party not known on ledger"
debug_error_string = "{"created":"#1583847657.473821297","description":"Error received from peer ipv6:[::1]:6865","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Party not known on ledger","grpc_status":3}"
From the error message it looks like the parties defined in the quick start example have not been allocated on the ledger, hence the "Party not known on ledger" error.
You can follow the steps in https://docs.daml.com/deploy/index.html using daml deploy --host= --port=, which will both upload the DARs and allocate the parties on the ledger.
You can also run just the party allocation command, daml ledger allocate-parties, which will allocate the parties defined in your daml.yaml.
The PYBOSSA documentation doesn't describe how to configure a webhook.
I ran into some issues while configuring a webhook; below are my steps:
fork pybossa webhook example
Run the webhook with default settings (modified api_key and endpoint).
In PYBOSSA, modify the project and set its webhook to the URL where the webhook app is running.
Open a command line window and execute the following command:
# rqworker high
Then, when a task is completed, I see logs in the command line window complaining about the following error:
14:06:11 *** Listening on high...
14:07:42 high: pybossa.jobs.webhook(u'http://192.168.116.135:5001', {'project_short_name': u'tw', 'task_id': 172, 'fired_at': '2017-08-10 06:07:42', 'project_id': 17, 'result_id': 75, 'event': 'task_completed'}) (e435386c-615d-4525-a65d-f08f0afd2351)
14:07:44 UnboundLocalError: local variable 'project' referenced before assignment
Traceback (most recent call last):
File "/home/baib2/Desktop/pybossa_server/env/local/lib/python2.7/site-packages/rq/worker.py", line 479, in perform_job
rv = job.perform()
File "/home/baib2/Desktop/pybossa_server/env/local/lib/python2.7/site-packages/rq/job.py", line 466, in perform
self._result = self.func(*self.args, **self.kwargs)
File "./pybossa/jobs.py", line 525, in webhook
if project.published and webhook.response_status_code != 200 and current_app.config.get('ADMINS'):
UnboundLocalError: local variable 'project' referenced before assignment
I'm not sure whether we should be executing the following command:
# rqworker high
But if this rqworker is not running, I don't see any component picking up work from the Redis queue.
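For reference, the receiving end of such a webhook is just an HTTP endpoint that accepts the JSON payload shown in the log above. A minimal, hypothetical sketch of such an endpoint (this is not the actual pybossa webhook example, just the general shape):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def receive_webhook():
    if request.method == 'POST':
        # PYBOSSA posts a payload like:
        # {'project_short_name': 'tw', 'task_id': 172, 'event': 'task_completed', ...}
        payload = request.get_json(force=True)
        # ... fetch the task result via the API and process it here ...
        return jsonify(status='ok')
    return jsonify(status='alive')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5001)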
You need to run a specific worker, not the default one from PYBOSSA. Just use https://github.com/Scifabric/pybossa/blob/master/app_context_rqworker.py and run it like this:
python app_context_rqworker.py high
This will set up the Flask context, and it will run properly ;-)
We're in the middle of improving our docs, so this should be better in the coming months.
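The underlying pattern of that script is simply to run the RQ worker inside a Flask application context, so that queued jobs such as pybossa.jobs.webhook can use current_app and the database. A rough, simplified sketch of that pattern (not the actual file; the create_app import and its signature are assumptions):

import sys
from redis import Redis
from rq import Connection, Queue, Worker
from pybossa.core import create_app  # assumption: PYBOSSA's app factory (signature may differ)

app = create_app(run_as_server=False)

with app.app_context():          # queued jobs now see a valid Flask app context
    with Connection(Redis()):
        queue_names = sys.argv[1:] or ['high']
        Worker([Queue(name) for name in queue_names]).work()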
Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a "Socket 104, Socket closed when connection was open" error. Below is the Python traceback and some code:
Traceback (most recent call last):
File "./server.py", line 34, in <module>
channel.start_consuming()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming
self.connection.process_data_events(time_limit=None)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events
self._dispatch_channel_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events
impl_channel._get_cookie()._dispatch_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events
evt.body)
File "./server.py", line 30, in on_request
body=json.dumps(DEVICE_INFO))
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
mandatory, immediate)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish
self._flush_output()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output
*waiters)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?
It might be because of a loss of connection with the RabbitMQ server, as pika doesn't deal with disconnects and this often results in a similar stack trace.
I also had a similar problem; in my case it was because my pika connection was dropping after some time, and my colleague was able to deal with it by adding a wait for mq:port_number.
We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:
filename.py --wait-secs 30 --port-wait mq:5672
I hope you are able to resolve this after doing that.
Otherwise it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it.
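If the broker is closing an idle or slow connection, it can also help to be explicit about heartbeats and retries when opening the connection, and to reconnect when it is dropped. A minimal sketch under those assumptions (the host name, the limits, and whether the keyword is heartbeat or heartbeat_interval depend on your setup and pika version):

import time
import pika

params = pika.ConnectionParameters(
    host='mq',
    port=5672,
    heartbeat=600,                   # older pika versions use heartbeat_interval=
    blocked_connection_timeout=300,
    connection_attempts=5,           # retry the initial TCP connect
    retry_delay=5,
)

while True:
    try:
        connection = pika.BlockingConnection(params)
        channel = connection.channel()
        # ... declare queues, register basic_consume callbacks, then:
        channel.start_consuming()
    except pika.exceptions.ConnectionClosed:
        # Broker dropped the connection; wait a bit and reconnect.
        time.sleep(5)
        continue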
I have a Django app being served with nginx+gunicorn with 3 gunicorn worker processes. Occasionally (maybe once every 100 requests or so) one of the worker processes gets into a state where it starts failing most (but not all) requests that it serves, and then it throws an exception when it tries to email me about it. The gunicorn error logs look like this:
[2015-04-29 10:41:39 +0000] [20833] [ERROR] Error handling request
Traceback (most recent call last):
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 130, in handle
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 171, in handle_request
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 206, in __call__
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 196, in get_response
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 226, in handle_uncaught_exception
File "/usr/lib/python2.7/logging/__init__.py", line 1178, in error
File "/usr/lib/python2.7/logging/__init__.py", line 1271, in _log
File "/usr/lib/python2.7/logging/__init__.py", line 1281, in handle
File "/usr/lib/python2.7/logging/__init__.py", line 1321, in callHandlers
File "/usr/lib/python2.7/logging/__init__.py", line 749, in handle
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/log.py", line 122, in emit
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/log.py", line 125, in connection
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/mail/__init__.py", line 29, in get_connection
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 26, in import_by_path
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 21, in import_by_path
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
ImproperlyConfigured: Error importing module django.core.mail.backends.smtp: "No module named smtp"
So some uncaught exception is happening and then Django is trying to email me about it. The fact that it can't import django.core.mail.backends.smtp doesn't make sense, because django.core.mail.backends.smtp should definitely be on the worker process's Python path. I can import it just fine from a manage.py shell, and I do get emails for other server errors (actual software bugs), so I know that works. It's as if the worker process's environment is corrupted somehow.
Once a worker process enters this state it has a really hard time recovering; almost every request it serves ends up failing in this same manner. If I restart gunicorn everything is good (until another worker process falls into this weird state again).
I don't notice any obvious patterns, so I don't think this is being triggered by a bug in my app (the URLs erroring out are different, etc.). It seems like some sort of race condition.
Currently I'm using gunicorn's --max-requests option to mitigate this problem but I'd like to understand what's going on here. Is this a race condition? How can I debug this?
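For reference, the --max-requests mitigation can also be expressed in a gunicorn config file, optionally with jitter so the workers don't all restart at once. A minimal sketch, using the worker count from the question and assumed example limits (the WSGI module name is a placeholder):

# gunicorn.conf.py -- start with: gunicorn -c gunicorn.conf.py myproject.wsgi:application
workers = 3
# Recycle each worker after it has handled this many requests, so a worker
# stuck in the bad state described above cannot live forever.
max_requests = 1000
# Add random jitter so the three workers don't all restart at the same moment.
max_requests_jitter = 50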
I suggest you use Sentry, which gives you a smart way of handling errors.
You can use it as a cloud based solution (getsentry) or you can install it on your own server (github).
Before, I was using Django's core log mailer; now I always use Sentry.
I do not work at Sentry, but their solution is pretty awesome!
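For reference, wiring a Django project to Sentry with the current sentry-sdk package looks roughly like this (a minimal sketch; the DSN is a placeholder, and at the time of the original question the older raven client was the usual choice):

# settings.py -- minimal Sentry setup for a Django project (sketch)
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://<key>@<org>.ingest.sentry.io/<project>",  # placeholder DSN
    integrations=[DjangoIntegration()],
    # Unhandled exceptions in views are reported automatically, replacing the
    # mail_admins logging handler that fails in the traceback above.
)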
We discovered one particular view that was pegging the CPU for a few seconds every time it was loaded, and it seemed to be triggering this issue. I still don't understand how slamming a gunicorn worker could result in a corrupted execution environment, but fixing the high-CPU view seems to have gotten rid of the issue.