RabbitMQ Python script: Socket closed when connection was open - python

Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a "Socket 104, Socket closed when connection was open" error. Below is the Python traceback and some code:
Traceback (most recent call last):
File "./server.py", line 34, in <module>
channel.start_consuming()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming
self.connection.process_data_events(time_limit=None)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events
self._dispatch_channel_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events
impl_channel._get_cookie()._dispatch_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events
evt.body)
File "./server.py", line 30, in on_request
body=json.dumps(DEVICE_INFO))
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
mandatory, immediate)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish
self._flush_output()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output
*waiters)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed

Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?
It might be because of a loss of connection with the RabbitMQ server, as pika doesn't handle disconnects and this often results in a similar stack trace.
I also had a similar problem; in my case it was because my pika connection was dropping after some time, and my colleague was able to deal with this by adding a wait time for mq:port_number.
We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:
filename.py --wait-secs 30 --port-wait mq:5672
I hope you are able to resolve this after doing that.
Otherwise, it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it.
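Since pika's BlockingConnection does not recover on its own when the broker drops the connection, another option is to wrap the consumer in a retry loop. Below is a minimal sketch along the lines of the RPC-server layout in the question; the queue name, the DEVICE_INFO payload, the host, and the pika 0.x basic_consume call style are assumptions, not taken from the original server.py.

import json
import time
import pika

DEVICE_INFO = {'status': 'ok'}  # placeholder payload, not the real server.py data

def on_request(channel, method, props, body):
    # Standard RPC reply pattern: publish the response to the reply_to queue.
    channel.basic_publish(exchange='',
                          routing_key=props.reply_to,
                          properties=pika.BasicProperties(correlation_id=props.correlation_id),
                          body=json.dumps(DEVICE_INFO))
    channel.basic_ack(delivery_tag=method.delivery_tag)

def serve(host='mq'):
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue='rpc_queue')
    channel.basic_consume(on_request, queue='rpc_queue')  # pika 0.x call style
    channel.start_consuming()

while True:
    try:
        serve()
    except pika.exceptions.ConnectionClosed:
        time.sleep(5)  # the broker dropped the socket; back off and reconnect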

Related

Vague question re "Bad file descriptor" in Flask with Peewee/psycopg2

tl;dr: An app that had been working fine is suddenly throwing a "Bad file descriptor" error with no other changes; I need advice for how to evaluate this.
I inherited an app that had been untouched for years, after the server crashed and I needed to move it to another machine. It's built with Flask, and uses Peewee to talk to a Postgres database over psycopg2. It has a bunch of other stuff (an Elasticsearch engine for searching, a lot of heavy JS on the front end), but that doesn't seem to be the problem here. The code is moderately complex, and I am not very knowledgeable about all of its pieces.
It took me a while to get it set up using the sketchy deployment instructions that had been left behind, but eventually I got it running: first a test version on a clean VM, then a deployment on an actual server using gunicorn and nginx. It's been working fine in production for a week. I'm using Debian Buster everywhere, with the most recent versions of all the software.
I then decided to do some basic code cleanup and ran the entire app through a linter before looking at some other changes the end user had requested. Unfortunately, after this, the app consistently fails at the same point with a "Bad file descriptor" error. The failure is in a pre-run section that parses a large XML file and saves the info to the database and to Elasticsearch; the app receives an XML upload, forks a few processes, and runs the parse/index process in the background.
I have since been unable to get past this error by any means. I have launched a clean VM and installed everything from scratch, and I've reverted the git repo to before I linted the code. Same problem. I don't see how it can be a code issue, as the code is now exactly what it was when I started. But I'm at a loss for what to do, and terrified that the production machine will fail.
The errors I get (trimming the first few lines that refer to places in the app itself) are:
[2021-03-14 14:40:11.699837] self.execute()
[2021-03-14 14:40:11.699878] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 1906, in inner
[2021-03-14 14:40:11.699907] return method(self, database, *args, **kwargs)
[2021-03-14 14:40:11.699946] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 1977, in execute
[2021-03-14 14:40:11.699976] return self._execute(database)
[2021-03-14 14:40:11.700004] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 2149, in _execute
[2021-03-14 14:40:11.700032] cursor = database.execute(self)
[2021-03-14 14:40:11.700060] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3156, in execute
[2021-03-14 14:40:11.700088] return self.execute_sql(sql, params, commit=commit)
[2021-03-14 14:40:11.700115] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3150, in execute_sql
[2021-03-14 14:40:11.700143] self.commit()
[2021-03-14 14:40:11.700171] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 2916, in __exit__
[2021-03-14 14:40:11.700198] reraise(new_type, new_type(exc_value, *exc_args), traceback)
[2021-03-14 14:40:11.700226] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 190, in reraise
[2021-03-14 14:40:11.700254] raise value.with_traceback(tb)
[2021-03-14 14:40:11.700282] File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/peewee.py", line 3143, in execute_sql
[2021-03-14 14:40:11.700309] cursor.execute(sql, params or ())
[2021-03-14 14:40:11.700339] OperationalError('SSL SYSCALL error: Bad file descriptor\n')
127.0.0.1 - - [14/Mar/2021 10:40:11] "POST /manage/versions/upload HTTP/1.1" 500 -
Error on request:
Traceback (most recent call last):
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 323, in run_wsgi
execute(self.server.app)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 315, in execute
write(data)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 273, in write
self.send_response(code, msg)
File "/home/deploy/git/myapp/venv/lib/python3.7/site-packages/werkzeug/serving.py", line 388, in send_response
self.wfile.write(hdr.encode("ascii"))
File "/usr/lib/python3.7/socketserver.py", line 799, in write
self._sock.sendall(b)
OSError: [Errno 9] Bad file descriptor
Exception in thread Thread-22:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/usr/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.7/socketserver.py", line 654, in process_request_thread
self.shutdown_request(request)
File "/usr/lib/python3.7/socketserver.py", line 509, in shutdown_request
self.close_request(request)
File "/usr/lib/python3.7/socketserver.py", line 513, in close_request
request.close()
File "/usr/lib/python3.7/socket.py", line 420, in close
self._real_close()
File "/usr/lib/python3.7/socket.py", line 414, in _real_close
_ss.close(self)
OSError: [Errno 9] Bad file descriptor
I note that the final section ("Exception in thread Thread-22") is showing the system Python rather than my virtual environment; I don't know if that's relevant, or if that's just what's running some overall process. I didn't get to this point by doing anything different, though: the app is running in the virtual environment.
I'd be very grateful for any thoughts here. I'm obviously hoping it's some kind of stupid permission error or something, as I can't easily go into the code because of its complexity.
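For what it's worth, one common source of psycopg2's "SSL SYSCALL error: Bad file descriptor" is a database connection that was opened before os.fork() and is then used from both the parent and the forked worker. A minimal sketch of the close-before-fork pattern follows, assuming a Peewee PostgresqlDatabase named db and a placeholder parse_and_index step; the real names and settings in the app will differ.

import os
from peewee import PostgresqlDatabase

db = PostgresqlDatabase('myapp', user='deploy')  # hypothetical connection settings

def parse_and_index(xml_path):
    """Placeholder for the app's XML-parse + DB/Elasticsearch import step."""
    pass

def run_import_in_background(xml_path):
    # Close the shared connection before forking so neither process inherits
    # a socket the other one is still using.
    if not db.is_closed():
        db.close()
    pid = os.fork()
    if pid == 0:
        db.connect()                   # the child gets its own connection
        try:
            parse_and_index(xml_path)
        finally:
            db.close()
        os._exit(0)
    db.connect()                       # the parent reconnects for normal requests
    return pid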

How to auto-refresh requested data and refresh variables shown in HTML?

Basically, I am using a Python requests POST to grab data about the next bus arrival time, and then I used WebSockets and Tornado to make an HTML webpage that shows the data I grabbed.
However, I really don't know how to auto-refresh the data and the HTML. I tried to use this:
import threading
def getETA():
    threading.Timer(5.0, getETA).start()
... and the Python file ends with:
if __name__ == "__main__":
    app.listen(8888)
    ioloop.IOLoop.instance().start()
    getETA()
When the code runs, it shows this error:
Exception in thread Thread-25:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 920, in _bootstrap_inner
self.run()
File "C:\Python34\lib\threading.py", line 1186, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Tan\Desktop\DIP\DIP WEB INTERFACE\testing.py", line 183, in getETA
app.listen(8888)
File "C:\Python34\lib\site-packages\tornado\web.py", line 1788, in listen
server.listen(port, address)
File "C:\Python34\lib\site-packages\tornado\tcpserver.py", line 126, in listen
sockets = bind_sockets(port, address=address)
File "C:\Python34\lib\site-packages\tornado\netutil.py", line 187, in bind_sockets
sock.bind(sockaddr)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Please suggest the correct way, or another method, to make my local HTML page show the refreshed value.
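The traceback shows getETA calling app.listen(8888) on every timer tick, so the second bind fails with WinError 10048 (the port is already in use). The usual Tornado pattern is to bind the port once and schedule the refresh on the IOLoop with PeriodicCallback instead of threading.Timer. The sketch below assumes that pattern; the handler and the fetch_eta_from_api stub are placeholders for your own code.

import tornado.ioloop
import tornado.web

latest_eta = None  # shared value that the page/WebSocket handlers can read

def fetch_eta_from_api():
    """Placeholder for the requests.post() call that fetches the next bus time."""
    return "unknown"

def getETA():
    global latest_eta
    latest_eta = fetch_eta_from_api()

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write({"eta": latest_eta})

app = tornado.web.Application([(r"/", MainHandler)])

if __name__ == "__main__":
    app.listen(8888)  # bind the port exactly once
    # Re-run getETA every 5 seconds on the IOLoop instead of spawning threads.
    tornado.ioloop.PeriodicCallback(getETA, 5000).start()
    tornado.ioloop.IOLoop.instance().start()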

From a Raspberry Pi, using pika to put data into RabbitMQ

I have an issue.
For my final-year university project, I have my Raspberry Pi taking sensor data, and I need to put that data into my RabbitMQ service on another machine.
Currently I am using pika as the library to push the data into the queue.
Right now I have a connection issue; this is the error I am getting:
WARNING:pika.adapters.base_connection:Could not connect due to "Connection timed out," retrying in 10 sec
ERROR:pika.adapters.base_connection:Could not connect: Connection timed out
Traceback (most recent call last):
File "TryOne.py", line 8, in <module>
connection = pika.AsyncoreConnection(pika.ConnectionParameters('192.168.1.93'))
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 63, in __init__
super(BaseConnection, self).__init__(parameters, on_open_callback)
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 513, in __init__
self._connect()
File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 804, in _connect
self._adapter_connect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/asyncore_connection.py", line 96, in _adapter_connect
super(AsyncoreConnection, self)._adapter_connect()
File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 122, in _adapter_connect
self.params.retry_delay)
pika.exceptions.AMQPConnectionError: 10.0
Any help with this problem is very much appreciated.
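A "Connection timed out" at this stage usually means the Pi never reached port 5672 on 192.168.1.93 at all (wrong IP, a firewall, or RabbitMQ not listening on that interface), rather than a pika-level problem. Note also that RabbitMQ's default guest account is only allowed to connect from localhost, so a remote Pi needs its own user. Below is a minimal publisher sketch with explicit credentials; the user, password, and queue name are placeholders.

import pika

credentials = pika.PlainCredentials('pi', 'secret')        # placeholder account
params = pika.ConnectionParameters(host='192.168.1.93',
                                   port=5672,
                                   credentials=credentials)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue='sensor_data')
channel.basic_publish(exchange='',
                      routing_key='sensor_data',
                      body='23.5')                          # e.g. one sensor reading
connection.close()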

Blender Network Render Timeout

I've tried to render an animation using Network Render. I connected my PC and my laptop without further problems. But when I clicked "Render animation on network", after a few seconds the following error occurred:
AL lib: (EE) UpdateDeviceParams: Failed to set 44100hz, got 48000hz instead
Traceback (most recent call last):
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\operat ors.py", line 85, in invoke
return self.execute(context)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\operat ors.py", line 77, in execute
scene.network_render.job_id = client.sendJob(conn, scene, True)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\client .py", line 121, in sendJob
return sendJobBlender(conn, scene, anim, can_save)
File "F:\Program Files (x86)\Blender\2.74\scripts\addons\netrender\client .py", line 340, in sendJobBlender
response = conn.getresponse()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 1172, in getresponse
response.begin()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "F:\Program Files (x86)\Blender\2.74\python\lib\http\client.py", line 313, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "F:\Program Files (x86)\Blender\2.74\python\lib\socket.py", line 371, inreadinto
return self._sock.recv_into(b)
socket.timeout: timed out
I asked Google: somebody circumvented the problem by changing "the default timeout to 1000 (instead of 300) (in the socket.py file[...])". I can't find this line; I guess they changed it in the current version. Since I have no experience with Python, I do not know how I can change it now.
I hope you can help me!
The addon would be a better place to make the change, rather than the socket module. If you look in your addons folder you will find netrender/utils.py, where you will find a few lines that use socket.setdefaulttimeout; you could make some adjustments there.
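For reference, the adjustment being described would look roughly like this (the exact lines in netrender/utils.py differ between Blender versions, so treat this as a sketch rather than the shipped code):

import socket

# netrender sets a default socket timeout before talking to the master node;
# raising it gives slow transfers more time before socket.timeout is raised.
socket.setdefaulttimeout(1000)   # instead of the previous default of 300 seconds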
An even better solution would be to look at why the connection is timing out; two computers in the same room should not get any timeouts. A common cause of timeouts is the inability to get a connection, and firewalls are good at stopping connections, so you may want to check that the port used by network render is allowing incoming connections, and that Blender is running with network render turned on to accept the connection. The default port is 8000, which could also be in use by another application; you can configure each computer to use a different port if needed.

Python StreamHandler over FTP doesn't work after second import

I have the following problem:
I wrote an FTPHandler(StreamHandler), which connects via 'transport=paramiko.Transport(...)' and 'transport.connect(...)' to a server, and opens an SFTP connection with 'SFTPClient.from_transport(...)'.
I am importing this handler in a module named 'JUS_Logger.py', which is my module for logging. This 'JUS_Logger' is imported by another class, 'JUS_Reader'.
The problem is that if I start 'JUS_Reader', the transport is initialized but the connection fails. There is no exception; the program only hangs. If I kill it, I get the stack trace
Traceback (most recent call last):
File "./JUS_Reader.py", line 24, in <module>
from JUS_Logger import logger
File "/<home>/.../JUS_Logger.py", line 74, in <module>
ftpHandler=FTPHandler(ftpOut,10)
File "/<home>/FTPHandler.py", line 21, in __init__
self.transport.connect(username=ftpOut['user'].decode('base64'),password=ftpOut['passwd'].decode('base64'))
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1004, in connect
self.auth_password(username, password)
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1165, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "/usr/lib/python2.7/dist-packages/paramiko/auth_handler.py", line 158, in wait_for_response
event.wait(0.1)
File "/usr/lib/python2.7/threading.py", line 403, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 262, in wait
_sleep(delay)
However, if I run 'JUS_Logger.py' by itself, everything works: the transport's connection is established and the SFTPClient connects as well.
Any ideas? Or further questions?
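One pattern worth ruling out with a hang like this is doing the network handshake at import time: in Python 2, module-level code that starts a thread and then waits on it while the import lock is still held can block forever if that thread itself triggers an import. Below is a sketch of a variant that defers the connection until the handler is first used; the ftpOut fields and the emit logic are assumptions based on the description above.

import logging
import paramiko

class FTPHandler(logging.StreamHandler):
    """Sketch: connect lazily on first use instead of while the module is imported."""

    def __init__(self, ftpOut, timeout=10):
        logging.StreamHandler.__init__(self)
        self.ftpOut = ftpOut
        self.timeout = timeout
        self.sftp = None                 # no network work during import

    def _connect(self):
        transport = paramiko.Transport((self.ftpOut['host'], 22))
        # base64-encoded credentials, as in the question (Python 2 str.decode)
        transport.connect(username=self.ftpOut['user'].decode('base64'),
                          password=self.ftpOut['passwd'].decode('base64'))
        self.sftp = paramiko.SFTPClient.from_transport(transport)

    def emit(self, record):
        if self.sftp is None:
            self._connect()
        # ... write self.format(record) to the remote log file via self.sftp ...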
