Python StreamHandler over SFTP doesn't work after second import - python

I have the following problem:
I wrote an FTPHandler(StreamHandler) which connects to a server via 'transport=paramiko.Transport(...)' and 'transport.connect(...)' and opens an SFTP connection with 'SFTPClient.from_transport(...)'.
I import this handler in 'JUS_Logger.py', which is my logging module. This 'JUS_Logger' is in turn imported by another module, 'JUS_Reader'.
The problem is that if I start 'JUS_Reader', the transport is initialized, but the connection fails. There is no exception; the program just hangs. If I kill it, I get this stack trace:
Traceback (most recent call last):
File "./JUS_Reader.py", line 24, in <module>
from JUS_Logger import logger
File "/<home>/.../JUS_Logger.py", line 74, in <module>
ftpHandler=FTPHandler(ftpOut,10)
File "/<home>/FTPHandler.py", line 21, in __init__
self.transport.connect(username=ftpOut['user'].decode('base64'),password=ftpOut['passwd'].decode('base64'))
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1004, in connect
self.auth_password(username, password)
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1165, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "/usr/lib/python2.7/dist-packages/paramiko/auth_handler.py", line 158, in wait_for_response
event.wait(0.1)
File "/usr/lib/python2.7/threading.py", line 403, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 262, in wait
_sleep(delay)
However, if I run 'JUS_Logger.py' by itself, everything works: the transport's connection is established and the SFTPClient connects as well.
Any ideas? Or further questions?
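For reference, here is a minimal sketch of the handler described above. The class name, the 'ftpOut' config dict with base64-encoded 'user'/'passwd' entries, and the paramiko calls are taken from the question; the host lookup, port, and everything else are assumptions.
import logging
import paramiko

class FTPHandler(logging.StreamHandler):
    def __init__(self, ftpOut, timeout):
        logging.StreamHandler.__init__(self)
        # assumed: ftpOut also carries the host; port 22 is a guess
        self.transport = paramiko.Transport((ftpOut['host'], 22))
        # this is the call that hangs when JUS_Logger is imported indirectly
        # (.decode('base64') is Python 2 only, as in the question)
        self.transport.connect(username=ftpOut['user'].decode('base64'),
                               password=ftpOut['passwd'].decode('base64'))
        self.sftp = paramiko.SFTPClient.from_transport(self.transport)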

Related

"[ERROR] OSError: [Errno 38] Function not implemented" - Accessing trend deepsecurity.ComputersApi via Lambda

I have written a Python script that successfully queries the Trend Deep Security API when run locally on my machine.
I've been tasked with running the script in an AWS Lambda so that it is automated and can be scheduled.
The script follows the examples in the API reference and calls the legacy API successfully. However, when I attempt to query using the Computers API, it blows up on this line: computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
def get_computer_status_api():
    # Include computer status information in the returned Computer objects
    #expand = deepsecurity.Expand(deepsecurity.Expand.computer_status)
    expand = deepsecurity.Expand()
    expand.add(deepsecurity.Expand.security_updates)
    expand.add(deepsecurity.Expand.computer_status)
    expand.add(deepsecurity.Expand.anti_malware)
    # Set Any Required Values
    computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
    try:
        computers = computers_api.list_computers(api_version, expand=expand.list(), overrides=False)
        print("Querying ComputersApi...")
        api_response_str = str(computers)
        computer_count = len(computers.computers)
        print(str(computer_count) + " Computers listed in Trend")
        ...
The error I get is:
[ERROR] OSError: [Errno 38] Function not implemented
Traceback (most recent call last):
File "/var/task/handler.py", line 782, in main
get_computer_status_api()
File "/var/task/handler.py", line 307, in get_computer_status_api
computers_api = deepsecurity.ComputersApi(deepsecurity.ApiClient(configuration))
File "/var/task/deepsecurity/api_client.py", line 69, in __init__
self.pool = ThreadPool()
File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 925, in __init__
Pool.__init__(self, processes, initializer, initargs)
File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 196, in __init__
self._change_notifier = self._ctx.SimpleQueue()
File "/var/lang/lib/python3.8/multiprocessing/context.py", line 113, in SimpleQueue
return SimpleQueue(ctx=self.get_context())
File "/var/lang/lib/python3.8/multiprocessing/queues.py", line 336, in __init__
self._rlock = ctx.Lock()
File "/var/lang/lib/python3.8/multiprocessing/context.py", line 68, in Lock
return Lock(ctx=self.get_context())
File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
sl = self._semlock = _multiprocessing.SemLock(
Searching for this error suggests that I can't use the deepsecurity API in a Lambda because Lambdas don't support multiprocessing.
I'm looking for either confirmation that this is the case or suggestions for what I can change to get this working.
A Trend support ticket suggested posting here.
Resolved the issue by changing the Python version from 3.8 to 3.7 in the Lambda. The script now runs successfully. The reason this works: Lambda's execution environment does not provide /dev/shm, so multiprocessing semaphores (SemLock) cannot be created there. As the traceback shows, Python 3.8's multiprocessing.pool.Pool.__init__ creates a multiprocessing SimpleQueue (which needs such a semaphore) even for a ThreadPool, while 3.7 does not, so the ApiClient's ThreadPool construction only fails on 3.8.
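As a quick check of this diagnosis, the following hedged sketch reproduces on its own what Pool.__init__ does on 3.8+; on Lambda it should raise the same OSError 38, while on an ordinary Linux host it succeeds:
import multiprocessing

try:
    # mirrors the self._ctx.SimpleQueue() call from the traceback above
    multiprocessing.get_context().SimpleQueue()
    print("SemLock available; ThreadPool construction should work")
except OSError as exc:
    print("multiprocessing unavailable:", exc)  # [Errno 38] on Lambda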

How to debug a stuck asyncio coroutine in Python?

There are lots of coroutines in my production code which get stuck at an unknown position while processing a request. I attached gdb with the Python support extension to the process, but it doesn't show the exact line in the coroutine where the process is stuck, only the outermost stack trace. Here is a minimal example:
import asyncio

async def hello():
    await asyncio.sleep(30)
    print('hello world')

asyncio.run(hello())
(gdb) py-bt
Traceback (most recent call first):
File "/usr/lib/python3.8/selectors.py", line 468, in select
fd_event_list = self._selector.poll(timeout, max_ev)
File "/usr/lib/python3.8/asyncio/base_events.py", line 2335, in _run_once
File "/usr/lib/python3.8/asyncio/base_events.py", line 826, in run_forever
None, getaddr_func, host, port, family, type, proto, flags)
File "/usr/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
self.run_forever()
File "/usr/lib/python3.8/asyncio/runners.py", line 299, in run
File "main.py", line 7, in <module>
GDB shows a trace that ends on line 7, but the code is obviously stuck on line 4. How can I make it show a more complete trace with the nested coroutines?
You can use aiodebug.log_slow_callbacks.enable(0.05).
For more, see https://pypi.org/project/aiodebug/
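A minimal sketch of how that call is typically wired into the example above, assuming aiodebug is installed (pip install aiodebug). Note it reports callbacks that block the event loop for too long, which complements rather than replaces inspecting where a coroutine is suspended:
import asyncio
from aiodebug import log_slow_callbacks

async def hello():
    await asyncio.sleep(30)
    print('hello world')

# warn about any event-loop callback that runs longer than 50 ms
log_slow_callbacks.enable(0.05)
asyncio.run(hello())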

Twisted reactor.listenUNIX erroring after update to python 3.8

I have updated Twisted from 13.0.0 to 20.3.0 and Python from 2.7 to 3.8, and now Twisted is throwing this unhelpful error (Ubuntu 16.04.6 LTS).
I verified the permissions of the .sock file and the directory, and made sure the socket file is deleted before reactor.listenUNIX is called.
What could be the cause of this, and how can I troubleshoot it?
class UserServer(Plugin):
    def setup(self):
        socket = self.parent.socket
        if os.path.exists(socket):
            os.remove(socket)
        self.console(socket)
        self.console(self.parent.config.get_umask('sock'))
        self.factory = UserServerFactory(self.parent)
        reactor.listenUNIX(socket, self.factory, mode=self.parent.config.get_umask('sock'))
service: 'user_server' failed to initialize
Traceback (most recent call last):
File "/home/mcgen/tools/mark2/mk2/plugins/__init__.py", line 335, in load
plugin = cls(self.parent, name, **kwargs)
File "/home/mcgen/tools/mark2/mk2/plugins/__init__.py", line 165, in __init__
self.setup()
File "/home/mcgen/tools/mark2/mk2/services/user_server.py", line 148, in setup
reactor.listenUNIX(socket, self.factory, mode=self.parent.config.get_umask('sock'))
File "/usr/local/lib/python3.8/dist-packages/twisted/internet/posixbase.py", line 397, in listenUNIX
p.startListening()
File "/usr/local/lib/python3.8/dist-packages/twisted/internet/unix.py", line 408, in startListening
self.startReading()
File "/usr/local/lib/python3.8/dist-packages/twisted/internet/abstract.py", line 435, in startReading
self.reactor.addReader(self)
File "/usr/local/lib/python3.8/dist-packages/twisted/internet/epollreactor.py", line 109, in addReader
self._add(reader, self._reads, self._writes, self._selectables,
File "/usr/local/lib/python3.8/dist-packages/twisted/internet/epollreactor.py", line 96, in _add
self._poller.register(fd, flags)
OSError: [Errno 22] Invalid argument
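No answer is recorded here, but one way to narrow this down is to check whether listenUNIX works at all in isolation on the upgraded stack. A minimal sketch, with the socket path and mode being arbitrary test values:
import os
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)

path = '/tmp/test.sock'  # arbitrary test path
if os.path.exists(path):
    os.remove(path)  # remove a stale socket file, as in the question
reactor.listenUNIX(path, protocol.Factory.forProtocol(Echo), mode=0o660)
reactor.run()
If this standalone version works, the problem is likely specific to the environment or to the values being passed in (for instance, whatever get_umask('sock') returns) rather than to Twisted itself.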

RabbitMQ Python script: socket closed when connection was open

Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a "Socket error 104: Socket closed when connection was open" error. Below is the Python traceback and some code:
Traceback (most recent call last):
File "./server.py", line 34, in <module>
channel.start_consuming()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming
self.connection.process_data_events(time_limit=None)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events
self._dispatch_channel_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events
impl_channel._get_cookie()._dispatch_events()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events
evt.body)
File "./server.py", line 30, in on_request
body=json.dumps(DEVICE_INFO))
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish
mandatory, immediate)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish
self._flush_output()
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output
*waiters)
File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output
raise exceptions.ConnectionClosed()
pika.exceptions.ConnectionClosed
Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?
It might be because of a lost connection to the RabbitMQ server; pika doesn't handle disconnects itself, and this often results in a similar stack trace.
I also had a similar problem. In my case it was because my pika connection was dropping after some time, and my colleague was able to deal with this by adding a wait for mq:port_number.
We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:
filename.py --wait-secs 30 --port-wait mq:5672
I hope you are able to resolve this after doing that.
Otherwise it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it. A sketch of a defensive alternative follows.
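A minimal sketch of that defensive pattern: enable heartbeats and reconnect when the connection drops, instead of crashing. Parameter names follow recent pika releases (1.x); the 'mq' host, 'rpc_queue' name, and the handler body are assumptions standing in for the question's server:
import time
import pika

def on_request(channel, method, properties, body):
    # hypothetical handler, standing in for the question's on_request
    channel.basic_publish(exchange='', routing_key=properties.reply_to, body=body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

params = pika.ConnectionParameters(host='mq', port=5672,
                                   heartbeat=600,
                                   blocked_connection_timeout=300)

while True:
    try:
        connection = pika.BlockingConnection(params)
        channel = connection.channel()
        channel.queue_declare(queue='rpc_queue')
        channel.basic_consume(queue='rpc_queue', on_message_callback=on_request)
        channel.start_consuming()
    except pika.exceptions.AMQPConnectionError:
        time.sleep(5)  # connection dropped; wait and reconnect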

How to auto-refresh requested data and refresh variables shown in HTML?

So basically I am using a Python requests POST to grab data about the next bus's arrival time, then I used WebSockets and Tornado to make an HTML webpage that shows the data I grabbed.
However, I really don't know how to auto-refresh the data and the HTML. I tried to use this:
import threading

def getETA():
    threading.Timer(5.0, getETA).start()
... and the Python file ends with:
if __name__ == "__main__":
    app.listen(8888)
    ioloop.IOLoop.instance().start()
    getETA()
The code runs but shows this error:
Exception in thread Thread-25:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 920, in _bootstrap_inner
self.run()
File "C:\Python34\lib\threading.py", line 1186, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Tan\Desktop\DIP\DIP WEB INTERFACE\testing.py", line 183, in getETA
app.listen(8888)
File "C:\Python34\lib\site-packages\tornado\web.py", line 1788, in listen
server.listen(port, address)
File "C:\Python34\lib\site-packages\tornado\tcpserver.py", line 126, in listen
sockets = bind_sockets(port, address=address)
File "C:\Python34\lib\site-packages\tornado\netutil.py", line 187, in bind_sockets
sock.bind(sockaddr)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Please suggest the correct way, or another method, to make my HTML page on localhost show refreshed values.
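No answer is recorded here, but the traceback shows that getETA() calls app.listen(8888) on every timer tick (testing.py line 183), which tries to re-bind an already-bound port and raises WinError 10048. One common alternative, sketched under the assumption that the question's Tornado application and fetch logic are available, is to schedule the refresh on Tornado's own IOLoop with PeriodicCallback instead of threading.Timer:
import tornado.web
from tornado import ioloop

def getETA():
    # re-fetch the bus arrival data and push it to the websocket clients here;
    # do NOT call app.listen() again -- the port is already bound
    pass

if __name__ == "__main__":
    app = tornado.web.Application([])  # handlers omitted; assumed from the question
    app.listen(8888)
    ioloop.PeriodicCallback(getETA, 5000).start()  # run getETA every 5 seconds
    ioloop.IOLoop.instance().start()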
