I have a Python Bottle application that uses threads. Because I'm using gevent's monkey patching, the threads were blocking app execution (a dialog box fired from a thread was blocking Bottle routes from responding to the client until it was dismissed).
A little research here showed I should monkey patch without trying to patch Thread:
# Patch python's threads with greenlets
from gevent import monkey
monkey.patch_all(thread=False)
This does not block in a minimal example I wrote, but it raises the errors below under intensive use of threads, with methods like threading.Event().set().
This is the error I get:
C:\Users\IEUser\downloadloft-localserver>python mainserver.py
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 753, in run
self.finished.wait(self.interval)
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 403, in wait
self.__cond.wait(timeout)
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 262, in wait
_sleep(delay)
File "c:\users\admini~1\appdata\local\temp\easy_install-sch3hm\gevent-0.13.8-py2.7-win32.egg.tmp\gevent\hub.py", line 79, in sleep
switch_result = get_hub().switch()
File "c:\users\admini~1\appdata\local\temp\easy_install-sch3hm\gevent-0.13.8-py2.7-win32.egg.tmp\gevent\hub.py", line 135, in get_hub
raise NotImplementedError('gevent is only usable from a single thread')
NotImplementedError: gevent is only usable from a single thread
Bottle v0.12-dev server starting up (using GeventSocketIOServer())...
Listening on http://localhost:8080/
Hit Ctrl-C to quit.
Exception in thread Thread-2:
Traceback (most recent call last):
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 551, in __bootstrap_inner
self.run()
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 753, in run
self.finished.wait(self.interval)
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 403, in wait
self.__cond.wait(timeout)
File "C:\Program Files\DownloadLoft\Python27\lib\threading.py", line 262, in wait
_sleep(delay)
File "c:\users\admini~1\appdata\local\temp\easy_install-sch3hm\gevent-0.13.8-py2.7-win32.egg.tmp\gevent\hub.py", line 79, in sleep
switch_result = get_hub().switch()
File "c:\users\admini~1\appdata\local\temp\easy_install-sch3hm\gevent-0.13.8-py2.7-win32.egg.tmp\gevent\hub.py", line 135, in get_hub
raise NotImplementedError('gevent is only usable from a single thread')
NotImplementedError: gevent is only usable from a single thread
Is this a known issue with gevent's monkey patching? Any ideas?
Bottle applications are threaded, so you can't use gevent in any function called from Bottle routes.
To help you, I need to speculate about why you use threads.
If it's to speed up your Bottle website, just use the CherryPy server:
pip install cherrypy
(or just drop the cherrypy directory into your current directory; it's a pure-Python server)
Then run your Bottle app this way:
bottle.run(server='cherrypy')
If it's because you want to make non-blocking calls (such as fetching URLs) without blocking the response, it's easy enough to do it manually:
Create a Queue object (a special queue that can be filled and popped safely across threads).
Create and run a thread with an infinite loop that pops the queue and performs an action each time.
When you need a non-blocking call, push the action onto the queue and carry on.
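The steps above can be sketched with the standard library alone. This is a minimal example; `fetch_url` is a hypothetical stand-in for whatever blocking work (URL fetch, dialog, etc.) you would enqueue from a route:

```python
import threading
from queue import Queue  # Queue.Queue on Python 2

tasks = Queue()

def worker():
    # Infinite loop: pop a (function, args) job from the queue and run it.
    while True:
        job, args = tasks.get()
        try:
            job(*args)
        finally:
            tasks.task_done()

# Daemon thread so the worker won't keep the process alive on shutdown.
threading.Thread(target=worker, daemon=True).start()

results = []

def fetch_url(url):
    # Stand-in for a blocking call such as urllib.request.urlopen(url).
    results.append(url)

# Inside a route handler: enqueue the job and return immediately.
tasks.put((fetch_url, ("http://example.com",)))
tasks.join()  # only for this demo; a real route would not wait
print(results)
```

A real route handler would call `tasks.put(...)` and return at once; the `tasks.join()` here exists only so the demo can show the completed result.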
Related
I can use KafkaConsumer to consume messages in separate threads.
However, when I use multiprocessing.Process instead of threading.Thread, I get an error:
OSError: [Errno 9] Bad file descriptor
This question and the documentation suggest that using multiprocessing to consume messages in parallel is possible. Would someone please share a working example?
Edit
Here's some sample code. Sorry, the original code is too involved, so I created a sample that I hope communicates what is happening. This code works fine if I use threading.Thread instead of multiprocessing.Process.
from multiprocessing import Process
from kafka import KafkaConsumer

class KafkaWrapper():
    def __init__(self):
        self.consumer = KafkaConsumer(bootstrap_servers='my.server.com')

    def consume(self, topic):
        self.consumer.subscribe(topic)
        for message in self.consumer:
            print(message.value)

class ServiceInterface():
    def __init__(self):
        self.kafka_wrapper = KafkaWrapper()

    def start(self, topic):
        self.kafka_wrapper.consume(topic)

class ServiceA(ServiceInterface):
    pass

class ServiceB(ServiceInterface):
    pass

def main():
    serviceA = ServiceA()
    serviceB = ServiceB()
    jobs = []
    # The code works fine if I use threading.Thread here instead of Process
    jobs.append(Process(target=serviceA.start, args=("my-topic",)))
    jobs.append(Process(target=serviceB.start, args=("my-topic",)))
    for job in jobs:
        job.start()
    for job in jobs:
        job.join()

if __name__ == "__main__":
    main()
And here's the error I see (Again, my actual code is different from the above sample, and it works fine if I use threading.Thread but not if I use multiprocessing.Process):
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "service_interface.py", line 58, in start
self._kafka_wrapper.start_consuming(self.service_object_id)
File "kafka_wrapper.py", line 141, in start_consuming
for message in self._consumer:
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1082, in __next__
return next(self._iterator)
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1022, in _message_generator
self._client.poll(timeout_ms=poll_ms, sleep=True)
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 556, in poll
responses.extend(self._poll(timeout, sleep=sleep))
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 573, in _poll
ready = self._selector.select(timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "service_interface.py", line 58, in start
self._kafka_wrapper.start_consuming(self.service_object_id)
File "kafka_wrapper.py", line 141, in start_consuming
for message in self._consumer:
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1082, in __next__
return next(self._iterator)
File "venv/lib/python3.6/site-packages/kafka/consumer/group.py", line 1022, in _message_generator
self._client.poll(timeout_ms=poll_ms, sleep=True)
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 556, in poll
responses.extend(self._poll(timeout, sleep=sleep))
OSError: [Errno 9] Bad file descriptor
File "venv/lib/python3.6/site-packages/kafka/client_async.py", line 573, in _poll
ready = self._selector.select(timeout)
File "/usr/local/Cellar/python3/3.6.2/Frameworks/Python.framework/Versions/3.6/lib/python3.6/selectors.py", line 577, in select
kev_list = self._kqueue.control(None, max_ev, timeout)
OSError: [Errno 9] Bad file descriptor
Kafka consumers can be either multi-process or multi-threaded (make sure the client library you use correctly supports Kafka consumer groups, which is required with early versions of Kafka); the choice is up to you.
However, if you want to use processes, the Kafka client library needs to guarantee that it is fork-safe: the underlying TCP connections (to the Kafka brokers) must not be shared by more than one process. This is why you got a connection error.
As a workaround, do not create the KafkaConsumer before spawning the processes. Instead, move its creation into each process.
Another way is to use a single thread/process to fetch messages, and an extra process pool to do the real work.
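The fork-safety workaround can be sketched with the standard library alone. Here `make_consumer` is a hypothetical stand-in for the `KafkaConsumer(bootstrap_servers=...)` construction; the point is that each child process builds its own consumer (and hence its own TCP connections) after it starts, never inheriting one from the parent:

```python
from multiprocessing import Process, Queue

def make_consumer(topic):
    # Hypothetical stand-in for KafkaConsumer(bootstrap_servers=...);
    # the real consumer would open its own TCP connections here.
    return iter(["message from " + topic])

def consume(topic, out):
    # The consumer is created inside the child process, so its
    # sockets are never shared with the parent or with siblings.
    consumer = make_consumer(topic)
    for message in consumer:
        out.put(message)

# Assumes the 'fork' start method (Linux default); on other platforms
# this should live under an `if __name__ == "__main__":` guard.
out = Queue()
jobs = [Process(target=consume, args=("my-topic", out)) for _ in range(2)]
for job in jobs:
    job.start()
for job in jobs:
    job.join()
results = sorted(out.get() for _ in range(2))
print(results)
```

Applied to the sample above, this means `KafkaWrapper.__init__` should not build the consumer in the parent; instead, `consume` (running in the child) should construct it before iterating.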
I'm writing a Python UI in Kivy to manage some remote machines with Fabric. As I can't use Fabric's parallel implementation on Windows 10 (see here), I was hoping to use parallel-ssh to actually perform the parallel remote operations. This issue seems to be caused by the interactions between the libraries rather than by an issue with any single one of them.
I've tried manually loading my private key as suggested here:
from fabric.api import execute
import pssh
from pssh.utils import load_private_key

hosts = ['192.168.0.2']
private_key = load_private_key('C:/Users/democracy/.ssh/id_rsa')
pssh_client = pssh.ParallelSSHClient(hosts, user='XXX', password='YYY', pkey=private_key)
output = pssh_client.run_command('whoami', sudo=True)
pssh_client.join(output)
for host in output:
    for line in output[host]['stdout']:
        print("Host %s - output: %s" % (host, line))
The above code results in the following backtrace:
Exception: Error reading SSH protocol banner('This operation would block forever', <Hub at 0x242566ab9c8 select pending=0 ref=0>)
Traceback (most recent call last):
File "C:\environments\democracy\lib\site-packages\paramiko\transport.py", line 1884, in _check_banner
buf = self.packetizer.readline(timeout)
File "C:\environments\democracy\lib\site-packages\paramiko\packet.py", line 331, in readline
buf += self._read_timeout(timeout)
File "C:\environments\democracy\lib\site-packages\paramiko\packet.py", line 485, in _read_timeout
x = self.__socket.recv(128)
File "C:\environments\democracy\lib\site-packages\gevent\_socket3.py", line 317, in recv
self._wait(self._read_event)
File "C:\environments\democracy\lib\site-packages\gevent\_socket3.py", line 144, in _wait
self.hub.wait(watcher)
File "C:\environments\democracy\lib\site-packages\gevent\hub.py", line 630, in wait
result = waiter.get()
File "C:\environments\democracy\lib\site-packages\gevent\hub.py", line 878, in get
return self.hub.switch()
File "C:\environments\democracy\lib\site-packages\gevent\hub.py", line 609, in switch
return greenlet.switch(self)
gevent.hub.LoopExit: ('This operation would block forever', <Hub at 0x242566ab9c8 select pending=0 ref=0>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\environments\democracy\lib\site-packages\paramiko\transport.py", line 1740, in run
self._check_banner()
File "C:\environments\democracy\lib\site-packages\paramiko\transport.py", line 1888, in _check_banner
raise SSHException('Error reading SSH protocol banner' + str(e))
paramiko.ssh_exception.SSHException: Error reading SSH protocol banner('This operation would block forever', <Hub at 0x242566ab9c8 select pending=0 ref=0>)
General SSH error - Error reading SSH protocol banner('This operation would block forever', <Hub at 0x242566ab9c8 select pending=0 ref=0>)
The above code works if I import pssh before fabric. Unfortunately, it seems that if I do this, any buttons on my kivy interface (which kick off operations in a background thread) block forever on press. If I go to the console after a button press and send a keyboard interrupt, kivy stops blocking and begins cleanup, but executes the command from the button press before exiting. The stack trace on sending this interrupt is below:
[INFO ] [Base ] Leaving application in progress...
Traceback (most recent call last):
File "machine_control_ui.py", line 7, in <module>
DemocracyControllerApp().run()
File "C:\environments\democracy\lib\site-packages\kivy\app.py", line 828, in run
runTouchApp()
File "C:\environments\democracy\lib\site-packages\kivy\base.py", line 504, in runTouchApp
EventLoop.window.mainloop()
File "C:\environments\democracy\lib\site-packages\kivy\core\window\window_sdl2.py", line 659, in mainloop
self._mainloop()
File "C:\environments\democracy\lib\site-packages\kivy\core\window\window_sdl2.py", line 405, in _mainloop
EventLoop.idle()
File "C:\environments\democracy\lib\site-packages\kivy\base.py", line 339, in idle
Clock.tick()
File "C:\environments\democracy\lib\site-packages\kivy\clock.py", line 553, in tick
current = self.idle()
File "C:\environments\democracy\lib\site-packages\kivy\clock.py", line 533, in idle
usleep(1000000 * sleeptime)
File "C:\environments\democracy\lib\site-packages\kivy\clock.py", line 717, in usleep
_usleep(microseconds, self._sleep_obj)
File "C:\environments\democracy\lib\site-packages\kivy\clock.py", line 395, in _usleep
_kernel32.WaitForSingleObject(obj, 0xffffffff)
KeyboardInterrupt
*** BUTTON PRESS OPERATION OUTPUTS HERE ***
Any insight into why this might be happening and how I can avoid it would be much appreciated. I could potentially investigate other parallel ssh solutions (although I imagine anything using paramiko would have the same issue), or manually kick off a thread per host to achieve the parallel operation otherwise (which probably has its own list of headaches), but I'd prefer to just use the parallel-ssh library if there's a workable solution.
I'm using parallel-ssh 0.92.2 on Python 3 and Windows 10.
From the docs:
parallel-ssh uses gevent’s monkey patching to enable asynchronous use
of the Python standard library’s network I/O.
Make sure that ParallelSSH imports come before any other imports in
your code. Otherwise, patching may not be done before the standard
library is loaded which will then cause ParallelSSH to block.
If you are seeing messages like This operation would block forever,
this is the cause.
Monkey patching is only done for the clients under pssh.pssh_client
and pssh.ssh_client for parallel and single host clients respectively.
New native library based clients under pssh.pssh2_client and
pssh.ssh2_client do not perform monkey patching and are an option if
monkey patching is not suitable. These clients will become the default
in a future major release - 2.0.0.
Since monkey patching is used for the client you are using, other uses of the threading, socket, etc. modules in your application will also have been patched to use gevent, which means they no longer run in native threads but in coroutines/greenlets.
This is why your background thread operations block: they run as greenlets on the same thread rather than in a new thread.
As of 1.2.0, a new client based on libssh2 instead of paramiko is available which does not use monkey patching:
from pssh.pssh2_client import ParallelSSHClient
<..>
Rest of your application can then use the standard library as-is.
Basically, I am using a Python requests POST to grab data about the next bus arrival time, then I use WebSockets and Tornado to make an HTML page that shows the data I grabbed.
However, I really don't know how to auto-refresh the data and the HTML. I tried to use this:
import threading

def getETA():
    threading.Timer(5.0, getETA).start()
... the python file ends with
if __name__ == "__main__":
    app.listen(8888)
    ioloop.IOLoop.instance().start()
    getETA()
When the code runs, it shows this error:
Exception in thread Thread-25:
Traceback (most recent call last):
File "C:\Python34\lib\threading.py", line 920, in _bootstrap_inner
self.run()
File "C:\Python34\lib\threading.py", line 1186, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Tan\Desktop\DIP\DIP WEB INTERFACE\testing.py", line 183, in getETA
app.listen(8888)
File "C:\Python34\lib\site-packages\tornado\web.py", line 1788, in listen
server.listen(port, address)
File "C:\Python34\lib\site-packages\tornado\tcpserver.py", line 126, in listen
sockets = bind_sockets(port, address=address)
File "C:\Python34\lib\site-packages\tornado\netutil.py", line 187, in bind_sockets
sock.bind(sockaddr)
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Please suggest the correct way, or another method, to let my HTML page on localhost show refreshed values.
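The WinError 10048 happens because `app.listen(8888)` ends up inside the repeating `getETA` (see line 183 of testing.py in the traceback), so every tick tries to bind the port again. One fix is to let the timer refresh only the data and start the server exactly once. A stdlib-only sketch, where `fetch_eta` is a hypothetical stand-in for the requests call:

```python
import threading

eta = {"value": None}

def fetch_eta():
    # Hypothetical stand-in for the requests.post(...) call that
    # grabs the next bus arrival time.
    eta["value"] = "5 min"

def refresh(interval=5.0):
    # Refresh the data only -- never bind the server port again here.
    fetch_eta()
    t = threading.Timer(interval, refresh, args=(interval,))
    t.daemon = True  # don't keep the process alive on shutdown
    t.start()

refresh(0.01)  # short interval just for this demo

# The server is then started exactly once, e.g.:
# app.listen(8888)
# ioloop.IOLoop.instance().start()
```

Since the app already runs a Tornado IOLoop, `tornado.ioloop.PeriodicCallback` is the more idiomatic choice: it runs the refresh on the IOLoop itself, avoiding threads entirely.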
First: I am an absolute beginner in Python; I used to write PHP, so if I am getting something completely wrong, please let me know.
I am writing an app. It should serve its information via WebSockets. I chose Flask-SocketIO for this. In the background I want to process the data. Because I would like to keep the app small, I decided against a solution like Celery.
I have shortened the code to:
# -*- coding: utf8 -*-
from flask import Flask, jsonify, abort, make_response, url_for, request, render_template
from flask.ext.socketio import SocketIO, emit
from multiprocessing import Pool
from multiprocessing.managers import BaseManager
import time
import os

def background_stuff(args):
    while True:
        try:
            print args
            time.sleep(1)
        except Exception as e:
            return e

thread = None
_pool = None

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    _pool = Pool(1)
    if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
        workers = _pool.apply_async(
            func=background_stuff,
            args=('do background stuff',),
        )
    socketio.run(app)
    # app.run()
When starting this, I get the following messages:
python test/multitest.py
* Running on http://127.0.0.1:5000/
* Restarting with stat
do background stuff
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 336, in _handle_tasks
for taskseq, set_length in iter(taskqueue.get, None):
File "/usr/lib/python2.7/Queue.py", line 168, in get
self.not_empty.wait()
File "/usr/lib/python2.7/threading.py", line 340, in wait
waiter.acquire()
File "gevent/_semaphore.pyx", line 112, in gevent._semaphore.Semaphore.acquire (gevent/gevent._semaphore.c:3386)
File "/home/phil/work/ttimer/server/local/lib/python2.7/site-packages/gevent/hub.py", line 338, in switch
return greenlet.switch(self)
LoopExit: This operation would block forever
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
127.0.0.1 - - [2015-09-30 00:06:23] "GET / HTTP/1.1" 200 120 0.001860
do background stuff
do background stuff
do background stuff
do background stuff
^CProcess PoolWorker-1:
Process PoolWorker-1:
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 113, in worker
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
result = (True, func(*args, **kwds))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
File "test/multitest.py", line 14, in background_stuff
KeyboardInterrupt
time.sleep(1)
KeyboardInterrupt
return recv()
KeyboardInterrupt
So the background process is working and it answers HTTP requests (127.0.0.1 - - [2015-09-30 00:06:23] "GET / HTTP/1.1" 200 120 0.001860). But just ignoring an error because it seems to work does not seem like a solution to me. Can anyone tell me what I am doing wrong here?
If you say I can't do it that way, can you tell me why? I would like to learn and understand what I am doing wrong.
I read something about monkey patching, but everything suggested just threw more or different errors. I think it is better to work on the first error instead of blindly trying fixes.
python -V
Python 2.7.9
Greetings
update
I added the 2 lines for monkeypatching, this is what I got:
$python multitest2.py
^CProcess PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
return recv()
KeyboardInterrupt
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 380, in _handle_results
task = get()
KeyboardInterrupt
* Running on http://127.0.0.1:5000/
* Restarting with stat
^CProcess PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
Exception in thread Thread-3:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 380, in _handle_results
task = get()
KeyboardInterrupt
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/usr/lib/python2.7/multiprocessing/queues.py", line 376, in get
return recv()
KeyboardInterrupt
do background stuff
FAILED to start flash policy server: [Errno 98] Address already in use: ('127.0.0.1', 10843)
$do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
do background stuff
On start there is no output at all. After hitting Ctrl-C several times, I get the background stuff output. This continues until I kill the Python process via SIGKILL.
update 2
What I expect to see is:
* Running on http://127.0.0.1:5000/
* Restarting with stat
do background stuff
do background stuff
do background stuff
right after running the script. But before I press Ctrl-C, nothing happens.
First of all, you need to be aware that the version of Flask-SocketIO that you are using requires gevent, which is a coroutine framework. Using the asynchronous coroutines of gevent with a multiprocessing pool is a strange combination. You are using gevent, so what would make the most sense is to use the gevent pool functionality so that everything is consistent.
Now regarding the problem, I think it is likely due to not having the standard library monkey patched at an early stage. I recommend that you add the following lines at the very top of your script (above your imports, make them lines 1 and 2):
from gevent import monkey
monkey.patch_all()
These will ensure that any calls into standard library for things such as threads, semaphores, etc. go to the gevent implementations.
Update: I tried your example. The original version, without monkey-patching, works fine for me, I do not see the LoopExit error that you reported. Adding the monkey patching prevents the background processes from running, as you reported.
In any case, I converted your script to use gevent.pool and that works reliably for me. Here is the edited script:
from flask import Flask, jsonify, abort, make_response, url_for, request, render_template
from flask.ext.socketio import SocketIO, emit
from gevent.pool import Pool
import time
import os

def background_stuff(args):
    while True:
        try:
            print args
            time.sleep(1)
        except Exception as e:
            return e

thread = None
_pool = None

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    _pool = Pool(1)
    workers = _pool.apply_async(
        func=background_stuff,
        args=('do background stuff',),
    )
    socketio.run(app)
Hope this helps!
I read a tutorial about gevent and found a solution which is simple and clean for my needs:
# -*- coding: utf8 -*-
from flask import Flask
from flask.ext.socketio import SocketIO
import gevent
import os

def background_stuff():
    while True:
        try:
            print 'doing background work ... '
            gevent.sleep(1)
        except Exception as e:
            return e

app = Flask(__name__)
app.debug = True
socketio = SocketIO(app)

@app.route('/', methods=['GET'])
def get_timers():
    return 'timer'

if __name__ == '__main__':
    if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
        gevent.spawn(background_stuff)
    socketio.run(app)
The tutorial can be found here: http://sdiehl.github.io/gevent-tutorial/#long-polling
It even talks about problems with gevent and multiprocessing: http://sdiehl.github.io/gevent-tutorial/#subprocess , but because I found a simple solution fitting my needs, I didn't try anything else.
I have the following problem:
I wrote an FTPHandler(StreamHandler), which connects via transport=paramiko.Transport(...) and transport.connect(...) to a server and opens an SFTP connection with SFTPClient.from_transport(...).
I am importing this handler in a module named 'JUS_Logger.py', which is my module for logging. This 'JUS_Logger' is imported by another class, 'JUS_Reader'.
The problem is that if I start 'JUS_Reader', the transport is initialized but the connection fails. There is no exception; the program just hangs. If I kill it, I get this stack trace:
^CTraceback (most recent call last):
File "./JUS_Reader.py", line 24, in <module>
from JUS_Logger import logger
File "/<home>/.../JUS_Logger.py", line 74, in <module>
ftpHandler=FTPHandler(ftpOut,10)
File "/<home>/FTPHandler.py", line 21, in __init__
self.transport.connect(username=ftpOut['user'].decode('base64'),password=ftpOut['passwd'].decode('base64'))
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1004, in connect
self.auth_password(username, password)
File "/usr/lib/python2.7/dist-packages/paramiko/transport.py", line 1165, in auth_password
return self.auth_handler.wait_for_response(my_event)
File "/usr/lib/python2.7/dist-packages/paramiko/auth_handler.py", line 158, in wait_for_response
event.wait(0.1)
File "/usr/lib/python2.7/threading.py", line 403, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 262, in wait
_sleep(delay)
However, if I run 'JUS_Logger.py' by itself, everything works: the transport's connection is established and the SFTPClient connects as well.
Any ideas? Or further questions?