I'm building an app using Kafka and Spark Streaming. Input data comes from a third-party stream and is published on a Kafka topic. This code shows the StreamProxy module: it is how I get results from the stream and send them to the KafkaPublisher (only a sketch is shown):
def on_result_response(self, *args):
    self.kafkaPublisher.pushMessage(str(args[0]))
The KafkaPublisher is implemented with these two methods:
class KafkaPublisher:
    def __init__(self, address, port, topic):
        self.kafka = KafkaClient(str(address) + ":" + str(port))
        self.producer = SimpleProducer(self.kafka)
        self.topic = topic

    def pushMessage(self, message):
        self.producer.send_messages(self.topic, message)
        self.producer = SimpleProducer(self.kafka, async=True)
And the app is launched by this main:
from StreamProxy import StreamProxy
streamProxy=StreamProxy("localhost",9092,"task1")
streamProxy.getStreaming(20) #seconds of streaming
After some batch processing (roughly 10 seconds), the following exceptions are raised:
Exception in thread Thread-2354:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
File "/usr/lib/python2.7/threading.py", line 754, in run
File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 164, in _send_upstream
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 649, in send_produce_request
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 253, in _send_broker_aware_request
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 74, in _get_conn
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 236, in connect
error: [Errno 24] Too many open files
Exception in thread Thread-2355:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
File "/usr/lib/python2.7/threading.py", line 754, in run
File "/usr/local/lib/python2.7/dist-packages/kafka/producer/base.py", line 164, in _send_upstream
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 649, in send_produce_request
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 253, in _send_broker_aware_request
File "/usr/local/lib/python2.7/dist-packages/kafka/client.py", line 74, in _get_conn
File "/usr/local/lib/python2.7/dist-packages/kafka/conn.py", line 236, in connect
error: [Errno 24] Too many open files
Note that many exceptions with this same message are raised, and the problem is almost certainly on the publisher side.
Try deleting the line:
self.producer = SimpleProducer(self.kafka, async=True)
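If an asynchronous producer is actually wanted, a minimal sketch (assuming the same old kafka-python KafkaClient/SimpleProducer API used in the question) is to create it once in __init__ instead of on every call, since each new SimpleProducer opens its own sockets and background thread:

from kafka import KafkaClient, SimpleProducer  # assumption: pre-1.0 kafka-python

class KafkaPublisher:
    def __init__(self, address, port, topic):
        self.kafka = KafkaClient(str(address) + ":" + str(port))
        # Create the producer once; building a new SimpleProducer per message
        # opens new connections and threads each time and eventually exhausts
        # file descriptors ("Too many open files").
        self.producer = SimpleProducer(self.kafka, async=True)
        self.topic = topic

    def pushMessage(self, message):
        self.producer.send_messages(self.topic, message)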
Related
I am having a weird issue: I moved all my code & data to a different location with more disk space, then soft-linked my projects & data to those new locations. I assume there is some file handle issue, because wandb's logger is throwing errors. So my questions are:
How do I have wandb log only online and not locally (i.e. stop it from writing anything to ./wandb or any other hidden location it might be logging to), since that is what is creating issues? Note that my code ran fine after I stopped logging to wandb, so I assume that was the problem. Also note that dir=None is the default for wandb's parameter.
How do I resolve this issue entirely so that it works seamlessly with all my projects soft-linked somewhere else?
More details on the error
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1087, in emit
self.flush()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1067, in flush
self.stream.flush()
OSError: [Errno 116] Stale file handle
Call stack:
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 930, in _bootstrap
self._bootstrap_inner()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/vendor/watchdog/observers/api.py", line 199, in run
self.dispatch_events(self.event_queue, self.timeout)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/vendor/watchdog/observers/api.py", line 368, in dispatch_events
handler.dispatch(event)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/vendor/watchdog/events.py", line 454, in dispatch
_method_map[event_type](event)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/filesync/dir_watcher.py", line 275, in _on_file_created
logger.info("file/dir created: %s", event.src_path)
Message: 'file/dir created: %s'
Arguments: ('/shared/rsaas/miranda9/diversity-for-predictive-success-of-meta-learning/wandb/run-20221023_170722-1tfzh49r/files/output.log',)
--- Logging error ---
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1087, in emit
self.flush()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1067, in flush
self.stream.flush()
OSError: [Errno 116] Stale file handle
Call stack:
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 930, in _bootstrap
self._bootstrap_inner()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/internal/internal_util.py", line 50, in run
self._run()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/internal/internal_util.py", line 101, in _run
self._process(record)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/internal/internal.py", line 263, in _process
self._hm.handle(record)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/internal/handler.py", line 130, in handle
handler(record)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/internal/handler.py", line 138, in handle_request
logger.debug(f"handle_request: {request_type}")
Message: 'handle_request: stop_status'
Arguments: ()
N/A% (0 of 100000) | | Elapsed Time: 0:00:00 | ETA: --:--:-- | 0.0 s/it
Traceback (most recent call last):
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1814, in <module>
main()
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1747, in main
train(args=args)
File "/home/miranda9/diversity-for-predictive-success-of-meta-learning/div_src/diversity_src/experiment_mains/main_dist_maml_l2l.py", line 1794, in train
meta_train_iterations_ala_l2l(args, args.agent, args.opt, args.scheduler)
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/torch_uu/training/meta_training.py", line 167, in meta_train_iterations_ala_l2l
log_zeroth_step(args, meta_learner)
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/logging_uu/wandb_logging/meta_learning.py", line 92, in log_zeroth_step
log_train_val_stats(args, args.it, step_name, train_loss, train_acc, training=True)
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/logging_uu/wandb_logging/supervised_learning.py", line 55, in log_train_val_stats
_log_train_val_stats(args=args,
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/logging_uu/wandb_logging/supervised_learning.py", line 116, in _log_train_val_stats
args.logger.log('\n')
File "/home/miranda9/ultimate-utils/ultimate-utils-proj-src/uutils/logger.py", line 89, in log
print(msg, flush=flush)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/lib/redirect.py", line 640, in write
self._old_write(data)
OSError: [Errno 116] Stale file handle
wandb: Waiting for W&B process to finish... (failed 1). Press Control-C to abort syncing.
wandb: Synced vit_mi Adam_rfs_cifarfs Adam_cosine_scheduler_rfs_cifarfs 0.001: args.jobid=101161: https://wandb.ai/brando/entire-diversity-spectrum/runs/1tfzh49r
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: ./wandb/run-20221023_170722-1tfzh49r/logs
--- Logging error ---
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/interface/router_sock.py", line 27, in _read_message
resp = self._sock_client.read_server_response(timeout=1)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/lib/sock_client.py", line 283, in read_server_response
data = self._read_packet_bytes(timeout=timeout)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/lib/sock_client.py", line 269, in _read_packet_bytes
raise SockClientClosedError()
wandb.sdk.lib.sock_client.SockClientClosedError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/interface/router.py", line 70, in message_loop
msg = self._read_message()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/interface/router_sock.py", line 29, in _read_message
raise MessageRouterClosedError
wandb.sdk.interface.router.MessageRouterClosedError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1087, in emit
self.flush()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1067, in flush
self.stream.flush()
OSError: [Errno 116] Stale file handle
Call stack:
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 930, in _bootstrap
self._bootstrap_inner()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/interface/router.py", line 77, in message_loop
logger.warning("message_loop has been closed")
Message: 'message_loop has been closed'
Arguments: ()
/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/srv/condor/execute/dir_27749/tmpmvf78q6owandb'>
_warnings.warn(warn_message, ResourceWarning)
/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/srv/condor/execute/dir_27749/tmpt5etqpw_wandb-artifacts'>
_warnings.warn(warn_message, ResourceWarning)
/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/srv/condor/execute/dir_27749/tmp55lzwviywandb-media'>
_warnings.warn(warn_message, ResourceWarning)
/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/tempfile.py:817: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/srv/condor/execute/dir_27749/tmprmk7lnx4wandb-media'>
_warnings.warn(warn_message, ResourceWarning)
Error:
====> about to start train loop
Starting training!
WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)'))': /api/5288891/envelope/
--- Logging error ---
Traceback (most recent call last):
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/logging/__init__.py", line 1086, in emit
stream.write(msg + self.terminator)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/wandb/sdk/lib/redirect.py", line 640, in write
self._old_write(data)
OSError: [Errno 116] Stale file handle
Call stack:
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 930, in _bootstrap
self._bootstrap_inner()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 973, in _bootstrap_inner
self.run()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/threading.py", line 910, in run
self._target(*self._args, **self._kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/sentry_sdk/worker.py", line 128, in _target
callback()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/sentry_sdk/transport.py", line 467, in send_envelope_wrapper
self._send_envelope(envelope)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/sentry_sdk/transport.py", line 384, in _send_envelope
self._send_request(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/sentry_sdk/transport.py", line 230, in _send_request
response = self._pool.request(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/urllib3/request.py", line 78, in request
return self.request_encode_body(
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/urllib3/request.py", line 170, in request_encode_body
return self.urlopen(method, url, **extra_kw)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/urllib3/poolmanager.py", line 375, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/urllib3/connectionpool.py", line 780, in urlopen
log.warning(
Message: "Retrying (%r) after connection broken by '%r': %s"
Arguments: (Retry(total=2, connect=None, read=None, redirect=None, status=None), SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')), '/api/5288891/envelope/')
Bounty
My suggestions on what might solve this are:
Figuring out a way to stop wandb from logging locally, or at least to minimize how much it logs locally.
Figuring out exactly what is being logged and minimizing the space it takes.
Having the logging work even when all the folders are symlinked (imho this should work out of the box).
Figuring out a systematic and simple way to find where the stale file handles are coming from.
I am surprised moving everything to /shared/rsaas/miranda9/ and running experiments from there did not solve the issue.
Cross-posted:
https://community.wandb.ai/t/how-to-stop-logging-locally-but-only-save-to-wandbs-servers-and-have-wandb-work-using-soft-links/3305
https://www.reddit.com/r/learnmachinelearning/comments/ybvo73/how_to_stop_logging_locally_but_only_save_to/
GitHub issue: https://github.com/wandb/wandb/issues/4409
It seems the solution is not to log to odd symlinked places but to real paths, and to clean up the local wandb directories often so they don't hit the disk quota on your HPC. Not my favourite solution, but it gets the job done :).
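For the periodic cleanup, a rough helper of my own (not part of the wandb API; it assumes the default ./wandb/run-* directory layout) could look like this:

import shutil
import time
from pathlib import Path

def prune_wandb_runs(wandb_dir="./wandb", max_age_days=7.0):
    # Delete local run directories older than max_age_days.
    # Only run this on directories whose runs have already finished syncing.
    cutoff = time.time() - max_age_days * 24 * 3600
    for run_dir in Path(wandb_dir).glob("run-*"):
        if run_dir.is_dir() and run_dir.stat().st_mtime < cutoff:
            shutil.rmtree(run_dir, ignore_errors=True)

prune_wandb_runs()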
Wandb should fix this; the whole point of wandb is that it works out of the box, so I don't have to do MLOps and can focus on research.
It is likely best to follow the discussion here: https://github.com/wandb/wandb/issues/4409
How do I have wandb log only online and not locally (i.e. stop it from writing anything to ./wandb or any other hidden location it might be logging to), since that is creating issues?
You could try wandb offline, as described in the wandb documentation, to turn off logging:
The command wandb offline sets an environment variable, WANDB_MODE=offline. This stops any data from syncing from your machine to the remote wandb server. If you have multiple projects, they will all stop syncing logged data to W&B servers.
You could also check out this discussion on "wandb sync not logging in while running wandb local", where some people managed to figure out it had something to do with the --network host flag.
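A minimal sketch of the related knobs from Python (assumption: a recent wandb client where init() accepts the mode and dir parameters; the project name and paths below are only illustrative):

import wandb

# Option 1: the equivalent of running `wandb offline` from the shell is to set
# WANDB_MODE=offline in the environment before wandb starts, e.g.
#   import os; os.environ["WANDB_MODE"] = "offline"
# which stops any syncing to the W&B servers for this process.

# Option 2: pass the mode and the local log directory directly to init():
# "online" syncs to the server, "offline" only writes locally, and
# "disabled" turns wandb into a no-op. dir controls where ./wandb is created.
run = wandb.init(
    project="my-project",   # hypothetical project name
    mode="online",
    dir="/tmp/wandb_logs",  # hypothetical real (non-symlinked) path
)
run.log({"loss": 0.0})
run.finish()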
In Python, to share data between different processes with multiprocessing, we use multiprocessing.Manager(). I want to get the output [1,2,3,4,5,6,7,8,9,10] from the following code, but I am getting an EOFError. Why?
The code is:
import multiprocessing
manager=multiprocessing.Manager()
final_list=manager.list()
input_list_one=[1,2,3,4,5]
input_list_two=[6,7,8,9,10]
def worker(data):
    for item in data:
        final_list.append(item)
process_1=multiprocessing.Process(target=worker,args=[input_list_one])
process_2=multiprocessing.Process(target=worker,args=[input_list_two])
process_1.start()
process_2.start()
process_1.join()
process_2.join()
print(final_list)
I am getting the following error:
Process SyncManager-1:
Traceback (most recent call last):
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/managers.py", line 539, in _run_server
server = cls._Server(registry, address, authkey, serializer)
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/managers.py", line 139, in __init__
self.listener = Listener(address=address, backlog=16)
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/connection.py", line 438, in __init__
self._listener = SocketListener(address, family, backlog)
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/connection.py", line 576, in __init__
self._socket.bind(address)
PermissionError: [Errno 13] Permission denied
Traceback (most recent call last):
File "/storage/emulated/0/qpython/.last_tmp.py", line 2, in <module>
manager=multiprocessing.Manager()
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/context.py", line 56, in Manager
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/managers.py", line 517, in start
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/connection.py", line 250, in recv
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/connection.py", line 407, in _recv_bytes
File "/data/user/0/org.qpython.qpy3/files/lib/python36.zip/multiprocessing/connection.py", line 383, in _recv
EOFError
1|u0_a823#land:/ $
I am trying to run the remote manager example code from the multiprocessing documentation in PyPy3, but I get an error when connecting the client.
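For context, the client side follows the documentation's remote-manager example, roughly like this (the host, port, and authkey are illustrative); connecting from PyPy3 produces the traceback below:

from multiprocessing.managers import BaseManager

class QueueManager(BaseManager):
    pass

# The server side registers a shared queue under the same name.
QueueManager.register('get_queue')

m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
m.connect()          # this is the call that fails under PyPy3
queue = m.get_queue()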
Traceback (most recent call last):
File "C:/temp/testpypy/mp_client.py", line 7, in <module>
m.connect()
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\managers.py", line 455, in connect
conn = Client(self._address, authkey=self._authkey)
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\connection.py", line 493, in Client
answer_challenge(c, authkey)
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\connection.py", line 732, in answer_challenge
message = connection.recv_bytes(256) # reject large message
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "C:\Python\pypy3-v6.0.0-win32\lib-python\3\multiprocessing\connection.py", line 386, in _recv
buf.write(chunk)
TypeError: 'str' does not support the buffer interface
If I try to connect to it from a CPython interpreter (which is my ultimate goal), I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "c:\Python\3.5.4.2\WinPython\python-3.5.4.amd64\lib\multiprocessing\managers.py", line 455, in connect
conn = Client(self._address, authkey=self._authkey)
File "c:\Python\3.5.4.2\WinPython\python-3.5.4.amd64\lib\multiprocessing\connection.py", line 493, in Client
answer_challenge(c, authkey)
File "c:\Python\3.5.4.2\WinPython\python-3.5.4.amd64\lib\multiprocessing\connection.py", line 737, in answer_challenge
response = connection.recv_bytes(256) # reject large message
File "c:\Python\3.5.4.2\WinPython\python-3.5.4.amd64\lib\multiprocessing\connection.py", line 218, in recv_bytes
self._bad_message_length()
File "c:\Python\3.5.4.2\WinPython\python-3.5.4.amd64\lib\multiprocessing\connection.py", line 151, in _bad_message_length
raise OSError("bad message length")
OSError: bad message length
It turns out to be a bug in PyPy3.
Here is the ticket where it was fixed:
https://bitbucket.org/pypy/pypy/issues/2841/remote-multprocessing-issue#comment-45861347
I am using Skype4Py to create a Skype bot.
I wanted to install the bot in a Linux environment (Ubuntu 12.04, if I recall correctly),
and I installed Skype plus the bot and its dependencies.
Now, whenever I ask for message.Chat.Type, it gives me a command timeout.
Any solution?
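For context, the bot is wired up roughly like this (a sketch from memory of the Skype4Py event API; my handler name is illustrative, and the Chat.Type access is the call that times out):

import time
import Skype4Py

def on_message_status(message, status):
    # Accessing message.Chat.Type asks the running Skype client for the chat
    # type; this is the call that raises the command timeout in my bot.
    chat_type = message.Chat.Type
    print(chat_type, message.Body)

skype = Skype4Py.Skype()
skype.Attach()                       # attach to the running Skype client
skype.OnMessageStatus = on_message_status

while True:                          # keep the script alive so events fire
    time.sleep(1)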
error:
Exception in thread Skype4Py MessageStatus event scheduler:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 551, in __bootstrap_inner
self.run()
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/utils.py", line 225, in run
handler(*self.args, **self.kwargs)
File "functions/messageProcessor.py", line 161, in processMessages
if allowed(message, "url_parse"):
File "functions/messageProcessor.py", line 62, in allowed
chatType = message.Chat.Type
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/chat.py", line 405, in _GetType
return str(self._Property('TYPE'))
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/chat.py", line 33, in _Property
return self._Owner._Property('CHAT', self.Name, PropName, Value, Cache)
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/skype.py", line 296, in _Property
value = self._DoCommand('GET %s' % jarg, jarg)
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/skype.py", line 276, in _DoCommand
self.SendCommand(command)
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/skype.py", line 778, in SendCommand
self._Api.send_command(Command)
File "/usr/local/lib/python2.7/dist-packages/Skype4Py/api/posix_x11.py", line 445, in send_command
raise SkypeAPIError('Skype command timeout')
SkypeAPIError: Skype command timeout
I have a setup containing Varnish, nginx, and two Pyramid backends, one of them running a socket.io app. The whole stack works correctly on my local development computer, but I can't get the websocket part to work on the production computer. This is the traceback the socketio app throws when a socket.io client tries to connect:
[2012-11-20 14:32:00] "GET /socket.io/1/websocket/790777701707 HTTP/1.1" 101 - -
Traceback (most recent call last):
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/greenlet.py", line 390, in run
result = self._run(*self.args, **self.kwargs)
File "/var/pyramid/maxserver/eggs/gevent_socketio-0.3.5_rc2-py2.7.egg/socketio/transports.py", line 226, in send_into_ws
websocket.send(message)
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/websocket.py", line 350, in send
return self.send_frame(message, self.OPCODE_TEXT)
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/websocket.py", line 340, in send_frame
self._write(combined)
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/socket.py", line 515, in sendall
data_sent += self.send(_get_memory(data, data_sent), flags, timeout=timeleft)
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/socket.py", line 483, in send
return sock.send(data, flags)
error: [Errno 32] Broken pipe
<Greenlet at 0xa449c0c: send_into_ws> failed with error
Traceback (most recent call last):
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/greenlet.py", line 390, in run
result = self._run(*self.args, **self.kwargs)
File "/var/pyramid/maxserver/eggs/gevent_socketio-0.3.5_rc2-py2.7.egg/socketio/transports.py", line 230, in read_from_ws
message = websocket.receive()
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/websocket.py", line 296, in receive
result = self._receive()
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/websocket.py", line 244, in _receive
frame = self.receive_frame()
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/websocket.py", line 177, in receive_frame
data0 = read(2)
File "/var/pyramid/maxserver/eggs/gevent_websocket-0.3.6-py2.7.egg/geventwebsocket/python_fixes.py", line 22, in readinto
return self._sock.recv_into(b)
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/socket.py", line 472, in recv_into
wait_read(sock.fileno(), timeout=self.timeout, event=self._read_event)
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/socket.py", line 169, in wait_read
switch_result = get_hub().switch()
File "/var/pyramid/maxserver/eggs/gevent-0.13.8-py2.7-linux-i686.egg/gevent/hub.py", line 164, in switch
return greenlet.switch(self)
timeout: timed out
<Greenlet at 0xa46157c: read_from_ws> failed with timeout
I've checked all Python package versions and they are identical on both computers. I also upgraded the production computer to libevent 1.4.14b to match the local computer.
I don't know which way to go to debug this. Help appreciated!