Trouble getting simple Kafka producer going - python

I'm running Kafka locally on my Mac Pro (Sierra; 10.12.6) just to get started with development. I've started ZooKeeper and a Kafka server (0.11.0.1):
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
I've got topics created:
bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
access
my-topic
(not sure what __consumer_offsets is, I created the other two).
I've installed kafka-python (1.3.4).
My sample program is dead simple:
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
producer.send('my-topic', 'Another message')
But it croaks with the following message:
Traceback (most recent call last):
File "produce.py", line 3, in <module>
producer = KafkaProducer(bootstrap_servers=['localhost:9092'])
File "/Library/Python/2.7/site-packages/kafka/producer/kafka.py", line 347, in __init__
**self.config)
File "/Library/Python/2.7/site-packages/kafka/client_async.py", line 220, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/Library/Python/2.7/site-packages/kafka/client_async.py", line 861, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
Ideas? Any assistance appreciated.

Please ensure that you have this setting defined in the server.properties file:
advertised.listeners=PLAINTEXT://your.host.name:9092
It may be that host name resolution is returning some other host name; by default Kafka uses java.net.InetAddress.getCanonicalHostName().
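For a local development setup like the one in the question, a minimal sketch of the relevant config/server.properties lines might look like this (the localhost value is an assumption; use whatever name your clients can actually resolve):
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092
With that in place, the broker advertises an address that matches the producer's bootstrap_servers=['localhost:9092'].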

If you're using the wurstmeister/kafka Docker image, note that in recent Kafka versions many parameters have been deprecated.
Instead of using:
KAFKA_HOST:
KAFKA_PORT: 9092
KAFKA_ADVERTISED_HOST_NAME: <IP-ADDRESS>
KAFKA_ADVERTISED_PORT: 9092
you need to use:
KAFKA_LISTENERS: PLAINTEXT://:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<IP-ADDRESS>:9092
See the wurstmeister/kafka documentation for more details.
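As a rough sketch, a docker-compose service using those variables might look like the following (the service layout and the zookeeper host name are assumptions; <IP-ADDRESS> is the placeholder from above):
kafka:
  image: wurstmeister/kafka
  ports:
    - "9092:9092"
  environment:
    KAFKA_LISTENERS: PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<IP-ADDRESS>:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181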


openbsd httpd with gunicorn+uvicorn -- Remote protocol error : illegal request line

My web deployment setup is on OpenBSD and consists of httpd in front, with gunicorn + uvicorn as the back end, connected via a unix socket.
The setup works, in the sense that requests from httpd are being forwarded to gunicorn over the unix socket. However, gunicorn/uvicorn is not able to understand the incoming HTTP request.
The error stack
[2021-11-22 22:52:17 +0530] [1631] [WARNING] Invalid HTTP request received.
Traceback (most recent call last):
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 136, in handle_events
event = self.conn.next_event()
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/h11/_connection.py", line 443, in next_event
exc._reraise_as_remote_protocol_error()
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/h11/_util.py", line 76, in _reraise_as_remote_protocol_error
raise self
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/h11/_connection.py", line 425, in next_event
event = self._extract_next_receive_event()
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/h11/_connection.py", line 367, in _extract_next_receive_event
event = self._reader(self._receive_buffer)
File "/home/shared/Builds/Python-3.10.0/lib/python3.10/site-packages/h11/_readers.py", line 68, in maybe_read_from_IDLE_client
raise LocalProtocolError("illegal request line")
h11._util.RemoteProtocolError: illegal request line
I am not sure what the potential causes of an "illegal request line" are.
httpd does not support HTTP proxying.
It supports serving static files as well as FastCGI. The error message indicates that your httpd is trying to talk to gunicorn using FastCGI, a binary protocol, so the bytes h11 receives do not parse as an HTTP request line.
So, if you stick with httpd, find a way to run your app under a FastCGI server instead of a WSGI server (gunicorn); a sketch is below. Many years ago flup was a popular choice.
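A minimal sketch of that approach, assuming a flup-style FastCGI server (the package, the trivial app, and the socket path are assumptions; flup6 provides this API on Python 3):
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    # Trivial WSGI app standing in for the real application.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from FastCGI\n']

if __name__ == '__main__':
    # Listen on the unix socket that httpd's fastcgi directive points at.
    WSGIServer(app, bindAddress='/run/myapp.sock').run()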
Or, just use Nginx instead of httpd.

python: To test sending logs to Syslog Server

Please help me: how do I send Python script logs to a syslog server (the syslog-ng product)? I have already tried the method below; it has two approaches, one with SysLogHandler and the other with SocketHandler.
import logging
import logging.handlers
import socket
my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(address=('10.10.11.11', 611), socktype=socket.SOCK_STREAM)
#handler = logging.handlers.SocketHandler('10.10.11.11', 611)
my_logger.addHandler(handler)
my_logger.debug('this is debug')
my_logger.critical('this is critical')
Result for SysLogHandler:
[ansible#localhost ~]$ python test.py
Traceback (most recent call last):
File "test.py", line 8, in <module>
handler = logging.handlers.SysLogHandler(address=('10.10.11.11', 611), socktype=socket.SOCK_STREAM)
File "/usr/lib64/python3.6/logging/handlers.py", line 847, in __init__
raise err
File "/usr/lib64/python3.6/logging/handlers.py", line 840, in __init__
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused #THIS IS OK for me since server unreachable.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib64/python3.6/logging/__init__.py", line 1946, in shutdown
h.close()
File "/usr/lib64/python3.6/logging/handlers.py", line 894, in close
self.socket.close()
AttributeError: 'SysLogHandler' object has no attribute 'socket'
Result for SocketHandler: no output. I am not sure whether it is working or not.
I am not sure what the proper approach is for sending logs to a syslog server over a TCP port; I have tried both SysLogHandler and SocketHandler.
SysLogHandler:
With SysLogHandler I get ConnectionRefusedError because my remote server is unreachable; I will probably wrap it in try..except. But I am not sure why I am also getting AttributeError: 'SysLogHandler' object has no attribute 'socket'.
SocketHandler:
The Python logging handlers documentation says this class is used to send logs to a remote host over TCP, but I can't see any output, and I am not sure whether this is the correct approach for sending logs to a syslog server.
Please help. Thanks in advance.
This is not a proper solution, but if it's not absolutely necessary to use TCP, I would advise you to use UDP; it solved a lot of my issues with SysLogHandler. TCP seemed to want to bundle messages together and, ironically, it lost some messages in the batching process.
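A minimal UDP variant of the question's snippet, as a sketch (the address and port are the question's values; 514 would be the conventional syslog port if your server uses the default). Note that SocketHandler is not a drop-in alternative here: it sends pickled LogRecord objects rather than syslog-formatted lines, which is why syslog-ng shows nothing useful from it.
import logging
import logging.handlers
import socket

my_logger = logging.getLogger('MyLogger')
my_logger.setLevel(logging.DEBUG)

# SOCK_DGRAM (UDP) is SysLogHandler's default transport.
handler = logging.handlers.SysLogHandler(
    address=('10.10.11.11', 611), socktype=socket.SOCK_DGRAM)
my_logger.addHandler(handler)

my_logger.debug('this is debug')
my_logger.critical('this is critical')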
When I tried this on my computer with a syslog server that was up and listening on port 611, it didn't raise an AttributeError on SysLogHandler, so you should probably set up a server listening on that port to fix that. (The AttributeError at exit is a side effect of the first failure: SysLogHandler.__init__ raised before self.socket was ever assigned, and logging.shutdown later calls close() on the half-initialized handler.)

Python opc-ua communication using self signed certificate and basic128rsa15 encryption

I want to communicate via the python opcua library with an OPC UA server that uses Basic128Rsa15 encryption.
client.set_security_string("Basic128Rsa15,"
"SignAndEncrypt,"
"cert.pem,"
"key.pem")
I did the same communication with a Prosys server using Basic256Sha256 encryption and all was OK. With Basic128Rsa15 (using KEPServer) I get the following error:
In [19]: runfile('opcuaclient.py', wdir='/home/di29394/fue4bfi/python/fuere4bfi')
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Received an error: MessageAbort(error:StatusCode(BadSecurityChecksFailed), reason:An error occurred verifying security.)
Protocol Error
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 101, in _run
self._receive()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 121, in _receive
self._call_callback(0, ua.UaStatusCodeError(msg.Error.value))
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 131, in _call_callback
.format(request_id, self._callbackmap.keys())
opcua.ua.uaerrors._base.UaError: No future object found for request: 0, callbacks in list are
Traceback (most recent call last):
File "<ipython-input-18-4187edd51b2b>", line 1, in <module>
runfile('opcuaclient.py', wdir='/home/opcuauser')
File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "/usr/lib/python3/dist-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "opcuaclient.py", line 57, in <module>
connected = client.connect()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 259, in connect
self.open_secure_channel()
File "/usr/local/lib/python3.6/dist-packages/opcua/client/client.py", line 309, in open_secure_channel
result = self.uaclient.open_secure_channel(params)
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 265, in open_secure_channel
return self._uasocket.open_secure_channel(params)
File "/usr/local/lib/python3.6/dist-packages/opcua/client/ua_client.py", line 199, in open_secure_channel
response = struct_from_binary(ua.OpenSecureChannelResponse, future.result(self.timeout))
File "/usr/lib/python3.6/concurrent/futures/_base.py", line 430, in result
raise CancelledError()
CancelledError
The certificate was self-signed using the cryptography library (snippet; note the closing parenthesis that completes the builder chain):
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(1000)
    .not_valid_before(now)
    .not_valid_after(now + timedelta(days=10*365))  # could also be made dynamic
    .add_extension(basic_contraints, False)
    .add_extension(san, False)
    .sign(key, hashes.SHA256(), default_backend())
)
Do I have to change the certificate generation to match Basic128Rsa15, or is something else wrong?
Thanks in advance.
I didn't feel good about using Basic128Rsa15, but obviously that was not the problem. The problem was that I had connected to the KEPServer at least twice with different certificates but the same (valid) URI. The server had trouble with this, so it rejected all incoming connections (the error message turns out not to be very helpful). After deleting all requests on the server and connecting again, everything was fine (even with Basic128Rsa15).
The error message is actually quite clear!
DEPRECATED! Do not use SecurityPolicyBasic128Rsa15 anymore!
Basic128Rsa15 is no longer considered secure by the OPC Foundation, which recommends deprecating it.
Source: http://opcfoundation-onlineapplications.org/ProfileReporting/index.htm?ModifyProfile.aspx?ProfileID=a84d5b70-47b2-45ca-a0cc-e98fe8528f3d
There might be an option to still use it with KEPServerEX, but I would not recommend it for anything other than testing.
Note: Basic256 is also considered obsolete by the OPC Foundation; the minimum recommended OPC UA security policy is therefore Basic256Sha256.
Some OPC UA clients and servers already support the newer, more secure security policies:
Aes128Sha256RsaOaep
Aes256Sha256RsaPss
I used the following line:
client.set_security_string("Basic256Sha256,SignAndEncrypt,xxxxx.der,xxxxx.pem")
Please try this.
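For context, a minimal python-opcua connection sketch using that security string (the endpoint URL and certificate file names are assumptions):
from opcua import Client

client = Client("opc.tcp://localhost:49320")  # e.g. a local KEPServerEX endpoint
client.set_security_string("Basic256Sha256,SignAndEncrypt,cert.der,key.pem")
client.connect()
try:
    print(client.get_root_node())  # quick sanity check that the session works
finally:
    client.disconnect()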

How do I troubleshoot spur failing to make an SSH connection in python?

I have two nearly identical devices. spur will connect via ssh with one, but not the other. How do I figure out why?
>>> shell1 = spur.SshShell('10.201.140.242', 'username', 'password', missing_host_key=spur.ssh.MissingHostKey.accept)
>>> results = shell1.run(['ls', '-a'])
>>> results.output
'.\n..\n.aptitude\n.bashrc\n.cache\n.config\n.profile\n'
>>> shell2 = spur.SshShell('10.201.129.56', 'username', 'password', missing_host_key=spur.ssh.MissingHostKey.accept)
>>> results = shell2.run(['ls', '-a'])
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 166, in run
return self.spawn(*args, **kwargs).wait_for_result()
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 178, in spawn
channel = self._get_ssh_transport().open_session()
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 268, in _get_ssh_transport
raise self._connection_error(error)
ConnectionError: Error creating SSH connection
Original error: ('10.201.129.56', <paramiko.ecdsakey.ECDSAKey object at 0x11328070>, <paramiko.ecdsakey.ECDSAKey object at 0x1135F350>)
I'm confused by the error message. What is returning the IP and two key objects supposed to mean? Is there helpful information here that I'm supposed to glean from it?
Both devices will accept ssh connections from the command line, so that sets aside an obvious problem.
Both are running the same version of Ubuntu, with the same login credentials. Home directories are even the same (no .ssh dir). Even further, both of their sshd_config files are identical (so, also using the same version among other config options).
The problem doesn't seem to be in the ssh settings, but the error gives no indication of where the problem could be!
Any ideas?
Enabling logging doesn't add much.
shell1:
11:32:11|[ INFO] - paramiko.transport - _log - Connected (version 2.0, client OpenSSH_5.9p1)
11:32:11|[ INFO] - paramiko.transport - _log - Authentication (password) successful!
shell2:
11:32:25|[ INFO] - paramiko.transport - _log - Connected (version 2.0, client OpenSSH_5.9p1)
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 166, in run
return self.spawn(*args, **kwargs).wait_for_result()
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 178, in spawn
File "E:\development\virtenv\lib\site-packages\spur\ssh.py", line 268, in _get_ssh_transport
raise self._connection_error(error)
ConnectionError: Error creating SSH connection
Original error: ('10.201.129.56', <paramiko.ecdsakey.ECDSAKey object at 0x1132EF10>, <paramiko.ecdsakey.ECDSAKey object at 0x11366DF0>)
It might be telling that SSH reports a connection: the failure occurs before or during authentication. But as I said above, the passwords are the same -- both connections even use the same copy-pasted password, which works without error for a command-line connection.
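One way to narrow this down is to drop to plain paramiko (which spur wraps) and watch where it fails; a sketch, assuming the same credentials as above. Note that the (ip, key, key) tuple in the error matches the arguments of paramiko's BadHostKeyException(hostname, got_key, expected_key), which would point at a cached host key that no longer matches what the second device presents.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
# Roughly mirror spur's MissingHostKey.accept behaviour for unknown hosts.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('10.201.129.56', username='username', password='password')
stdin, stdout, stderr = client.exec_command('ls -a')
print(stdout.read())
client.close()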

Pika blocking_connection.py random timeout connecting to RabbitMQ

I have a RabbitMQ broker running on a machine.
Both the client and RabbitMQ are running on the same network.
RabbitMQ has many clients.
I can ping the client from RabbitMQ and back.
The longest latency measured between the machines is 12.1 ms.
Network details: standard switch network (a network of virtual machines running on a single physical machine, using VMware vCenter).
I'm getting random timeouts when initializing an RPC connection in
/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py
The problem is that the timeout isn't consistent and happens from time to time.
When I manually test this issue and run blocking_connection.py 1000 times from the same machine where it fails, no timeout occurs.
This is the error I get when it fails:
2013-04-23 08:24:23,396 runtest-trigger.24397 24397 DEBUG producer_rabbit initiate_rpc_connection Connecting to RabbitMQ RPC queue rpcqueue_java on host: auto-db1
2013-04-23 08:24:25,350 runtest-trigger.24397 24397 ERROR testrunner go Run 1354: cought exception: timed out
Traceback (most recent call last):
File "/testrunner.py", line 193, in go
self.set_runparams(jobid)
File "/testrunner.py", line 483, in set_runparams
self.runparams.producers_testrun = self.initialize_producers_testrun(self.runparams)
File "/basehandler.py", line 114, in initialize_producers_testrun
producer.set_testcase_checkout()
File "/baseproducer.py", line 73, in set_testcase_checkout
self.checkout_handler = pm_checkout.get_producer(self.testcasecheckout)
File "/producer_manager.py", line 101, in get_producer
producer = self.load_producer(plugin_dir, producer_name)
File "/producer_manager.py", line 20, in load_producer
producer = getattr(producer_module, 'Producer')(producer_name, self.runparams)
File "/producer_rabbit.py", line 13, in __init__
self.initiate_rpc_connection()
File "/producer_rabbit.py", line 67, in initiate_rpc_connection
self.connection = pika.BlockingConnection(pika.ConnectionParameters( host=self.conf.rpc_proxy))
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py", line 32, in __init__
BaseConnection.__init__(self, parameters, None, reconnection_strategy)
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/base_connection.py", line 50, in __init__
reconnection_strategy)
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/connection.py", line 170, in __init__
self._connect()
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/connection.py", line 228, in _connect
self.parameters.port or spec.PORT)
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/blocking_connection.py", line 44, in _adapter_connect
self._handle_read()
File "/usr/lib/python2.6/site-packages/pika-0.9.5-py2.6.egg/pika/adapters/base_connection.py", line 151, in _handle_read
data = self.socket.recv(self._suggested_buffer_size)
timeout: timed out
Please assist.
I had a similar issue. If everything looks fine, then you most likely have some sort of misconfiguration, e.g. a bad binding. If it's misconfigured, you'll get a timeout because the script can't reach where it thinks it needs to go, so the error can be misleading in this case.
For my problem, I specifically had issues with both my rabbitmq.config file and my bindings, and I had to use my Python solution from "RabbitMQ creating queues and bindings from a command line" rather than the command-line example I showed there; a sketch of that idea is below. Once updated and configured properly, everything worked fine. Hopefully this gets you in the right direction.
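A minimal sketch of declaring the queue and binding explicitly with pika, so that a missing binding cannot be the silent cause (the exchange and routing key names are assumptions, the queue name comes from the log above, and the keyword arguments match pika 1.x):
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='auto-db1'))
channel = connection.channel()

# Declare everything idempotently so the RPC client never races setup.
channel.exchange_declare(exchange='rpc_exchange', exchange_type='direct')
channel.queue_declare(queue='rpcqueue_java')
channel.queue_bind(queue='rpcqueue_java', exchange='rpc_exchange',
                   routing_key='rpc')
connection.close()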
Pika can hit timeouts when connecting to different hosts. The solution is to pass a socket_timeout argument in the connection parameters. Pika should also be upgraded to >= 0.9.14.
import pika

credentials = pika.PlainCredentials(RABBITMQ_USER, RABBITMQ_PASS)
connection = pika.BlockingConnection(pika.ConnectionParameters(
    credentials=credentials,
    host=RABBITMQ_HOST,
    socket_timeout=300))  # seconds to wait on the socket before timing out
channel = connection.channel()
