I installed Kafka on my Raspberry Pi and tested it on the 'hello-kafka' topic:
~ $ /usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic hello-kafka
>Test message 1
>Test message 2
>Test message 3
>^Z
[4]+ Stopped /usr/local/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic hello-kafka
$ /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello-kafka --from-beginning
Test message 1
Test message 2
Test message 3
^CProcessed a total of 3 messages
Then I checked that the server is reachable from another machine.
Checking ZooKeeper:
(venv)$ telnet 192.168.1.10 2181
Trying 192.168.1.10...
Connected to 192.168.1.10.
Escape character is '^]'.
srvr
Zookeeper version: 3.6.0--b4c89dc7f6083829e18fae6e446907ae0b1f22d7, built on 02/25/2020 14:38 GMT
Latency min/avg/max: 0/0.8736/59
Received: 10146
Sent: 10149
Connections: 2
Outstanding: 0
Zxid: 0x96
Mode: standalone
Node count: 139
Connection closed by foreign host.
And Kafka:
(venv) $ telnet 192.168.1.10 9092
Trying 192.168.1.10...
Connected to 192.168.1.10.
Escape character is '^]'.
tets
Connection closed by foreign host.
Then I wrote a Python script:
# -*- coding: utf-8 -*-
from confluent_kafka import Producer

def callback(err, msg):
    if err is not None:
        print(f'Failed to deliver message: {str(msg)}: {str(err)}')
    else:
        print(f'Message produced: {str(msg)}')

config = {
    'bootstrap.servers': '192.168.1.10:9092'
}

producer = Producer(config)
producer.produce('hello-kafka', value=b"Hello from Python", callback=callback)
producer.poll(5)
Here is the script output (no prints at all):
(venv) $ python kafka-producer.py
(venv) $ python kafka-producer.py
(venv) $
And there are no new messages in Kafka:
$ /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello-kafka --from-beginning
Test message 1
Test message 2
Test message 3
^CProcessed a total of 3 messages
$ ^C
Can somebody tell me what I am doing wrong?
The correct fix is to update your broker configuration in server.properties so the advertised listener is set correctly. If your client cannot resolve raspberrypi, then change the advertised listener to something your client can reach, i.e. the IP address:
advertised.listeners=PLAINTEXT://192.168.1.10:9092
Changing the /etc/hosts file on your client is a workaround; for a test project with a Raspberry Pi it is fine, but as a general practice it should be discouraged, because the client will break as soon as it is moved to another machine that doesn't have the /etc/hosts hack.
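One way to confirm the fix from the client machine is to ask the broker for its metadata, since the broker addresses it returns are exactly the advertised listeners the client will dial. A minimal sketch using the same confluent-kafka library as the question:
from confluent_kafka.admin import AdminClient

# Fetch cluster metadata; the broker host:port printed here is the
# advertised listener, so it must be resolvable/reachable from here.
admin = AdminClient({'bootstrap.servers': '192.168.1.10:9092'})
metadata = admin.list_topics(timeout=10)
for broker in metadata.brokers.values():
    print(broker)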
I turned on logging and saw the following message:
WARNING:kafka.conn:DNS lookup failed for raspberrypi:9092, exception was [Errno 8] nodename nor servname provided, or not known. Is your advertised.listeners (called advertised.host.name before Kafka 9) correct and resolvable?
ERROR:kafka.conn:DNS lookup failed for raspberrypi:9092 (AddressFamily.AF_UNSPEC)
Then I added the following line to /etc/hosts on the client machine:
192.168.1.10 raspberrypi
And that completely fixed the situation.
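For anyone hitting the same wall: the failing lookup can be reproduced outside Kafka with a couple of lines of standard-library Python, a quick check that makes the advertised-listener problem obvious:
import socket

# If this raises socket.gaierror, the client cannot resolve the hostname
# the broker advertises, and every Kafka connection attempt will fail.
try:
    print(socket.gethostbyname('raspberrypi'))
except socket.gaierror as e:
    print('DNS lookup failed:', e)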
Related
I have been struggling with a strange case of random client MQTT publishes failing for certain payloads. It happens randomly when trying to publish a large amount of BASE64 data.
I've finally managed to narrow it down to payloads containing a lot of consecutive forward slashes (/). I've searched the net for a good reason why this happens, but haven't found anything. Is it an MQTT feature, a Paho client feature, a broker feature, or just some bug...
Setup:
Python 3.8.8 (Windows 10)
paho-mqtt 1.5.0
mosquitto 1.6.9-1 amd64
On my setup, it fails when I send a payload of 255 '/' characters to the one-character topic 'a'. A longer topic name reduces the number of forward slashes that can be sent.
Code to reproduce error:
import paho.mqtt.client as mqtt_client
import time

address = 'some.server.com'
port = 1883
connected = False

def on_connect(client, userdata, flags, rc):
    global connected
    connected = True
    print("Connected!")

client = mqtt_client.Client()
client.on_connect = on_connect
client.connect(host=address, port=port, keepalive=60)
client.loop_start()

while not connected:
    time.sleep(1)

payload = '/'*205
print('Payload: {}'.format(payload))
client.publish(topic='a', payload=payload)

time.sleep(2)
client.loop_stop()
client.disconnect()
print('Done!')
This generates the following output:
Connected!
Payload: /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Connected!
Done!
This produces the following error in /var/log/mosquitto/mosquitto.log for the mosquitto broker:
1616605010: New connection from a.b.c.d on port 1883.
1616605010: New client connected from a.b.c.d as auto-CEF15129-E74C-F00A-A6FA-5B5FDA0CEF1D (p2, c1, k60).
1616605011: Socket error on client auto-CEF15129-E74C-F00A-A6FA-5B5FDA0CEF1D, disconnecting.
1616605012: New connection from a.b.c.d on port 1883.
1616605012: New client connected from a.b.c.d as auto-0149B6DB-5997-9E08-366A-304F21FDF2E1 (p2, c1, k60).
1616605013: Client auto-0149B6DB-5997-9E08-366A-304F21FDF2E1 disconnected.
I observe that the client connects twice, but I do not know why; it is probably caused by a disconnect...
Any Ideas?
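To narrow down the exact failing length, one could sweep payload sizes with QoS 1 so every publish must be acknowledged by the broker. A rough sketch (the 200-260 range is just a guess around the threshold above; a missing ack marks the breaking length):
import time
import paho.mqtt.client as mqtt_client

acked = set()

def on_publish(client, userdata, mid):
    acked.add(mid)  # record every message id the broker acknowledged

client = mqtt_client.Client()
client.on_publish = on_publish
client.connect('some.server.com', 1883, 60)
client.loop_start()

# With QoS 1 the broker must send a PUBACK for each publish, so a
# missing ack marks the payload length at which the connection breaks.
for n in range(200, 260):
    info = client.publish(topic='a', payload='/' * n, qos=1)
    time.sleep(0.5)
    print(n, 'ok' if info.mid in acked else 'LOST')

client.loop_stop()
client.disconnect()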
Update 1: I've tested this on Linux (Ubuntu) running Python 3.7.3 with the same paho-mqtt version, and it does not produce the same error... Seems like a Windows-specific problem then.
Update 2:
I also tried running mosquitto_pub and got the same error, so this has to be Windows-related (or system-related) in some way. Possibly the firewall? I will close the question if I manage to solve this.
"C:\Program Files\mosquitto\mosquitto_pub.exe" -h some.server.com -t a -m '/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////'
Update 3: The issue is related to OpenVPN. I closed my OpenVPN connection and the MQTT messages went through! I'm running OpenVPN Client for Windows, version 3.2.2 (1455). I have no idea what causes this conflict...
I have this consumer code on the host machine:
from kafka import KafkaConsumer

KAFKA_HOSTS = 'divolte-kafka:9092'
KAFKA_VERSION = (0, 11, 5)
topic = "csptest"

consumer = KafkaConsumer(topic, bootstrap_servers=KAFKA_HOSTS, api_version=KAFKA_VERSION)
for msg in consumer:
    print(msg)
Kafka is installed in Docker with this configuration:
version: "3.3"
services:
# Kafka/Zookeeper container
divolte-kafka:
image: krisgeus/docker-kafka
container_name: divolte-kafka
environment:
ADVERTISED_HOST: divolte-kafka
KAFKA_ADVERTISED_HOST_NAME: 192.168.65.0
LOG_RETENTION_HOURS: 1
AUTO_CREATE_TOPICS: "false"
KAFKA_CREATE_TOPICS: divolte:4:1
ADVERTISED_LISTENERS: OUTSIDE://divolte-kafka:9092,INTERNAL://localhost:9093
LISTENERS: OUTSIDE://0.0.0.0:9092,INTERNAL://0.0.0.0:9093
SECURITY_PROTOCOL_MAP: OUTSIDE:PLAINTEXT,INTERNAL:PLAINTEXT
INTER_BROKER: INTERNAL
ports:
- 9092:9092 # kafka broker
expose:
- "9092"
networks:
- divolte.io
When I run the producer and consumer inside the container as follows, it works. But when I start the producer and read the topic "csptest" from the Python consumer code on the host machine, I get no messages (nothing gets printed). Thanks for helping me out.
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic csptest
# producer
./kafka-console-producer.sh --broker-list localhost:9092 --topic csptest
> dd
> hi
> jhj
# consumer
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic csptest --group topic_group
> dd
> hi
> jhj
Your shell scripts work while you're inside the container, but that's not going to help you test code running outside of Docker.
You seem to have the internal and external listeners flipped.
From the host, you want to connect on the internal listener, so you need to expose port 9093 and connect on localhost:9093, not use the Kafka container's service name.
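Assuming port 9093 is published in the compose file (an extra "9093:9093" under ports), a host-side consumer sketch would then look like this, since the INTERNAL listener advertises localhost:9093:
from kafka import KafkaConsumer

# localhost:9093 is the address the INTERNAL listener advertises, and it
# is reachable from the host once Docker publishes port 9093.
consumer = KafkaConsumer(
    'csptest',
    bootstrap_servers='localhost:9093',
    api_version=(0, 11, 5),
    auto_offset_reset='earliest',  # also read messages produced earlier
)
for msg in consumer:
    print(msg.value)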
I am creating an application where I need to send mail for some particular logs.
Here is my rule file:
es_host: localhost
es_port: 9200
name: Log Level Test
type: frequency
index: testindexv4
num_events: 1
timeframe:
  hours: 4
filter:
- term:
    log_level.keyword: "ERROR"
- query:
    query_string:
      query: "log_level.keyword: ERROR"
alert:
- "email"
email:
- "<mailId>@gmail.com"
Here is the config.yaml:
rules_folder: myrules
run_every:
  seconds: 2
buffer_time:
  seconds: 10
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 2
Here is smtp_auth.yaml:
alert:
- email
email:
- "<mailId>@gmail.com"
smtp_host: "smtp.gmail.com"
smtp_port: 587
smtp_ssl: true
from_addr: "<mailId>@gmail.com"
smtp_auth_file: 'D:\ELK_Info\ElastAlert\elastalert-master\smtp_auth_user.yaml'
Here is smtp_auth_user.yaml:
user: "<mailId>#gmail.com"
password: "<password>"
When I run this command:
python -m elastalert.elastalert --verbose --rule myrules\myrule.yaml
I get this error:
ERROR:root: Error while running alert email: Error connecting to SMTP host: [Errno 10061] No connection could be made because the target machine actively refused it.
Any idea how to resolve this, please?
Try checking the following link, please:
https://stackoverflow.com/a/36532619/5062759
From my understanding, it's not recommended AT ALL to use Gmail to send emails out. There's a limit, so if you're using it for production services (especially logs) you'll hit the cap quickly. Amazon's SES system gives developer credits, I believe, so you can tinker with that; or if you really like Google, you could use: https://cloud.google.com/appengine/docs/standard/go/mail/.
The sendemail command by default attempts to use localhost as the SMTP server and ignores the settings used for scheduled search alerts. If you do not have an SMTP server or forwarder installed (which on Windows is quite likely), sendemail will fail when trying to connect to localhost.
To work around this, specify server in sendemail as follows:
my search terms | sendemail to=foo@bar.com sendresults=true server=mail.bar.com
Can you try setting smtp_host in smtp_auth.yaml to a local SMTP host (email server)? Your admin can help you figure one out.
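Independent of ElastAlert, a few lines of standard-library Python will show whether the SMTP host/port is reachable at all; if this raises the same [Errno 10061], the problem is connectivity rather than the rule file. (Note that port 587 expects STARTTLS, while smtp_ssl: true implies implicit SSL, which Gmail serves on port 465.)
import smtplib

# ConnectionRefusedError here reproduces [Errno 10061]: nothing is
# listening on that host/port from this machine's point of view.
with smtplib.SMTP('smtp.gmail.com', 587, timeout=10) as server:
    server.ehlo()
    server.starttls()  # 587 is the STARTTLS port, not implicit SSL
    server.ehlo()
    print('SMTP connection OK')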
I am running Telegram CLI as a daemon on localhost.
I need to forward messages from my current session while keeping the original msg_id.
To do that, I use the following bash command:
echo -e 'fwd peer msg_id' | nc.traditional -w 1 127.0.0.1 1234
The problem is that when the command is sent, it's as if a new session is opened, and it can't detect the correct msg_id of the running daemon:
ANSWER 26
FAIL: 22: unknown message
How can I externally send a message to my daemon while using the msg_ids from the current session?
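For completeness, the same one-shot command can be sent from Python instead of nc; this is only a sketch (peer and msg_id remain placeholders, as above), and it gets the same FAIL reply, since each raw connection is still treated as a fresh session:
import socket

# 'peer' and 'msg_id' are placeholders, exactly as in the bash version.
command = b'fwd peer msg_id\n'

with socket.create_connection(('127.0.0.1', 1234), timeout=1) as s:
    s.sendall(command)
    # May raise socket.timeout if the daemon stays silent.
    print(s.recv(4096).decode(errors='replace'))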
I've implemented a basic remote syslog server in Python with the following code:
self.UDPsock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.UDPsock.bind(self.addr)
self.UDPsock.settimeout(1)

while self.carryOn:
    try:
        data = self.UDPsock.recv(self.buf)
        print data
    except socket.timeout:
        pass
I'm using this to receive log messages from my router (Tomato Shibby v108). I'm particularly interested in intercepting messages from my mobile so that I can create a "presence" script.
I originally tried the following iptables entry for testing:
iptables -I FORWARD -s 192.168.2.54 -m limit --limit 1/minute --limit-burst 1 -j LOG
This worked as expected and I would receive messages such as:
<12>Apr 1 21:51:47 kernel: IN=br0 OUT=ppp0 SRC=192.168.2.54 DST=17.158.8.77 LEN=70 TOS=0x00 PREC=0x00 TTL=63 ID=23055 DF PROTO=TCP SPT=60779 DPT=993 WINDOW=65535 RES=0x00 ACK PSH URGP=0 MARK=0x5
However, I don't want to rely on a static IP, so I changed the iptables filter to trigger on the MAC address:
iptables -t raw -A PREROUTING -m mac --mac-source SOURCE_MAC -m limit --limit 1/minute --limit-burst 1 -j LOG --log-ip-options
The problem here was that I now received >50 messages per log entry, all duplicates of the form:
<12>Apr 1 19:54:00 kernel: IN=br0 OUT= MAC=DEST_MAC:SOURCE_MAC:08:00 SRC=192.168.2.54 DST=224.0.0.251 LEN=101 TOS=0x00 PREC=0x00 TTL=255 ID=36530 PROTO=UDP SPT=5353 DPT=5353 LEN=81
When I changed the filter to:
iptables -t raw -A PREROUTING -m mac --mac-source SOURCE_MAC -m limit --limit 1/minute --limit-burst 1 -j LOG
It reduced the number of duplicates to 4:
<12>Apr 2 12:21:55 kernel: IN=br0 OUT= MAC=DEST_MAC:SOURCE_MAC:08:00 SRC=192.168.2.54 DST=224.0.0.251 LEN=101 TOS=0x00 PREC=0x00 TTL=255 ID=1384 PROTO=UDP SPT=5353 DPT=5353 LEN=81
Can anyone offer any insight into why this is happening? I'm assuming there is some sort of "funny" character causing this. Can I alter either the iptables entry or the Python program so that I only receive a single log entry per message?
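If the iptables side can't be tamed, the receiver can at least suppress the duplicates itself. A minimal sketch (standalone Python 3 rather than the class method above, and assuming the duplicates are byte-identical within a short window):
import socket
import time

DEDUP_WINDOW = 2.0  # seconds to remember recently seen messages

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 514))  # standard syslog port; adjust as needed
sock.settimeout(1)

recent = {}  # message bytes -> timestamp of last sighting

while True:
    try:
        data = sock.recv(4096)
    except socket.timeout:
        continue
    now = time.time()
    # Forget messages older than the window, then skip fresh duplicates.
    recent = {m: t for m, t in recent.items() if now - t < DEDUP_WINDOW}
    if data in recent:
        continue
    recent[data] = now
    print(data.decode(errors='replace'))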