Joining Twitch IRC through the Helix API (5000+ channels): connection reset error - Python

To start off, I'm using an anonymous connection to join the channels, which means there are no JOIN limits. I have tried different variations of sleeping. I started off just joining from a text file, but that had a lot of problems because it was connecting all the sockets before joining, so I couldn't see what caused the failures. This is the best version I have created so far; it's pretty scuffed, but I am just trying to understand what the issue is. If anyone has any insight on doing a big task like this, I would appreciate it a lot!
(The OAuth and Helix headers are from a throwaway alt account I made for testing. The example tries to join 10k channels but stops around 2k-3k max.)
import requests
import socket
import time
import threading
import random

connections_made = 0
sockets = []

def connect():
    global sockets
    global connections_made
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("CONNECTING TO IRC")
    sock.connect(('irc.chat.twitch.tv', 6667))
    sock.send(bytes('PASS oauth:' + '\r\n', 'utf-8'))
    sock.send(bytes('NICK justinfan' + str(random.randint(10000, 99999)) + '\r\n', 'utf-8'))
    sockets.append(sock)
    connections_made += 1
    print(f"socket: {len(sockets)}")

for i in range(2):
    connect()  # initial connections for .recv reading
helix_headers = {'client-id': 'q6batx0epp608isickayubi39itsckt', 'authorization': 'Bearer rk0ixn6169ar7y5xey9msvk1h8zrs8'}
def request(channels_to_join, cursor):
    request_amount = int(channels_to_join / 100)  # 100 requests = 10000 channels
    user_list = []
    sock_numb = 0
    total_chans_joined = 0
    count_every_request = 0
    for i in range(request_amount):
        time.sleep(1)
        # 3k channels with time.sleep(1), 1.5k channels with time.sleep(2), then after
        # 30 seconds a connection reset error (when bulk joining 100 channels and
        # waiting for the next request)
        # waiting 30 seconds doesn't fix this either: stops at about 500 channels, so it lasted 2.5 minutes?
        # waiting 60 seconds breaks at 500 channels
        if count_every_request == 1:  # for every 100 channels
            connect()
            count_every_request = 0
        r = requests.get("https://api.twitch.tv/helix/streams?first=100&after=" + cursor, headers=helix_headers)
        cursor = r.json()['pagination']['cursor']
        count_every_request += 1
        for everything in r.json()['data']:
            user_list.append(everything['user_login'])
            channel = everything['user_login']
            # join channel
            if sock_numb == connections_made:  # loop back to the first socket once every socket has been used
                sock_numb = 0
            print(f"JOINING #{channel} with socket: {sock_numb} total joined: {total_chans_joined}")
            sockets[sock_numb].send(bytes('JOIN #' + channel + '\r\n', 'utf-8'))
            total_chans_joined += 1
            sock_numb += 1
def loop():
    print("Looping")
    try:
        while True:
            time.sleep(0.1)
            for i in range(connections_made):
                data = sockets[i].recv(4096).decode("utf-8", errors='replace').strip()
                if data == "":
                    continue
                print(data)
                if "PING :tmi.twitch.tv" in data:
                    print("PONG")
                    sockets[i].send(bytes('PONG :tmi.twitch.tv' + '\r\n', 'utf-8'))
    except Exception as e:
        print(str(e) + " error in loop")
thread_loop = threading.Thread(target=loop)
thread_loop.start()
request(channels_to_join=10000,cursor = "eyJiIjp7IkN1cnNvciI6ImV5SnpJam80T0RrMU1TNDVNRFkwTWpnd09URTVNU3dpWkNJNlptRnNjMlVzSW5RaU9uUnlkV1Y5In0sImEiOnsiQ3Vyc29yIjoiZXlKeklqbzFNakF6TGpJM056UTFPVEUzT1RReE1Td2laQ0k2Wm1Gc2MyVXNJblFpT25SeWRXVjkifX0")

The likely problem is that your bot can't keep up with the message send buffer.
You connect to many channels but are not processing the incoming chat messages in a timely fashion, so the queue of messages Twitch has to send you exceeds Twitch's buffer, and it disconnects you.
Alternatively, as per the IRC rate-limit guide, you are sending too many commands and getting disconnected from the server.
Large chat bots will often split groups of channels over multiple connections to solve this issue.
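As a rough sketch of that splitting approach (the per-connection cap and JOIN pacing below are assumptions to tune, not documented limits): cap each connection at a fixed number of channels, pace the JOINs, and give every socket its own reader thread, so one slow connection can't stall the rest the way a round-robin blocking recv() does.

import socket
import threading
import time

CHANNELS_PER_CONN = 50  # assumption: a conservative per-connection cap
JOIN_DELAY = 0.6        # assumption: pacing to stay under the JOIN rate limit

def reader(sock):
    # Drain this connection only, answering PINGs so Twitch keeps it alive.
    buf = b''
    while True:
        data = sock.recv(4096)
        if not data:
            break  # server closed the connection
        buf += data
        while b'\r\n' in buf:
            line, buf = buf.split(b'\r\n', 1)
            if line.startswith(b'PING'):
                sock.send(line.replace(b'PING', b'PONG', 1) + b'\r\n')

def join_in_batches(channels):
    for i in range(0, len(channels), CHANNELS_PER_CONN):
        sock = socket.create_connection(('irc.chat.twitch.tv', 6667))
        sock.send(b'PASS oauth:\r\nNICK justinfan12345\r\n')
        threading.Thread(target=reader, args=(sock,), daemon=True).start()
        for chan in channels[i:i + CHANNELS_PER_CONN]:
            sock.send(('JOIN #' + chan + '\r\n').encode())
            time.sleep(JOIN_DELAY)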

Related

Trying to use Try/Except without an actual loop. Code is stuck on the except portion

Writing a Python script that sets up pickup games. srv.connect() will time out if the IP/RCON details are put in wrong and/or the server is down altogether. I do not want the Discord bot to crash just because a server is down, so I am trying to use a try/except to keep the bot going. A person can start a pickup by typing !pickup size 4 servername ... and if srv.connect() can not get a handshake with the server, it'll time out and send a message in Discord saying it can not find the server. Then they could do !pickup size 4 servername2 and it'll work. The problem right now is that after doing !pickup size 4 servername it seems stuck on saying the RCON/IP is down, even though servername2 should be running just fine. Any help?
if message.content.startswith('!pickup size'):
    if pickupActive == 0:
        x = message.content.split()
        if len(x) >= 4:
            pServer = x[3]
            if pServer in servers:
                srv = Console(host=servers[pServer][0], port=servers[pServer][1], password=servers[pServer][2])
                try:
                    srv.connect()
                    servercfg = srv.execute("servercfgfile")
                    #print(servers[pServer][3])
                    if servers[pServer][3] in servercfg:  # change "server.cfg" to whatever your server configuration filename is for pickup play; keep the quotations
                        totalPlayers = [" "] * int(x[2])
                        initialPlayerCount = len(totalPlayers)
                        pickupLeader = message.author.name
                        totalPlayers.insert(playerCount, pickupLeader)
                        totalPlayers.remove(" ")
                        PopulateTable()
                        await message.channel.send("```Pickup Game Starting! " + pickupLeader + " is the leader for the Pickup Game! Come join in! type '!pickup add' to join```")
                        await message.channel.send("```" + msg + "```")
                        pickupActive = 1
                    else:
                        await message.channel.send("`" + servers[pServer][4] + "`")
                except:
                    await message.channel.send("Can not connect to the server for rcon..")
                    srv.disconnect()
            else:
                await message.channel.send("`Please specify the size of the pickup and a valid server name..`")
        else:
            await message.channel.send("`Proper formatting not used, please say !pickup size # server (Example: !pickup size 3 nwo)`")
    else:
        await message.channel.send("`Already a pickup game in progress, please add up to the current pickup game..`")
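One thing worth isolating here (a sketch under the assumption that Console keeps internal state between attempts, not a confirmed diagnosis): build a fresh Console per attempt and guard the disconnect, so a failed handshake with one server cannot leave state behind that poisons the next !pickup attempt.

# Hypothetical helper: one clean connection attempt per !pickup command.
def try_rcon(host, port, password):
    srv = Console(host=host, port=port, password=password)
    try:
        srv.connect()
        return srv, srv.execute("servercfgfile")
    except Exception as e:
        print("rcon connect failed:", e)
        try:
            srv.disconnect()  # may itself fail if we never connected
        except Exception:
            pass
        return None, None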

How to run 2 different loops in 2 different threads?

I'm building a telemetry application using Azure IoT Hub, the Azure IoT SDK in Python, and a Raspberry Pi with temperature and humidity sensors.
Humidity + Temperature sensors => Raspberry Pi => Azure IoT Hub
In my first implementation, based on the Azure examples, I used one loop that collects data from the temperature and humidity sensors and sends both to Azure IoT Hub at the same time, every 60 seconds.
>>> 1 Loop every 60s = Collect data & send data of temperature and humidity
Now I would like to send them with different frequencies, I mean :
One loop will collect the data of the temperature sensor and send it to Azure IoT Hub every 60 seconds;
Whereas a second loop will collect the data of the humidity sensor and send it to Azure IoT Hub every 600 seconds.
>>> 1 Loop every 60s= Collect data & send data of temperature
>>> 2 Loop every 600s= Collect data & send data of humidity
I think the tool I need is multi-threading, but I don't understand which library or structure to use in my case.
Here is the code provided by Azure, with one loop that handles temperature and humidity at the same time, reading the data and sending it to Azure every 60 seconds.
import random
import time
import sys

# Using the Python Device SDK for IoT Hub:
from iothub_client import (IoTHubClient, IoTHubClientError,
                           IoTHubTransportProvider, IoTHubClientResult)
from iothub_client import (IoTHubMessage, IoTHubMessageDispositionResult,
                           IoTHubError, DeviceMethodReturnValue)

# The device connection string to authenticate the device with your IoT hub.
CONNECTION_STRING = "{Your IoT hub device connection string}"

# Using the MQTT protocol.
PROTOCOL = IoTHubTransportProvider.MQTT
MESSAGE_TIMEOUT = 10000

# Define the JSON message to send to IoT Hub.
TEMPERATURE = 20.0
HUMIDITY = 60
MSG_TXT = "{\"temperature\": %.2f,\"humidity\": %.2f}"

def send_confirmation_callback(message, result, user_context):
    print("IoT Hub responded to message with status: %s" % (result))

def iothub_client_init():
    # Create an IoT Hub client
    client = IoTHubClient(CONNECTION_STRING, PROTOCOL)
    return client

def iothub_client_telemetry_sample_run():
    try:
        client = iothub_client_init()
        print("IoT Hub device sending periodic messages, press Ctrl-C to exit")
        # ****************** LOOP *******************************
        while True:
            # Build the message with simulated telemetry values.
            temperature = TEMPERATURE + (random.random() * 15)
            humidity = HUMIDITY + (random.random() * 20)
            msg_txt_formatted = MSG_TXT % (temperature, humidity)
            message = IoTHubMessage(msg_txt_formatted)
            # Send the message.
            print("Sending message: %s" % message.get_string())
            client.send_event_async(message, send_confirmation_callback, None)
            time.sleep(60)
    except IoTHubError as iothub_error:
        print("Unexpected error %s from IoTHub" % iothub_error)
        return
    except KeyboardInterrupt:
        print("IoTHubClient sample stopped")

if __name__ == '__main__':
    print("IoT Hub Quickstart #1 - Simulated device")
    print("Press Ctrl-C to exit")
    iothub_client_telemetry_sample_run()
I would like to keep the same structure of functions, but with two loops that handle temperature and humidity: one every 60 s and one every 600 s.
while True:
    # Build the message with simulated telemetry values.
    temperature = TEMPERATURE + (random.random() * 15)
    msg_txt_formatted1 = MSG_TXT1 % (temperature)
    message1 = IoTHubMessage(msg_txt_formatted1)
    # Send the message.
    print("Sending message: %s" % message1.get_string())
    client.send_event_async(message1, send_confirmation_callback, None)
    time.sleep(60)

while True:
    # Build the message with simulated telemetry values.
    humidity = HUMIDITY + (random.random() * 20)
    msg_txt_formatted2 = MSG_TXT2 % (humidity)
    message2 = IoTHubMessage(msg_txt_formatted2)
    # Send the message.
    print("Sending message: %s" % message2.get_string())
    client.send_event_async(message2, send_confirmation_callback, None)
    time.sleep(600)
How can I do that? How do I call those loops with multi-threading, or is there another method?
It may be simpler to do something like:
while True:
    loop_b()
    for _ in range(10):
        loop_a()
        time.sleep(60)
or even (note the int() cast: a raw time.time() float is almost never exactly divisible):
while True:
    time.sleep(1)
    now = int(time.time())
    if now % 60 == 0:
        loop_a()
    if now % 600 == 0:
        loop_b()
But if you really want to use threads, then:
import threading

class LoopAThread(threading.Thread):
    def run(self):
        loop_a()

class LoopBThread(threading.Thread):
    def run(self):
        loop_b()

...

thread_a = LoopAThread()
thread_b = LoopBThread()
thread_a.start()
thread_b.start()
thread_a.join()
thread_b.join()
Here are two competing approaches to consider
Don't bother with threads at all. Just have one loop that sleeps every 60 seconds, like you have now. Keep track of the last time you sent humidity data; if 600 seconds have passed, send it, otherwise skip it and go back to sleep for 60 seconds. Something like this:
from datetime import datetime, timedelta
import time

def iothub_client_telemetry_sample_run():
    last_humidity_run = None
    humidity_period = timedelta(seconds=600)
    client = iothub_client_init()
    while True:
        now = datetime.now()
        send_temperature_data(client)
        if not last_humidity_run or now - last_humidity_run >= humidity_period:
            send_humidity_data(client)
            last_humidity_run = now
        time.sleep(60)
Rename iothub_client_telemetry_sample_run to temperature_thread_func or something like it. Create a separate function that looks just like it for humidity. Spawn two threads from the main function of your program, and set them to daemon mode so they shut down when the user exits.
from threading import Thread

def temperature_thread_func():
    client = iothub_client_init()
    while True:
        send_temperature_data(client)
        time.sleep(60)

def humidity_thread_func():
    client = iothub_client_init()
    while True:
        send_humidity_data(client)
        time.sleep(600)

if __name__ == '__main__':
    temp_thread = Thread(target=temperature_thread_func)
    temp_thread.daemon = True
    temp_thread.start()
    humidity_thread = Thread(target=humidity_thread_func)
    humidity_thread.daemon = True
    humidity_thread.start()
    input('Polling for data. Press a key to exit')
Notes:
If you decide to use threads, consider using an event to terminate them cleanly.
time.sleep is not a precise way to keep time. You might need a different timing mechanism if the samples need to be taken at precise moments.
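A minimal sketch of the event-based shutdown the first note refers to (assuming the same hypothetical send_temperature_data helper as above): Event.wait doubles as the sleep, so the thread wakes immediately when asked to stop.

import threading

stop = threading.Event()

def temperature_thread_func():
    client = iothub_client_init()
    while not stop.is_set():
        send_temperature_data(client)
        stop.wait(60)  # sleeps up to 60 s, but returns early if stop is set

temp_thread = threading.Thread(target=temperature_thread_func)
temp_thread.start()
try:
    input('Polling for data. Press Enter to exit\n')
finally:
    stop.set()          # ask the worker to finish its current iteration
    temp_thread.join()  # then wait for it to exit cleanly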

Implementing a socket in a thread

I have to read some data from a database and send it over a TCP socket, so I fetch the data from the database:
# main
while True:
    cursor.execute("UPDATE `raw` SET `zombie`='"+zombieId+"' WHERE result='pending' AND protocol='tcp' AND zombie='0' LIMIT 1;")
    # time.sleep(0.2)
    cursor.execute("select * from raw WHERE `result`='pending' AND `protocol`='tcp' and `zombie`='"+zombieId+"' limit 1;")
    if cursor.rowcount > 0:
        waitedSig = cursor.fetchone()
        time.sleep(0.2)
        t = threading.Thread(target=sendData, args=(waitedSig,))
        t.start()
        time.sleep(0.6)
    else:
        time.sleep(1)
In the thread, I send the data to the target:
def sendData(sig):
    timedata = datetime.datetime.fromtimestamp(int(sig[16]))
    devimei = sig[23]
    devdate = timedata.strftime("%d%m%y")
    devtime = timedata.strftime("%H%M%S")
    lat = format(sig[2])
    lon = format(sig[3])
    satcount = format(sig[5])
    speed = format(sig[4])
    batery = format(sig[7])
    if sig[9] > 1000:
        band = '00'
    elif sig[9] > 850:
        band = '02'
    else:
        band = '01'
    hdop = format(sig[10])
    gsmQ = format(sig[6])
    lac = format(sig[12])
    cid = format(sig[13])
    # note: renamed from `str`, which shadowed the builtin str() used below
    payload = '$MGV002,'+devimei+',12345,S,'+devdate+','+devtime+',A,'+lat+',N,'+lon+',E,0,'+satcount+',00,'+hdop+','+speed+',0,,,432,11,'+lac+','
    try:
        clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        clientsocket.connect(('ip', port))
        clientsocket.send(payload)
        data = clientsocket.recv(1024)
        print(str(datetime.datetime.now())+' -> send completed: '+format(sig[0]))
        clientsocket.close()
    except:
        print(str(datetime.datetime.now())+' -> connection to tcp server failed!!')
This works really well, but there are two annoying problems:
1) If I remove the 0.2 and 0.6 second sleep delays, the script crashes due to duplicate socket usage; it seems the system tries to open another socket before the previous one has finished its job.
2) If something goes wrong in the sendData function, the whole script stops working until I manually restart it.
So:
1) Can I create a thread queue that runs one thread after another, so they don't affect each other?
2) How can I handle errors in the thread function so that only that specific thread closes and the script continues with the next database record?
This looks like a good application for a thread pool. In your implementation you create one thread and one socket per item in your database table, which could tax the system severely. Here I've created 20 workers as an example; there are diminishing returns on the number of workers as you start to stress the system.
import multiprocessing.pool

def sender():
    pool = multiprocessing.pool.ThreadPool(20)  # pick your size...
    cursor.execute("select * from database")
    pool.map(sendData, cursor, chunksize=1)

def sendData(sig):
    try:
        clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        clientsocket.connect(('ip', port))
        clientsocket.sendall(sig)
        data = clientsocket.recv(1024)
        print(str(datetime.datetime.now())+' -> send completed: '+format(sig[0]))
        clientsocket.shutdown(socket.SHUT_RDWR)  # was socket.SOCK_RDWR, which doesn't exist
        clientsocket.close()
    except:
        print(str(datetime.datetime.now())+' -> connection to tcp server failed!!')
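The pool also answers the second question: because each sendData call wraps its own try/except, a failure in one task never kills the pool or the main loop. The standard-library concurrent.futures equivalent makes that isolation explicit (a sketch; rows stands in for whatever the cursor fetched):

from concurrent.futures import ThreadPoolExecutor

# An exception raised in one task is captured in its Future; the other
# workers and the main loop keep running.
with ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(sendData, row) for row in rows]
    for f in futures:
        if f.exception() is not None:
            print('task failed:', f.exception())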

Redis: # of channels degrading latency. How to prevent degradation?

pub.py
import redis
import datetime
import time
import json
import sys
import threading
import gevent
from gevent import monkey
monkey.patch_all()

def main(chan):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    while True:
        def getpkg():
            package = {'time': time.time(),
                       'signature': 'content'}
            return package
        # test 2: complex data
        now = json.dumps(getpkg())
        # send it
        r.publish(chan, now)
        print('Sending {0}'.format(now))
        print('data type is %s' % type(now))
        time.sleep(1)

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=main, args=(x,))
        t.setDaemon(True)
        t.start()

if __name__ == '__main__':
    num_of_chan = 10
    zerg_rush(num_of_chan)
    cnt = 0
    stop_cnt = 21
    while True:
        print('Waiting')
        cnt += 1
        if cnt == stop_cnt:
            sys.exit(0)
        time.sleep(30)
sub.py
import redis
import threading
import time
import json
import gevent
from gevent import monkey
monkey.patch_all()

def callback(ind):
    redis_host = '10.235.13.29'
    r = redis.client.StrictRedis(host=redis_host, port=6379)
    sub = r.pubsub()
    sub.subscribe(str(ind))
    start = False
    avg = 0
    tot = 0
    sum = 0
    while True:
        for m in sub.listen():
            if not start:
                start = True
                continue
            got_time = time.time()
            decoded = json.loads(m['data'])
            sent_time = float(decoded['time'])
            dur = got_time - sent_time
            tot += 1
            sum += dur
            avg = sum / tot
            print(decoded)  # 'Received: {0}'.format(m['data'])
            file_name = 'logs/sub_%s' % ind
            f = open(file_name, 'a')
            f.write('processing no. %s' % tot)
            f.write('it took %s' % dur)
            f.write('current avg: %s\n' % avg)
            f.close()

def zerg_rush(n):
    for x in range(n):
        t = threading.Thread(target=callback, args=(x,))
        t.setDaemon(True)
        t.start()

def main():
    num_of_chan = 10
    zerg_rush(num_of_chan)
    while True:
        print('Waiting')
        time.sleep(30)

if __name__ == '__main__':
    main()
I am testing Redis pub/sub to replace the use of rsh to communicate with remote boxes.
One of the things I have tested is how the number of channels affects the latency of publish and pubsub.listen().
Test: one publisher and one subscriber per channel (the publisher publishes every second). I incremented the number of channels and observed the latency (the duration from the moment the publisher publishes a message to the moment the subscriber gets the message via listen).
num of chan    avg latency in seconds
10             0.004453
50             0.005246
100            0.0155
200            0.0221
300            0.0621
Note: tested on a 2 CPU + 4 GB RAM + 1 NIC RHEL 6.4 VM.
What can I do to maintain low latency with a high number of channels?
Redis is single-threaded, so adding more CPUs won't help. Maybe more RAM? If so, how much more?
Is there anything I can do code-wise, or is the bottleneck in Redis itself?
Maybe the limitation comes from the way my test code is written with threading?
EDIT:
Redis Cluster vs ZeroMQ in Pub/Sub, for horizontally scaled distributed systems
The accepted answer there says: "You want to minimize latency, I guess. The number of channels is irrelevant. The key factors are the number of publishers and number of subscribers, message size, number of messages per second per publisher, number of messages received by each subscriber, roughly. ZeroMQ can do several million small messages per second from one node to another; your bottleneck will be the network long before it's the software. Most high-volume pubsub architectures therefore use something like PGM multicast, which ZeroMQ supports."
From my testing, I don't know if this is true (the claim that the number of channels is irrelevant).
For example, I did a test:
1) One channel. 100 publishers publishing to a channel with 1 subscriber listening, each publisher publishing one message per second. Latency was 0.00965 seconds.
2) The same test, except with 1000 publishers. Latency was 0.00808 seconds.
Now compare my channel testing:
300 channels with 1 pub - 1 sub resulted in 0.0621, and that is only 600 connections, which is fewer than the test above, yet the latency is significantly worse.
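One variable worth ruling out before blaming Redis itself (an assumption about the test harness, not a confirmed cause): sub.py spawns one thread and one Redis connection per channel, so at 300 channels the measured latency includes Python thread scheduling under the GIL. redis-py lets a single pubsub object subscribe to many channels over one connection, which removes that overhead from the measurement:

import json
import time
import redis

r = redis.StrictRedis(host='10.235.13.29', port=6379)
sub = r.pubsub()
# One connection and one thread for all 300 channels.
sub.subscribe(*[str(i) for i in range(300)])

for m in sub.listen():
    if m['type'] != 'message':
        continue  # skip the subscribe confirmations
    decoded = json.loads(m['data'])
    print(time.time() - float(decoded['time']))  # per-message latency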

Waking up a consumer every 3 sec and taking 6 chunks of data from the producer in RabbitMQ

I have written the following working producer/consumer code for RabbitMQ in Python.
But I have a twist in it. The producer is continuously putting data in the queue every 0.5 sec, but now I want my consumer to wake up every 3 sec, take all 6 messages the publisher has put in the queue, and sleep again for 3 sec, in an infinite loop.
But I am not sure how I would achieve this in RabbitMQ.
producer
import pika
import time
import datetime

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
value = str(int(time.time()))
for i in range(1000):
    channel.basic_publish(exchange='', routing_key='hello', body='{"action": "print", "method": "onData", "data": "Madan Mohan"}')
    time.sleep(0.5)
connection.close()
consumer
#!/usr/bin/env python
import pika
import time
import json
import datetime

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    # print " current time: %s " % (str(int((time.time())*1000)))
    d = json.loads(body)
    print(d)

channel.basic_consume(callback, queue='hello', no_ack=True)
channel.start_consuming()
The first solution is to use sleep in the callback. It is probably not a good solution, though, as basic_consume is intended to get messages as fast as possible (asynchronously).
got = 0

def callback(ch, method, properties, body):
    global got
    # print " current time: %s " % (str(int((time.time())*1000)))
    d = json.loads(body)
    print(d)
    got = got + 1
    if got == 6:
        got = 0
        time.sleep(3)
Use channel.basic_get. It is a more appropriate way to fetch messages synchronously.
got = 0
while True:
    channel.basic_get(callback, queue='hello', no_ack=True)
    got = got + 1
    if got == 6:
        got = 0
        time.sleep(3)
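A slightly more defensive variant of that loop (a sketch, assuming pika's BlockingChannel, where basic_get returns a (method, properties, body) tuple that is (None, None, None) when the queue is empty): count only messages actually received, drain up to 6, then sleep.

import time

while True:
    for _ in range(6):
        method, properties, body = channel.basic_get(queue='hello', no_ack=True)
        if method is None:
            break  # queue is empty; stop draining this batch early
        callback(channel, method, properties, body)
    time.sleep(3)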
