I'm trying to integrate my Telegram bot with my webcam (D-Link DCS-942LB).
Using the NIPCA standard (Network IP Camera Application Programming Interface) I have managed to solve almost everything.
I'm now working on a polling mechanism.
The basic flow should be:
telegram bot keeps polling the camera using http://CAMERA_IP:CAMERA_PORT/config/notify_stream.cgi
when an event happens, telegram bot sends a notification to users
The problem is: the notify_stream.cgi page keeps updating every second, adding new events.
I am not able to poll notify_stream.cgi because the request hangs and never gets a response.
This can be reproduced with a simple script:
import requests
myurl = "http://CAMERA_IP:CAMERA_PORT/config/notify_stream.cgi"
response = requests.get(myurl, auth=("USERNAME", "PASSWORD"))
This results in the request hanging until I stop it manually.
Is it possible to keep listening to notify_stream.cgi and pass new lines to a function?
Thanks to the comments received, using a session with stream=True works fine.
Here is the code:
import requests

def getwebcameventstream(webcam_url, webcam_username, webcam_password):
    requestsession = requests.Session()
    eventhandler = ["first_event", "second_event", "third_event"]
    # stream=True keeps the connection open and lets us iterate over lines
    # as the camera pushes them, instead of waiting for the (never-ending)
    # response body to complete.
    with requestsession.get(webcam_url, auth=(webcam_username, webcam_password), stream=True) as webcam_response:
        for event in webcam_response.iter_lines():
            event = event.decode()  # iter_lines() yields bytes
            if event in eventhandler:
                handlewebcamalarm(event)

def handlewebcamalarm(event):
    print("New event received: " + str(event))
url = 'http://CAMERA_IP:CAMERA_PORT/config/notify_stream.cgi'
username = "myusername"
password = "mypassword"
getwebcameventstream(url, username, password)
As the title states, I'm writing a Slack bot in Python and using ngrok to host it locally. I'm not super experienced with decorators, and while I can get the bot posting messages in Slack, I can't seem to handle two events at once. For example, I want to handle a message and have it keep repeating in Slack until a thumbs-up reaction is added to it. The issue is that I cannot figure out how to handle one event while another is still running; please see the following code:
from slack import WebClient
import os
import time
from pathlib import Path
from dotenv import load_dotenv
from flask import Flask
from slackeventsapi import SlackEventAdapter

env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)

app = Flask(__name__)
slack_event_adapter = SlackEventAdapter(
    os.environ['SIGNING_SECRET'], '/slack/events', app)
client = WebClient(token=os.environ['SLACK_TOKEN'])
BOT_ID = client.api_call("auth.test")['user_id']
state = {}

@slack_event_adapter.on('message')
def handle_message(event_data):
    message = event_data.get('event', {})
    channel_id = message.get('channel')
    user_id = message.get('user')
    text = message.get('text')
    messageid = message.get('ts')
    state[messageid] = {"channel_id": channel_id, "user_id": user_id, "text": text}
    if BOT_ID != user_id:
        if text[0:12] == ":red_circle:":
            time.sleep(5)
            client.chat_postMessage(channel=channel_id, text=text)
        if text[0:21] == ":large_yellow_circle:":
            client.chat_postMessage(channel=channel_id, text="it's a yellow question!")
        if text[0:14] == ":white_circle:":
            client.chat_postMessage(channel=channel_id, text="it's a white question!")

@slack_event_adapter.on('reaction_added')
def reaction_added(event_data):
    reaction = event_data.get('event', {})
    emoji = reaction.get('reaction')
    emoji_id = reaction.get('item', {}).get('ts')
    emoji_channel_id = reaction.get('item', {}).get('channel')
    client.chat_postMessage(channel=emoji_channel_id, text=emoji)
    for message_id, message_data in state.items():
        channel_id = message_data["channel_id"]
        text = message_data["text"]
        client.chat_postMessage(channel=channel_id, text=text)
        print(message_id, message_data)

if __name__ == "__main__":
    app.run(debug=True)
I can handle individual events, but I cannot handle one while another is still running. Please help! :)
Flask is a synchronous web framework.
When it's running a view handler, it occupies a web worker thread. If you do something like time.sleep(...), that worker thread stays occupied and unavailable to handle other requests until the sleep finishes.
There are a couple of options here.
You can use Bolt for Python, a Slack library that natively supports asynchronous event processing. Instead of time.sleep(), you can do await asyncio.sleep(...), which returns control to the async loop and allows the worker to process other events.
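For example, a minimal Socket Mode sketch of that approach with Bolt's AsyncApp (a sketch only; the env var names and the event logic are illustrative, and it needs slack_bolt plus aiohttp installed):

import asyncio
import os

from slack_bolt.async_app import AsyncApp
from slack_bolt.adapter.socket_mode.async_handler import AsyncSocketModeHandler

app = AsyncApp(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("message")
async def handle_message(event, say):
    if event.get("text", "").startswith(":red_circle:"):
        await asyncio.sleep(5)  # yields to the event loop instead of blocking a worker thread
        await say(event["text"])

async def main():
    await AsyncSocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start_async()

if __name__ == "__main__":
    asyncio.run(main())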
If you already have an existing Slack application and don't want to rewrite your entire codebase for Bolt, then you'll need to handle the event processing yourself. You can do this by doing the work in a ThreadPoolExecutor, by building your own async event queue mechanism, or by using Celery. Or, if your Slack bot has very low volume, you can probably just add more web workers and hope for the best that you don't run out of them.
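If you stay on Flask, here is a rough sketch of the executor route (reusing client, BOT_ID and slack_event_adapter from the question's code; the repeat logic is illustrative):

from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=4)

def repeat_message(channel_id, text):
    # Runs in a background thread, so sleeping here no longer ties up the Flask worker.
    time.sleep(5)
    client.chat_postMessage(channel=channel_id, text=text)

@slack_event_adapter.on('message')
def handle_message(event_data):
    message = event_data.get('event', {})
    if message.get('user') != BOT_ID and message.get('text', '').startswith(':red_circle:'):
        executor.submit(repeat_message, message.get('channel'), message.get('text'))
    # The handler returns immediately, so Slack's retry timeout is not hit.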
Summary
I am using Pyppeteer to open a headless browser and load HTML & CSS to create a PDF; the HTML is accessed via an HTTP request, as it is laid out in a server-side React client.
The PDF download is triggered by a button press in a front-end React site.
The issue
The majority of the time the PDF prints perfectly; however, occasionally it prints blank, and once that happens it seems more likely to happen again several times in a row.
I initially thought this came from consecutive download requests arriving too close together, but that doesn't seem to be the only cause.
I am seeing 2 errors:
RuntimeError: You cannot use AsyncToSync in the same thread as an async event loop - just await the async function directly.
and
pyppeteer.errors.TimeoutError: Timeout exceeded while waiting for event
Additionally, when this issue happens, I get the following message from the dumpio log:
"Uncaught (in promise) SyntaxError: Unexpected token < in JSON at position 0"
The code
When the webpage is first loaded, I've added a "launch" function to pre-download the browser, so it is preinstalled and the wait for the first PDF download is reduced:
from flask import Flask
import asyncio
from pyppeteer import launch
import os

download_in_progress = False
app = Flask(__name__)

async def preload_browser():  # pre-downloading the Chromium client to save time on the first PDF generation
    print("downloading browser")
    await launch(
        headless=True,
        handleSIGINT=False,
        handleSIGTERM=False,
        handleSIGHUP=False,
        autoClose=False,
        args=['--no-sandbox', '--single-process', '--font-render-hinting=none']
    )
The PDF creation is then triggered by an app route via an async def:
@app.route(
    "/my app route with :variables",
    methods=["POST"]
)
async def go_to_pdf_download(self, id, name, version_number, data):
    global download_in_progress
    while download_in_progress:
        print("download in progress, please wait")
        await asyncio.sleep(1)
    else:
        download_in_progress = True
        download_pdf = await pdf(self, id, name, version_number, data)
        return download_pdf  # return the awaited result, not the pdf coroutine function itself
I found that the headless browser was failing with multiple simultaneous function calls, so I tried adding a while loop. This worked in my local (Docker) container; however, it didn't work consistently in my test environment, so I will likely remove it (instead I am disabling the download button in the React app, once clicked, until the PDF is returned). A sketch of a lock-based alternative follows.
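For reference, a sketch of the kind of serialization I was aiming for, using an asyncio.Lock instead of the global flag (this assumes the route can run as an async view, e.g. Flask 2.0+; the route and arguments mirror my code above):

import asyncio

pdf_lock = asyncio.Lock()  # allow only one PDF generation at a time

@app.route("/my app route with :variables", methods=["POST"])
async def go_to_pdf_download(self, id, name, version_number, data):
    # Concurrent requests queue up on the lock instead of busy-waiting on a flag.
    async with pdf_lock:
        return await pdf(self, id, name, version_number, data)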
The code itself:
async def pdf(self, id, name, version_number, data):
    global download_in_progress
    url = "my website URL"
    try:
        print("opening browser")
        # can't use initBrowser here for some reason, so calling launch again
        # to access it without having to redownload Chromium
        browser = await launch(
            headless=True,
            handleSIGINT=False,
            handleSIGTERM=False,
            handleSIGHUP=False,
            autoClose=False,
            args=['--no-sandbox', '--single-process', '--font-render-hinting=none'],
            dumpio=True  # used to surface console.log statements in the terminal for debugging
        )
        page = await browser.newPage()
        if os.getenv("running environment") == "local":
            pass
        elif self._oidc_identity and self._oidc_data:
            await page.setExtraHTTPHeaders({
                "x-amzn-oidc-identity": self._oidc_identity,
                "x-amzn-oidc-data": self._oidc_data
            })
        await page.goto(url, {'waitUntil': ['domcontentloaded']})  # waitUntil doesn't seem to do anything
        # previously used a 5-second sleep timer, but that was inconsistent,
        # so now waiting for an HTTP 200 response before printing;
        # otherwise the download completed before the PDF was generated
        await page.waitForResponse(lambda res: res.status == 200)
        pdf = await page.pdf({
            'printBackground': True,
            'format': 'A4',
            'scale': 1,
            'preferCSSPageSize': True
        })
        download_in_progress = False
        return pdf
    except Exception as e:
        download_in_progress = False
        raise e
(I've amended some of the code to hide variable names, etc.)
I have thought I'd solved this issue multiple times, yet I always seem to be missing something. Any suggestions (or code improvements!) would be greatly appreciated.
Things I've tried
Off the top of my head, the main things I have tried to solve this are:
Adding wait-until loops to create a queue system for generation and stop simultaneous calls; if statements to block multiple requests; adding packages; various different "wait for X" calls; and manual sleep timers instead of trying to detect when the process is complete (up to 20 seconds, and it still failed occasionally). I also tried Selenium, but that encountered a lot of issues of its own.
I am trying to use the Slack Bolt API for Python to listen for DMs to the Slack bot that contain specific text. Here is the file that starts the Slack Bolt listener:
import os
from server import *
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

# Initializes your app with your bot token and socket mode handler
app = App(token=MY_TOKEN)

# Listens to incoming messages that contain "list"
@app.message("list")
def message_hello(message, say):
    # say() sends a message to the channel where the event was triggered
    res = requests.get(url + '/api/users/list')
    say("The list of users is: ", res.json())

# Start your app
if __name__ == "__main__":
    SocketModeHandler(app, "TOKEN").start()
When I send messages to my bot I get "127.0.0.1 - - [20/Mar/2022 00:23:47] "POST /api HTTP/1.1" 200 -", but the listener is not executing the code it contains. I cannot get it to say hello back inside Slack in any way.
Thanks
Instead of setting the app to listen for every word posted, I would suggest using the "app_mention" event, which triggers only when the message sent begins with @your_bot_name followed by your message. This way you will avoid getting random responses from your bot when sending messages that happen to contain specific keywords.
@app.event("app_mention")
def test(ack, event, logger):
    ack()
    name = event["user"]  # the id of the user who triggered the event
    channel = event["channel"]  # the channel in which the event was triggered
    text = event["text"].lower()  # the lowercase text of the sent message
    ts = event["ts"]  # the timestamp of the message (used for replying in threads)
    if any(x in text for x in ("users list", "list of users")):  # recommended if you need specific combinations of keywords
        # if text == "list":
        try:
            app.client.chat_postMessage(channel=channel, thread_ts=ts, text=f"*Hi <@{name}>, here is a random response*")
        except Exception as e:
            logger.error(e)
In the end you could trigger a response from your bot app by posting a message like so:
@your_bot_name show me the users list
or
@your_bot_name show me the list of users
I am attempting to optimize a simple web scraper that I made. It gets a list of urls from a table on a main page and then goes to each of those "sub" urls and gets information from those pages. I was able to write it successfully both synchronously and with concurrent.futures.ThreadPoolExecutor(). However, I am now trying to optimize it to use asyncio and httpx, as these seem very fast for making hundreds of HTTP requests.
I wrote the following script using asyncio and httpx; however, I keep getting the following errors:
httpcore.RemoteProtocolError: Server disconnected without sending a response.
RuntimeError: The connection pool was closed while 4 HTTP requests/responses were still in-flight.
It appears that I keep losing the connection when I run the script. I even attempted a synchronous version of it and got the same error. I thought the remote server might be blocking my requests; however, I am able to run my original program and visit each of the urls from the same IP address without issue.
What would cause this exception and how do you fix it?
import httpx
import asyncio

async def get_response(client, url):
    resp = await client.get(url, headers=random_user_agent())  # gets a random user agent
    html = resp.text
    return html

async def main():
    async with httpx.AsyncClient() as client:
        tasks = []
        # Get list of urls to parse.
        urls = get_events('https://main-url-to-parse.com')
        # Get the responses for the detail page for each event
        for url in urls:
            tasks.append(asyncio.ensure_future(get_response(client, url)))
        detail_responses = await asyncio.gather(*tasks)
        for resp in detail_responses:
            event = get_details(resp)  # Parse url and get desired info

asyncio.run(main())
I've had the same issue. The problem occurs when there is an exception in one of the asyncio.gather tasks: when it's raised, it causes the httpx client to call __aexit__ and cancel all the current requests. You can bypass this by passing return_exceptions=True to asyncio.gather:
async def main():
    async with httpx.AsyncClient() as client:
        tasks = []
        # Get list of urls to parse.
        urls = get_events('https://main-url-to-parse.com')
        # Get the responses for the detail page for each event
        for url in urls:
            tasks.append(asyncio.ensure_future(get_response(client, url)))
        detail_responses = await asyncio.gather(*tasks, return_exceptions=True)
        for resp in detail_responses:
            # here you would need to do something with the exceptions,
            # e.g. if isinstance(resp, Exception): ...
            event = get_details(resp)  # Parse url and get desired info
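For instance, a minimal sketch of what that exception handling could look like (asyncio.gather preserves input order, so each result can be paired with its url; the retry/skip policy is up to you):

for url, resp in zip(urls, detail_responses):
    if isinstance(resp, Exception):
        # A failed request comes back as the exception object itself;
        # log it (or retry the url) instead of parsing it.
        print(f"request to {url} failed: {resp!r}")
        continue
    event = get_details(resp)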
Below is a simple app to send messages to the browser. If there is a new message from the Redis channel it will be sent; otherwise the last known value is sent, in a non-blocking way.
But I am doing something wrong. Can someone please help me understand what it is?
from gevent import monkey, Greenlet
monkey.patch_all()
import gevent
import redis
from flask import Flask, render_template, request, redirect, url_for, abort, session, Response, jsonify

app = Flask(__name__)
myglobaldict = {'somedata': ''}

class RedisLiveData:
    def __init__(self, channel_name):
        self.channel_name = channel_name
        self.redis_conn = redis.Redis(host='localhost', port=6379, db=0)
        pubsub = self.redis_conn.pubsub()
        gevent.spawn(self.sub, pubsub)

    def sub(self, pubsub):
        pubsub.subscribe(self.channel_name)
        for message in pubsub.listen():
            gevent.spawn(process_rcvd_mesg, message['data'])

def process_rcvd_mesg(mesg):
    print("Received new message %s" % mesg)
    myglobaldict['somedata'] = mesg

g = RedisLiveData("test_channel")

@app.route('/latestmessage')
def latestmessage():
    return Response(myglobaldict, mimetype="application/json")

if __name__ == '__main__':
    app.run()
On the JavaScript side I am just using a simple $.ajax GET to view the messages, but the client at http://localhost:5000/latestmessage shows the old message even after the Redis update.
It should be a cache issue.
You can add a timestamp or a random number to the request URL, e.g. http://localhost:5000/latestmessage?t=timestamp, sent from the ajax call.
I suggest you use POST instead of GET as the HTTP method: you eliminate the caching problem along with some annoying browser behaviour (in Chrome, for example, requests after the first will wait for the first to complete before being sent to the webserver).
If you want to keep the GET method, you can ask jQuery to make the request non-cacheable by the browser via the cache setting:
$.ajax(..., {cache: false})
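Alternatively, a server-side sketch (assuming the Flask app from the question): mark the endpoint as non-cacheable, so every GET returns the latest value regardless of client settings:

from flask import jsonify

@app.route('/latestmessage')
def latestmessage():
    response = jsonify(myglobaldict)
    # Tell browsers and proxies never to reuse a stored copy of this response.
    response.headers['Cache-Control'] = 'no-store, no-cache, must-revalidate, max-age=0'
    return response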