Below is my Python code for reading and responding to messages from a Slack channel. I wrote this script by following Slack's tutorials and ended up here with this problem. I am also unable to send a message to Slack using client.chat_postMessage(channel="XXXXXXXXXXX", text=msg, user="XXXXXXXXXXX").
I don't know why, but when I type the command "/hi" in the channel, the script reads the event and prints the data; if I try any keyword such as "check" or "knock knock", it doesn't respond at all.
import os
from os.path import join, dirname

from dotenv import load_dotenv  # needed for load_dotenv() below
# Use the package we installed
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

dotenv_path = join(dirname(__file__), '.env')
load_dotenv(dotenv_path)

# Initializes your app with your bot token and signing secret
app = App(
    token=os.environ['SLACK_BOT_TOKEN'],
    signing_secret=os.environ['SIGNING_SECRET']
)
# Add functionality here
@app.message("check")
def say_hello(message, client, body, logger):
    print(message)
    print(client)
    print(body)
    msg = "Hi there from Python"
    try:
        client.chat_postMessage(channel="XXXXXXXXXXX", text=msg, user="XXXXXXXXXXX")
    except Exception as e:
        logger.exception(f"Failed to post a message {e}")
        print(e)

@app.message("knock knock")
def ask_who(message, say):
    say("_Who's there?_")

@app.event("message")
def handle_message_events(body, logger):
    logger.info(body)
    print("messaging", body)

@app.command("/hi")
def handle_some_command(ack, body, logger):
    ack()
    logger.info(body)
    print(body)

# Start your app
if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
Here is my app's manifest from Slack:
_metadata:
  major_version: 1
  minor_version: 1
display_information:
  name: Hotline App
features:
  app_home:
    home_tab_enabled: true
    messages_tab_enabled: true
    messages_tab_read_only_enabled: false
  bot_user:
    display_name: Hotline Bot
    always_online: false
  slash_commands:
    - command: /hi
      description: greets user
      should_escape: false
oauth_config:
  scopes:
    user:
      - chat:write
      - channels:read
      - im:history
      - channels:history
      - groups:history
    bot:
      - incoming-webhook
      - calls:read
      - calls:write
      - app_mentions:read
      - channels:history
      - channels:join
      - channels:manage
      - channels:read
      - chat:write
      - chat:write.customize
      - chat:write.public
      - commands
      - dnd:read
      - emoji:read
      - files:read
      - files:write
      - groups:history
      - groups:read
      - groups:write
      - im:history
      - im:read
      - im:write
      - links:read
      - links:write
      - mpim:history
      - mpim:read
      - mpim:write
      - pins:read
      - pins:write
      - reactions:read
      - reactions:write
      - reminders:read
      - reminders:write
      - remote_files:read
      - remote_files:share
      - remote_files:write
      - team:read
      - usergroups:write
      - usergroups:read
      - users.profile:read
      - users:read
      - users:read.email
      - users:write
      - workflow.steps:execute
settings:
  event_subscriptions:
    user_events:
      - channel_archive
      - channel_created
      - channel_deleted
      - channel_rename
      - message.channels
      - message.groups
      - message.im
    bot_events:
      - app_mention
      - channel_archive
      - channel_created
      - channel_deleted
      - channel_history_changed
      - channel_id_changed
      - channel_left
      - channel_rename
      - channel_shared
      - channel_unarchive
      - channel_unshared
      - dnd_updated_user
      - email_domain_changed
      - emoji_changed
      - file_change
      - file_created
      - file_deleted
      - file_public
      - file_shared
      - file_unshared
      - group_archive
      - group_deleted
      - group_history_changed
      - group_left
      - group_rename
      - group_unarchive
      - im_history_changed
      - link_shared
      - member_joined_channel
      - member_left_channel
      - message.channels
      - message.groups
      - message.im
      - message.mpim
      - pin_added
      - pin_removed
      - reaction_added
      - reaction_removed
      - subteam_created
      - subteam_members_changed
      - subteam_updated
      - team_domain_change
      - team_join
      - team_rename
      - user_change
  interactivity:
    is_enabled: true
  org_deploy_enabled: false
  socket_mode_enabled: true
Any help with this problem from the experts would reduce my headache and workload. Thanks in advance!
Kind regards,
Gohar
The bot must be a member of the channel where the message is being sent — please make sure to invite the bot into that channel and it should begin receiving those message events.
Also, this is somewhat incidental to your question, but as a security precaution, please request only the scopes necessary for your bot to function; otherwise you create a token with far more permissions than it needs. You likely don't need any user scopes for this app. The same holds true for events: consider subscribing only to the events your app actually requires.
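For the handlers in the question, a much smaller manifest fragment would likely suffice. This is a sketch, not an exhaustive list; trim or extend it to match what the app actually does:

```yaml
oauth_config:
  scopes:
    bot:
      - chat:write        # chat_postMessage / say()
      - commands          # the /hi slash command
      - channels:history  # receive message events in public channels
      - channels:read
settings:
  event_subscriptions:
    bot_events:
      - message.channels  # needed for @app.message listeners
```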
I was building a documentation site for my Python project using mkdocstrings. To generate the code reference files I followed these instructions: https://mkdocstrings.github.io/recipes/
I get these errors:
INFO - Building documentation...
INFO - Cleaning site directory
INFO - The following pages exist in the docs directory, but are not included in the "nav" configuration:
  - reference\SUMMARY.md
  - reference\__init__.md
  ...
  - reference\tests\manual_tests.md
ERROR - mkdocstrings: No module named ' '
ERROR - Error reading page 'reference/__init__.md':
ERROR - Could not collect ' '
This is my file structure:
This is my docs folder:
I have the same gen_ref_pages.py file shown on that page:
from pathlib import Path

import mkdocs_gen_files

nav = mkdocs_gen_files.Nav()

for path in sorted(Path("src").rglob("*.py")):
    module_path = path.relative_to("src").with_suffix("")
    doc_path = path.relative_to("src").with_suffix(".md")
    full_doc_path = Path("reference", doc_path)
    parts = tuple(module_path.parts)
    if parts[-1] == "__init__":
        parts = parts[:-1]
    elif parts[-1] == "__main__":
        continue
    nav[parts] = doc_path.as_posix()
    with mkdocs_gen_files.open(full_doc_path, "w") as fd:
        ident = ".".join(parts)
        fd.write(f"::: {ident}")
    mkdocs_gen_files.set_edit_path(full_doc_path, path)

with mkdocs_gen_files.open("reference/SUMMARY.md", "w") as nav_file:
    nav_file.writelines(nav.build_literate_nav())
This is my mkdocs.yml:
site_name: CA Prediction Docs
theme:
  name: "material"
  palette:
    primary: deep purple
  logo: assets/logo.png
  favicon: assets/favicon.png
  features:
    - navigation.instant
    - navigation.tabs
    - navigation.expand
    - navigation.top
    # - navigation.sections
    - search.highlight
    - navigation.footer
  icon:
    repo: fontawesome/brands/git-alt
copyright: Copyright © 2022 - 2023 Ezequiel González
extra:
  social:
    - icon: fontawesome/brands/github
      link: https://github.com/ezegonmac
    - icon: fontawesome/brands/linkedin
      link: https://www.linkedin.com/in/ezequiel-gonzalez-macho-329583223/
repo_url: https://github.com/ezegonmac/TFG-CellularAutomata
repo_name: ezegonmac/TFG-CellularAutomata
plugins:
  - search
  - gen-files:
      scripts:
        - docs/gen_ref_pages.py
  - mkdocstrings
nav:
  - Introduction: index.md
  - Getting Started: getting-started.md
  - API Reference: reference.md
  # - Reference: reference/
  - Explanation: explanation.md
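Incidentally, a common cause of mkdocstrings' "No module named" / "Could not collect" errors with a src/ layout is that the Python handler cannot import the package at all. The mkdocstrings recipes document pointing the handler at the source directory; a sketch of that plugins section (adjust `src` to the actual package location):

```yaml
plugins:
  - search
  - gen-files:
      scripts:
        - docs/gen_ref_pages.py
  - mkdocstrings:
      handlers:
        python:
          paths: [src]  # let mkdocstrings import modules from src/
```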
I believe the crux of the issue is the version mismatch, but I'm not sure how to get around this.
This is my conda environment file:
channels:
- anaconda
- defaults
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- bzip2=1.0.8=h7f98852_4
- ca-certificates=2022.12.7=ha878542_0
- cffi=1.15.1=py311h409f033_3
- cryptography=39.0.1=py311h9b4c7bb_0
- dnspython=2.3.0=pyhd8ed1ab_0
- greenlet=2.0.2=py311hcafe171_0
- idna=3.3=pyhd3eb1b0_0
- ld_impl_linux-64=2.39=hc81fddc_0
- libblas=3.9.0=16_linux64_openblas
- libcblas=3.9.0=16_linux64_openblas
- libffi=3.4.2=h7f98852_5
- libgcc-ng=12.2.0=h65d4601_19
- libgfortran-ng=11.2.0=h00389a5_1
- libgfortran5=11.2.0=h1234567_1
- libgomp=12.2.0=h65d4601_19
- liblapack=3.9.0=16_linux64_openblas
- libnsl=2.0.0=h7f98852_0
- libopenblas=0.3.21=h043d6bf_0
- libprotobuf=3.21.12=h3eb15da_0
- libsqlite=3.40.0=h753d276_0
- libstdcxx-ng=12.2.0=h46fd767_19
- libuuid=2.32.1=h7f98852_1000
- libzlib=1.2.13=h166bdaf_4
- lz4-c=1.9.3=h295c915_1
- mysql-common=8.0.32=ha901b37_0
- mysql-connector-python=8.0.31=py311h0cf059c_2
- mysql-libs=8.0.32=hd7da12d_0
- ncurses=6.3=h27087fc_1
- numpy=1.24.2=py311h8e6699e_0
- openssl=3.0.8=h0b41bf4_0
- pandas=1.5.3=py311h2872171_0
- protobuf=4.21.12=py311hcafe171_0
- pycparser=2.21=pyhd3eb1b0_0
- python=3.11.0=ha86cf86_0_cpython
- python-dateutil=2.8.2=pyhd3eb1b0_0
- python_abi=3.11=3_cp311
- pytz=2021.3=pyhd3eb1b0_0
- readline=8.1.2=h0f457ee_0
- six=1.16.0=pyhd3eb1b0_1
- sqlalchemy=2.0.2=py311h2582759_0
- tk=8.6.12=h27826a3_0
- typing-extensions=4.4.0=hd8ed1ab_0
- typing_extensions=4.4.0=pyha770c72_0
- tzdata=2022f=h191b570_0
- xz=5.2.6=h166bdaf_0
- zlib=1.2.13=h166bdaf_4
- zstd=1.5.2=ha4553b6_0
- pip:
  - pip==22.3.1
  - setuptools==65.5.1
  - wheel==0.38.4
and this is my code:
import pandas as pd
import mysql.connector
database = 'db'
host = 'host'
port = '3306'
user = 'user'
password = 'pass'
con = mysql.connector.connect(user=user,
password=password,
host=host,
database=database,
ssl_disabled=True)
I get the following error:
mysql.connector.errors.OperationalError: 1043 (08S01): Bad handshake
I've tried with ssl_disabled set to True and to False.
I was specifying my ssl settings incorrectly. The following worked:
con = mysql.connector.connect(user=user,
password=password,
host=host,
database=database,
use_pure=True,
charset='latin1')
The charset parameter might not be necessary, depending on the DB.
I want to add data inside the 'tags' key in this YAML file:
# Generated by Chef, local modifications will be overwritten
---
env: nonprod
api_key: 5d9k8h43124g40j9ocmnb619h762d458
hostname: ''
bind_host: localhost
additional_endpoints: {}
tags:
- application_name:testin123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
dogstatsd_non_local_traffic: false
histogram_aggregates:
- max
- median
- avg
- count
which should be like this,
tags:
- application_name:testing123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
- managed_by:Teams
So far I have created this script, but it appends the data at the end of the file, which is not the solution:
import yaml

data = {
    'tags': {
        '- managed_by': 'Teams'
    }
}

with open('test.yml', 'a') as outfile:
    yaml.dump(data, outfile, indent=2)
I figured it out like this, and it is working:
import yaml
from yaml.loader import SafeLoader

with open('test.yaml', 'r') as f:
    data = yaml.load(f, Loader=SafeLoader)

data['tags'].append('managed_by:teams')
print(data['tags'])

with open('test.yaml', 'w') as write:
    yaml.dump(data, write, sort_keys=False, default_flow_style=False)
and the output was like this,
['application_name:testin123', 'cloud_supportteam:eagles', 'technical_applicationid:0000', 'application:default', 'lifecycle:default', 'function:default-api-key', 'managed_by:teams']
and the test.yaml file was updated,
tags:
- application_name:testing123
- cloud_supportteam:eagles
- technical_applicationid:0000
- application:default
- lifecycle:default
- function:default-api-key
- managed_by:teams
I send my logger output to 'seq' via the seqlog module. I have:
_log_format = "%(asctime)s - [%(levelname)s] - request_id=%(request_id)s - %(name)s - (%(filename)s).%(funcName)s(%(lineno)d) - %(message)s"
logger.info("Hello, {name}", name="world")
As a result, in 'seq' I get only the bare message. None of the fields from "%(asctime)s - [%(levelname)s] - request_id=%(request_id)s - %(name)s - (%(filename)s).%(funcName)s(%(lineno)d) - %(message)s" were added in 'seq'.
The stream output, on the other hand, is fine:
2021-08-20 14:40:24,244 - [INFO] - request_id=None - app - (__init__.py).create_app(43) - Hello, world
I send logs to 'seq' this way:
import logging

import seqlog

seqlog.log_to_seq(
    server_url="http://localhost:5341/",
    api_key="My API Key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,  # seconds
    override_root_logger=True,
)
I'm running a program that works with requests. I need to write the response time to my database. This code works fine, but it updates my DB too often. How can I make the index() method wait for 60 seconds? time.sleep(60) doesn't work here.
import datetime

import mysql.connector
from flask import Flask, render_template, request

app = Flask(__name__)
app.config['SECRET_KEY'] = 'secret!'

dbconn = mysql.connector.connect(host="myhost",
                                 database='mydb',
                                 user='root', password='12345')

@app.route('/', methods=['GET', 'POST'])
def index():
    if request.method == 'POST':
        cursor = dbconn.cursor()
        time_check = datetime.datetime.now()
        query = "update mytable set response_time=%s where service_name = 'my_service'"
        param = time_check
        cursor.execute(query, (param,))
        print("sent query")
        dbconn.commit()
        cursor.close()
        # time.sleep(60)
    return render_template('index.html')

if __name__ == '__main__':
    app.run(host="myhostaddress", port=1010)
As already suggested in the comments, using a dedicated task queue would probably be the best solution. If you don't want to pull in any dependencies, though, you might adapt this simple example:
from queue import Queue
import random
from threading import Thread
import time

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    n = random.randint(0, 100)
    q.put(n)
    return '%s\n' % n

def worker():
    while True:
        item = q.get()
        if item is None:
            break
        print('Processing %s' % item)  # do the work, e.g. update the database
        time.sleep(1)
        q.task_done()

if __name__ == '__main__':
    q = Queue()
    t = Thread(target=worker)
    t.start()
    app.run(host='0.0.0.0')
    q.join()
    q.put(None)
    t.join()
And the test:
pasmen@nyx:~$ for x in 1 2 3 4 5 6 7 8 9 10; do curl http://0.0.0.0:5000; done
1
90
79
25
45
50
77
25
36
99
Output:
(venv) pasmen@nyx:~/tmp/test$ python test.py
* Serving Flask app "test" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
Processing 1
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [04/Apr/2019 14:57:57] "GET / HTTP/1.1" 200 -
Processing 90
Processing 79
Processing 25
Processing 45
Processing 50
Processing 77
Processing 25
Processing 36
Processing 99
As you can see, the HTTP requests are answered immediately, while there is a 1-second delay between the items actually processed by the worker.