I'm trying to use the catch_up() function to get all file updates on boot, but every time I run my code, only half of the file is downloaded, and sometimes the file is completely empty.
However, when I run with iter_messages instead, I manage to download everything perfectly.
Help!?
@client.on(events.NewMessage)
async def new_messages(event):
    if hasattr(event.message.peer_id, "channel_id"):
        print("One of the channels")
    else:
        if hasattr(event.message.peer_id, "chat_id"):
            print("Type: chat")
            dialog = int(event.message.peer_id.chat_id)
        else:
            print("Type: conversation")
            dialog = int(event.message.peer_id.user_id)

    # getting the files
    path = ""
    if hasattr(event.media, "document"):
        print("================\n", event.message.id, "\n================")
        path = await client.download_media(event.media, file="arquivos_chimera/")
        print(event)
    if hasattr(event.media, "photo"):
        print("================\n", event.message.id, "\n================")
        path = await client.download_media(event.media, file="imagens_chimera/")
        print(event)

    # getting the Telegram date
    data = str(event.message.date)

    # text of the message
    temp_message = await async_ajuste_SQL(event.message.message)
    if path != "":
        temp_message = path + " - " + temp_message

    # who sent the message
    if event.message.from_id is None:
        from_ = event.message.peer_id.user_id
    else:
        from_ = event.message.from_id.user_id

    # note: prefer a parameterized query here to avoid SQL injection
    cur.execute(f"insert into tabela_de_mensagens values ({event.message.id}, {dialog}, {from_}, '{data}', '{temp_message}', 0);")
    con.commit()

async def main():
    await client.catch_up()
NOTE: the problem only occurs with images; delete, edit and new message updates come through perfectly.
So, after some testing, I realized that the problem was that I was using an event handler without a keep-alive function, i.e., the event handler only works while the main function is running. If you try to run the event handler with catch_up alone, it will get the first updates but stop shortly after that (hence why my image files were created, but never completed).
To get a solution, you can look at the following links:
https://github.com/LonamiWebs/Telethon/issues/1534
https://github.com/LonamiWebs/Telethon/issues/3146
https://docs.python.org/3.8/library/asyncio-task.html#asyncio.wait
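The underlying pattern is worth seeing in isolation: if the main coroutine returns, the event loop shuts down and any in-flight background work is cancelled; keeping the main coroutine alive (with Telethon you would typically await client.run_until_disconnected() after catch_up) lets the handlers finish. A minimal plain-asyncio sketch of that difference (names like background_handler are hypothetical stand-ins, not Telethon API):

```python
import asyncio

results = []

async def background_handler():
    # stands in for an event handler that needs time to finish its work
    for i in range(3):
        await asyncio.sleep(0.05)
        results.append(i)

async def main_without_keepalive():
    # schedules the handler but returns immediately: the loop shuts
    # down and the handler is cancelled before it appends anything
    asyncio.ensure_future(background_handler())

async def main_with_keepalive():
    task = asyncio.ensure_future(background_handler())
    # keep the main coroutine alive until the background work is done
    await asyncio.wait([task])

asyncio.run(main_without_keepalive())
incomplete = len(results)   # 0: the handler never got to run to completion

results.clear()
asyncio.run(main_with_keepalive())
complete = len(results)     # 3: the handler finished all its steps
```

This is exactly why the image downloads were cut off mid-file: the download coroutine was cancelled when main returned.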
I have a Python Rumps application that monitors a folder for new files using the rumps.Timer(...) feature. When it sees new files, it transfers them offsite (to AWS S3) and runs a GET request. Sometimes that transfer and GET request can take over 1 second, and sometimes up to about 5 seconds. During this time, the application is frozen and can't do anything else.
Here is the current code:
import os

import requests
import rumps

# self.process_folder and self.s3_client are assumed to be set up elsewhere

class MyApp(rumps.App):
    def __init__(self):
        super(MyApp, self).__init__("App", quit_button="Stop")
        self.process_timer = rumps.Timer(self.my_tick, 1)
        self.process_timer.start()

    def my_tick(self, sender):
        named_set = set()
        for file in os.listdir(self.process_folder):
            fullpath = os.path.join(self.process_folder, file)
            if os.path.isfile(fullpath) and fullpath.endswith(('.jpg', '.JPG')):
                named_set.add(file)
        if len(named_set) == 0:
            self.files_in_folder = set()
        new_files = sorted(named_set - self.files_in_folder)
        if len(new_files) > 0:
            for new_file in new_files:
                # upload file
                self.s3_client.upload_file(
                    new_file,
                    '##bucket##',
                    '##key##'
                )
                # GET request
                return requests.get(
                    '##url##',
                    params={'file': new_file}
                )
        self.files_in_folder = named_set

if __name__ == "__main__":
    MyApp().run()
Is there a way to have this transfer and GET request run as a background process?
I've tried using subprocess with the transfer code in a separate script:
subprocess.Popen(['python3', 'transferscript.py', newfile])
and it doesn't appear to do anything. That line works if I run it outside of rumps, but once it's inside rumps, it will not run.
Edit: code provided
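One generic pattern for this (a sketch, not rumps-specific; slow_transfer and worker are hypothetical names standing in for the S3 upload and GET request) is to hand the slow work to a background worker thread through a queue, so the timer callback only enqueues the file name and returns immediately:

```python
import queue
import threading

work_queue = queue.Queue()
processed = []

def slow_transfer(path):
    # stands in for the S3 upload plus the GET request
    processed.append(path)

def worker():
    # runs in the background, so the timer callback never blocks
    while True:
        path = work_queue.get()
        if path is None:        # sentinel value shuts the worker down
            break
        slow_transfer(path)
        work_queue.task_done()

thread = threading.Thread(target=worker, daemon=True)
thread.start()

# inside my_tick you would only do this for each new file:
for new_file in ["a.jpg", "b.jpg"]:
    work_queue.put(new_file)

work_queue.join()               # wait here only for demonstration
work_queue.put(None)
thread.join()
```

The timer tick then stays fast regardless of how long an individual transfer takes, since the blocking I/O happens on the worker thread.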
My task is to call a Photoshop function through a Telegram bot. To do this, I take user input from the Telegram chat (for example, some text to change inside the PSD), but when I call this function, my code gives the error "Please check if you have Photoshop installed correctly." However, if I call the same function not through the bot, everything works fine. What could be the problem?
What I have already tried: reinstalling Photoshop, installing a newer version of Photoshop, adding the path to the Windows environment variables. Running it through pywin32 is not an option for me.
Here I take the input and pass it as an argument to the function:
@bot.message_handler(content_types=['text'])
def func(message):
    if message.text == '/ph':
        user_info = {'test': 'exampletext'}
        test_edit_text(user_info)
Here is the function; it just changes the text:
def test_edit_text(info_from):
    try:
        psApp = ps.Application()
        psApp.Open(r"mypath\first.psd")
        doc = psApp.Application.ActiveDocument
        print(info_from['test'])
        text_from_user = info_from['test']
        layer1init = doc.ArtLayers["layer1"]
        text_new_layer1 = layer1init.TextItem
        text_new_layer1.contents = f"{text_from_user.upper()}"
        options = ps.JPEGSaveOptions(quality=5)
        jpg = r'mypath\photo.jpg'
        doc.saveAs(jpg, options, True)
    except Exception as e:
        print(e)
If we call the test_edit_text() function directly, not through the bot, everything works.
How can I check whether a thread has completed in Python? I have this code:
import asyncio
import threading

from deepgram import Deepgram

async def transcribe():
    # Initializes the Deepgram SDK
    global response
    dg_client = Deepgram(DEEPGRAM_API_KEY)
    global filename
    global PATH_TO_FILE
    PATH_TO_FILE = filename
    with open(filename, 'rb') as audio:
        source = {'buffer': audio, 'mimetype': MIMETYPE}
        options = {'punctuate': True, 'language': 'en-US'}
        print('Requesting transcript...')
        print('Your file may take up to a couple minutes to process.')
        print('While you wait, did you know that Deepgram accepts over 40 audio file formats? Even MP4s.')
        print('To learn more about customizing your transcripts check out developers.deepgram.com')
        response = await dg_client.transcription.prerecorded(source, options)
        print(response)
        print(response['results']['channels'][0]['alternatives'][0]['transcript'])

def runTranscribe():
    asyncio.run(transcribe())

thread = threading.Thread(
    target=runTranscribe
)
I found the is_alive() method, but it tells me whether the thread is alive, not whether it has finished. So it would be great if someone could help me. I'm using Python 3.10. Thank you!
while True:
    if thread.is_alive():
        pass
    else:
        # do something
        break
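For what it's worth, once a thread has been started, is_alive() returning False does mean it has completed, so `not thread.is_alive()` is the "finished" check; and thread.join() blocks until that point without the busy-wait loop above. A small self-contained sketch:

```python
import threading
import time

done = []

def work():
    # stands in for the transcription call
    time.sleep(0.1)
    done.append(True)

thread = threading.Thread(target=work)
thread.start()

print(thread.is_alive())       # True while work() is still sleeping

thread.join()                  # blocks until the thread finishes

finished = not thread.is_alive()
print(finished)                # True: started + not alive == completed
```

join() also accepts a timeout argument if you only want to wait a bounded amount of time before checking again.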
OK, I have been trying to think of or find a solution for quite some time, but everything I attempt either turns out not to be a solution or is too complex for me to attempt without knowing it will work.
I have a Discord bot, made in Python. The bot's purpose is to parse a blog for HTML links; when a new link is posted, it posts that link into Discord.
I am using a text file to save the latest link, and I parse the website every 30 seconds to check whether a new link has been posted, by comparing the link at position 0 in the array to the link in the text file.
Now, I have managed to host my bot on Heroku with some success, but I have since learned that Heroku cannot persist changes to my text file, since it pulls the code from GitHub; any changes are reverted after ~24 hours.
Since learning this I have tried hosting the text file in an AWS S3 bucket, but it turns out S3 can add and delete files, not modify existing ones in place, and can only write new files from existing files on my system. So if I could do that, I wouldn't need to, because I could just modify the file on my own system and not host it anywhere.
I am looking for hopefully simple solutions/suggestions.
I am open to changing the hosting or whatever else is needed, but I cannot pay for hosting.
Thanks in advance.
EDIT
So, I am editing this because I have a working solution thanks to a suggestion commented below.
The solution is to have my Python bot commit the new file to GitHub, and then use that committed file's content as the reference.
import base64
import os

from github import Github
from github import InputGitTreeElement

user = os.environ.get("GITHUB_USER")
password = os.environ.get("GITHUB_PASSWORD")
g = Github(user, password)
repo = g.get_user().get_repo('YOUR REPO NAME HERE')
file_list = [
    'last_update.txt'
]
file_names = [
    'last_update.txt',
]

def git_commit():
    commit_message = 'News link update'
    master_ref = repo.get_git_ref('heads/master')
    master_sha = master_ref.object.sha
    base_tree = repo.get_git_tree(master_sha)
    element_list = list()
    for i, entry in enumerate(file_list):
        with open(entry) as input_file:
            data = input_file.read()
        if entry.endswith('.png'):
            data = base64.b64encode(data)
        element = InputGitTreeElement(file_names[i], '100644', 'blob', data)
        element_list.append(element)
    tree = repo.create_git_tree(element_list, base_tree)
    parent = repo.get_git_commit(master_sha)
    commit = repo.create_git_commit(commit_message, tree, [parent])
    master_ref.edit(commit.sha)
I then have a method called check_latest_link, which fetches the raw version of the file from my GitHub repo, and returns its contents as a string, which I assign to the variable last_saved_link.
import requests

def check_latest_link():
    res = requests.get('[YOUR GITHUB PAGE LINK - RAW FORMAT]')
    content = res.text
    return content
Then in my main method I have the following:
@client.event
async def task():
    await client.wait_until_ready()
    print('Running')
    while True:
        channel = discord.Object(id=channel_id)
        # parse_links() is a method to parse HTML links from a website
        news_links = parse_links()
        last_saved_link = check_latest_link()
        print('Running')
        await asyncio.sleep(5)
        # below compares the parsed HTML to the saved reference;
        # if they are not the same, there is a new link to post.
        if last_saved_link != news_links[0]:
            # the 3 methods below (read_file, delete_contents and save_to_file)
            # simply do what they suggest to a text file specified elsewhere
            read_file()
            delete_contents()
            save_to_file(news_links[0])
            # then we have the git_commit previously shown.
            git_commit()
            # after git_commit, I was having an issue with the GitHub reference
            # not updating for a few minutes, so the bot posts the message and
            # then sleeps for 500 seconds; this stops the bot from posting
            # duplicate messages. Because this is an async function, it will
            # not stop other async functions from executing.
            await client.send_message(channel, news_links[0])
            await asyncio.sleep(500)
I am posting this so I can close the thread with an "Answer" - please refer to the post edit.
EDIT: I think I've figured out a solution using subprocess.Popen with separate .py files for each stream being monitored. It's not pretty, but it works.
I'm working on a script that monitors a streaming site for several different accounts and records them when they come online. I am using the livestreamer package to download a stream when it comes online, but the problem is that the program will only record one stream at a time. The program loops through a list and, if a stream is online, starts recording with subprocess.call(["livestreamer", ...]). The problem is that once the program starts recording, it stops going through the loop and doesn't check or record any of the other livestreams. I've tried using Process and Thread, but neither seems to work. Any ideas?
Code below. Asterisks are not literally part of the code.
import os
import urllib.request
import time
import subprocess
import datetime
import random

status = {
    "********": False,
    "********": False,
    "********": False
}

def gen_name(tag):
    return stuff  # << bunch of unimportant code stuff here

def dl(tag):
    subprocess.call(["livestreamer", "********.com/" + tag, "best", "-o", ".\\tmp\\" + gen_name(tag)])

def loopCheck():
    while True:
        for tag in status:
            data = urllib.request.urlopen("http://*******.com/" + tag + "/").read().decode()
            if data.find(".m3u8") != -1:
                print(tag + " is online!")
                if status[tag] == False:
                    status[tag] = True
                    dl(tag)
            else:
                print(tag + " is offline.")
                status[tag] = False
        time.sleep(15)

loopCheck()
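Since subprocess.call blocks until the recorder exits, one sketch of a fix (under the assumption that each recording is independent; record_stream below is a hypothetical stand-in for the blocking dl call) is to start each recording in its own thread, so the checking loop is never held up:

```python
import threading
import time

recordings = []

def record_stream(tag):
    # stands in for the blocking subprocess.call(["livestreamer", ...])
    time.sleep(0.1)
    recordings.append(tag)

def start_recording(tag):
    # daemon thread: the blocking call runs in the background, so the
    # loop that checks the other streams keeps going immediately
    thread = threading.Thread(target=record_stream, args=(tag,), daemon=True)
    thread.start()
    return thread

threads = [start_recording(tag) for tag in ["alice", "bob", "carol"]]
for t in threads:
    t.join()   # the real loop would not join; shown here for demonstration
```

In loopCheck you would call start_recording(tag) instead of dl(tag); all three streams then record concurrently while the 15-second polling loop continues.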