EDIT: I think I've figured out a solution using subprocess.Popen with separate .py files for each stream being monitored. It's not pretty, but it works.
I'm working on a script to monitor a streaming site for several different accounts and to record them when they come online. I am using the livestreamer package to download a stream once it is live, but the problem is that the program will only record one stream at a time. The program loops through a list and, if a stream is online, starts recording it with subprocess.call(["livestreamer"... The problem is that once the program starts recording, it stops going through the loop and doesn't check or record any of the other livestreams. I've tried using Process and Thread, but neither seems to work. Any ideas?
Code below; the asterisks are not literally part of the code.
import os, urllib.request, time, subprocess, datetime, random

status = {
    "********": False,
    "********": False,
    "********": False
}

def gen_name(tag):
    return stuff  # <<Bunch of unimportant code stuff here.

def dl(tag):
    subprocess.call(["livestreamer", "********.com/" + tag, "best", "-o", ".\\tmp\\" + gen_name(tag)])

def loopCheck():
    while True:
        for tag in status:
            data = urllib.request.urlopen("http://*******.com/" + tag + "/").read().decode()
            if data.find(".m3u8") != -1:
                print(tag + " is online!")
                if status[tag] == False:
                    status[tag] = True
                    dl(tag)
            else:
                print(tag + " is offline.")
                status[tag] = False
        time.sleep(15)

loopCheck()
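To expand on the EDIT above: the root of the problem is that subprocess.call blocks until livestreamer exits, while subprocess.Popen returns immediately, so the check loop can keep running. A minimal single-file sketch of that idea, reusing the status and gen_name from the code above (recordings and reap_finished are illustrative names, not part of the original code):

import subprocess

recordings = {}  # tag -> Popen handle for an in-progress recording

def dl(tag):
    # Popen starts livestreamer and returns immediately instead of blocking
    recordings[tag] = subprocess.Popen(
        ["livestreamer", "********.com/" + tag, "best",
         "-o", ".\\tmp\\" + gen_name(tag)])

def reap_finished():
    # call this on each pass of the loop: mark a stream offline again
    # once its recorder process has exited
    for tag, proc in list(recordings.items()):
        if proc.poll() is not None:  # None means still running
            status[tag] = False
            del recordings[tag]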
Hello. I don't stream myself, but I wanted to make a video about people's reactions when they are suddenly hit with a lot of viewers (this would be accompanied by a chat bot too, and I'd tell them what it was as well as ask for usage permission). So I thought it would be fun to look at view bots for Twitch and found one online (code below). I installed streamlink via pip and the Windows executable, and it seems to run ("found matching plugin twitch for URL 'stream link'"), but it doesn't actually increase viewership, and I can only assume this is because it's not actually opening the VLC instances. So here I am, wondering what I need to do. I have the latest version of Python, and git isn't trying to download and install anything, so I'm assuming streamlink is all I need, but I'm confused about why it wouldn't be opening the VLC instances. Any help is most appreciated.
Edit: I do have the proxies, and I'm using a small number of them to try to get it working first; I'll buy more later, once it works!
import concurrent.futures, time, random, os

# desired channel url
channel_url = 'https://www.twitch.tv/StreamerName'
# number of viewer bots
botcount = 10
# path to proxies.txt file (raw string so backslashes are not treated as escapes)
proxypath = r"C:\Proxy\proxy.txt"
# path to vlc
playerpath = r'"C:\Program Files\VideoLAN\VLC\vlc.exe"'

# takes proxies from proxies.txt and returns them as a list
def create_proxy_list(proxyfile, shared_list):
    with open(proxyfile, 'r') as file:
        proxies = [line.strip() for line in file]
    for proxy in proxies:
        shared_list.append(proxy)
    return shared_list

# takes random proxies from the proxies list and adds them to another list
def randproxy(proxylist, botcount):
    randomproxylist = list()
    for _ in range(botcount):
        proxy = random.choice(proxylist)
        randomproxylist.append(proxy)
        proxylist.remove(proxy)
    return randomproxylist

# launches a viewer bot after a short delay
def launchbots(proxy):
    time.sleep(random.randint(5, 10))
    os.system(f'streamlink --player={playerpath} --player-no-close --player-http --hls-segment-timeout 30 --hls-segment-attempts 3 --retry-open 1 --retry-streams 1 --retry-max 1 --http-stream-timeout 3600 --http-proxy {proxy} {channel_url} worst')

# calls the launchbots function asynchronously
def main(randomproxylist):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        executor.map(launchbots, randomproxylist)

if __name__ == "__main__":
    main(randproxy(create_proxy_list(proxypath, shared_list=list()), botcount))
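One way to find out why VLC never opens: os.system inside a thread pool makes streamlink's error messages easy to miss. A sketch of a single foreground test run under the same assumptions as the script above (debug_streamlink is a hypothetical helper name; channel_url comes from the code above):

import subprocess

def debug_streamlink(proxy):
    # run a single streamlink instance in the foreground and capture its
    # output, so player-path or proxy errors are actually visible
    cmd = ['streamlink',
           '--player', r'C:\Program Files\VideoLAN\VLC\vlc.exe',
           '--http-proxy', proxy,
           channel_url, 'worst']
    result = subprocess.run(cmd, capture_output=True, text=True)
    print('exit code:', result.returncode)
    print(result.stdout)
    print(result.stderr)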
I'm writing this post because I have not found a solution for my specific case. I am referring to this article, which, however, did not work for me on Windows 10 version 1909.
I wrote a script, "python_code_a.py", whose job is to upload, one at a time, all the images contained in a local folder to a converter server, and to download them, again one at a time, from the server to another folder on my PC. How the script behaves depends on the server, which is public and not owned by me, so roughly every two and a half hours the script crashes with an unexpected connection error. Obviously I can't spend all day watching the Python shell and stepping in whenever the script stops.
As described in the article above, I wrote a second file named "python_code_b.py", whose job is to restart "python_code_a.py" whenever it has stopped. When I try to run it from the "python.exe" command prompt, however, it only responds to the input with "...", nothing else.
I attach a general example of "python_code_a.py":
import requests
import urllib.request

processnumber = 0
photosindex = 100000
lastindex = 100000  # placeholder: 100000 + the number of photos in the folder

while photosindex < lastindex:
    photo = 'your_path' + str(photosindex) + '.png'
    path = 'your_path' + str(photosindex) + '.jpg'
    print('It\'s converting: ' + photo)
    r = requests.post(
        "converter_site",
        files={
            'image': open(photo, 'rb'),
        },
        headers={'api-key': 'your_api_key'}
    )
    result = r.json()
    json_output = result['output_url']
    urllib.request.urlretrieve(json_output, path)
    print('Finished process number: ' + str(processnumber))
    photosindex = photosindex + 1
    processnumber = processnumber + 1
    print()

print('---------------------------------------------------')
print('Every pending job has been completed.')
print()
How can I solve it?
You can use error capturing:
while photosindex < lastindex:
    try:
        # your upload/download code for one photo goes here
        ...
    except Exception as e:
        print("Something went wrong: " + str(e))
https://www.w3schools.com/python/python_try_except.asp
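Catching the exception keeps one bad connection from killing the whole run; if you also want the script to retry the same photo rather than skip it, a small retry wrapper works. A minimal sketch (convert_with_retries, one_photo, and the retry/delay values are illustrative, not from the question):

import time

def convert_with_retries(one_photo, max_retries=5):
    # one_photo is a function that uploads and downloads a single image
    for attempt in range(max_retries):
        try:
            return one_photo()
        except Exception as e:
            print('Attempt', attempt + 1, 'failed:', e)
            time.sleep(30)  # give the connection time to recover
    raise RuntimeError('Giving up after ' + str(max_retries) + ' attempts')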
OK, I have been trying to think of or find a solution myself for quite some time, but everything I attempt either turns out not to be a solution or is too complex for me to attempt without knowing it will work.
I have a Discord bot, made in Python. The bot's purpose is to parse a blog for HTML links; when a new HTML link is posted, it posts the link into Discord.
I am using a text file to save the latest link, and I parse the website every 30 seconds to check whether a new link has been posted, by comparing the link at position 0 in the array to the link in the text file.
Now, I have managed to host my bot on Heroku with some success, but I have since learned that Heroku cannot persist changes to my text file: it pulls the code from GitHub, and any changes are reverted when the dyno restarts (roughly every 24 hours).
Since learning this I have tried hosting the text file in an AWS S3 bucket, but as far as I can tell it can add and delete files, not modify existing ones, and it can only write new files from existing files on my system. If I could do that, I wouldn't need it in the first place, since I could just modify the file on my own system and not host it anywhere.
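A side note on the S3 point: uploading to an existing key overwrites the object in place, and boto3 can upload straight from memory rather than from a file on disk. A minimal sketch, with made-up bucket and key names:

import boto3

s3 = boto3.client('s3')  # credentials come from the environment

def save_link(link):
    # put_object overwrites the object if the key already exists
    s3.put_object(Bucket='my-bot-state', Key='last_update.txt',
                  Body=link.encode('utf-8'))

def load_link():
    obj = s3.get_object(Bucket='my-bot-state', Key='last_update.txt')
    return obj['Body'].read().decode('utf-8')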
I am looking for hopefully simple solutions/suggestions.
I am open to changing the hosting/whatever is needed, however I cannot pay for hosting.
Thanks in advance.
EDIT
So, I am editing this because I have a working solution, thanks to a suggestion in the comments below.
The solution is to have my Python bot commit the new file to GitHub, and then use that committed file's content as the reference.
import base64
import os

from github import Github
from github import InputGitTreeElement

user = os.environ.get("GITHUB_USER")
password = os.environ.get("GITHUB_PASSWORD")

g = Github(user, password)
repo = g.get_user().get_repo('YOUR REPO NAME HERE')
file_list = [
    'last_update.txt'
]
file_names = [
    'last_update.txt',
]

def git_commit():
    commit_message = 'News link update'
    master_ref = repo.get_git_ref('heads/master')
    master_sha = master_ref.object.sha
    base_tree = repo.get_git_tree(master_sha)
    element_list = list()
    for i, entry in enumerate(file_list):
        with open(entry) as input_file:
            data = input_file.read()
        if entry.endswith('.png'):
            # unused here; binary files would also need to be read in binary mode
            data = base64.b64encode(data)
        element = InputGitTreeElement(file_names[i], '100644', 'blob', data)
        element_list.append(element)
    tree = repo.create_git_tree(element_list, base_tree)
    parent = repo.get_git_commit(master_sha)
    commit = repo.create_git_commit(commit_message, tree, [parent])
    master_ref.edit(commit.sha)
I then have a method called 'check_latest_link', which requests the raw version of the file from my GitHub repo and returns its contents as a string, which I assign to my variable 'last_saved_link':
import requests

def check_latest_link():
    res = requests.get('[YOUR GITHUB PAGE LINK - RAW FORMAT]')
    content = res.text
    return content
Then in my main method I have the following:
@client.event
async def task():
    await client.wait_until_ready()
    print('Running')
    while True:
        channel = discord.Object(id=channel_id)
        # parse_links() is a method to parse HTML links from a website
        news_links = parse_links()
        last_saved_link = check_latest_link()
        print('Running')
        await asyncio.sleep(5)
        # below compares the parsed HTML to the saved reference;
        # if they are not the same, there is a new link to post.
        if last_saved_link != news_links[0]:
            # the 3 methods below (read_file, delete_contents and save_to_file)
            # are methods that simply do what they suggest to a text file
            # specified elsewhere
            read_file()
            delete_contents()
            save_to_file(news_links[0])
            # then we have the git_commit previously shown.
            git_commit()
            # after git_commit, I was having an issue with the github reference
            # not updating for a few minutes, so the bot posts the message and
            # then goes to sleep for 500 seconds; this stops the bot from
            # posting duplicate messages. Because this is an async function,
            # it will not stop other async functions from executing.
            await client.send_message(channel, news_links[0])
            await asyncio.sleep(500)
I am posting this so I can close the thread with an "Answer" - please refer to post edit.
I am trying to test this demo program from Lynda using Python 3. I am using PyCharm as my IDE. I already added and installed the requests package, but when I run the program it finishes cleanly with the message "Process finished with exit code 0" and does not show any output from the print statements. Where am I going wrong?
import urllib.request  # instead of urllib2 like in Python 2.7
import json

def printResults(data):
    # Use the json module to load the string data into a dictionary
    theJSON = json.loads(data)

    # now we can access the contents of the JSON like any other Python object
    if "title" in theJSON["metadata"]:
        print(theJSON["metadata"]["title"])

    # output the number of events, plus the magnitude and each event name
    count = theJSON["metadata"]["count"]
    print(str(count) + " events recorded")

    # for each event, print the place where it occurred
    for i in theJSON["features"]:
        print(i["properties"]["place"])

    # print the events that only have a magnitude greater than 4
    for i in theJSON["features"]:
        if i["properties"]["mag"] >= 4.0:
            print("%2.1f" % i["properties"]["mag"], i["properties"]["place"])

    # print only the events where at least 1 person reported feeling something
    print("Events that were felt:")
    for i in theJSON["features"]:
        feltReports = i["properties"]["felt"]
        if feltReports is not None:
            if feltReports > 0:
                print("%2.1f" % i["properties"]["mag"], i["properties"]["place"], " reported " + str(feltReports) + " times")

# Open the URL and read the data
urlData = "http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"
webUrl = urllib.request.urlopen(urlData)
print(webUrl.getcode())
if webUrl.getcode() == 200:
    data = webUrl.read()
    data = data.decode("utf-8")  # in Python 3.x we need to explicitly decode the response to a string

    # print out our customized results
    printResults(data)
else:
    print("Received an error from server, cannot retrieve results " + str(webUrl.getcode()))
Not sure if you left this out on purpose, but this script isn't actually executing any code beyond the imports and function definition. Assuming you didn't leave it out on purpose, you would need the following at the end of your file.
if __name__ == '__main__':
    data = ""  # your data
    printResults(data)
The check on __name__ equaling "__main__" is just so your code only executes when the file is explicitly run. To always run your printResults(data) function when the file is accessed (like, say, if it's imported into another module) you could just call it at the bottom of your file like so:
data = "" # your data
printResults(data)
I had to restart the IDE after installing the module. I just realized this and tried it now with "Run as Admin"; strangely, it seems to work now. But I'm not sure if it was a temporary error, since even without a restart it was able to detect the module and its methods.
Your comments re: having to restart your IDE make me think that PyCharm might not automatically detect newly installed Python packages. This SO answer seems to offer a solution.
I have created a data transfer program using Python and the pyserial module. I am currently using it to send a text file over a radio device between a Raspberry Pi and my computer. The problem is that the file I am trying to send, which contains 5000 lines of text and is 93.0 KB in size, takes about a full minute to transfer. I need this to be done within seconds. Here is the code; I am sure there are many optimizations to be made with file reading and such that would increase the transfer speed. My radio device has a data rate of 250 kbps, which is obviously not being reached. Any help would be greatly appreciated.
Code to send (located on the Raspberry Pi):
def s_file():
    print 'start'
    readline = lambda: iter(lambda: ser.read(1), "\n")
    name = "".join(readline())
    print name
    file_loc = directory_name + name
    sleep(1)
    print('Waiting for command from client to send file...')
    while "".join(readline()) != "<<SENDFILE>>":
        pass
    with open(file_loc) as FileObj:
        for lines in FileObj:
            ser.write(lines)
    ser.write("\n<<EOF>>\n")
    print 'done'
Code to receive (on my laptop):
def r_f_bird(self):  # send command to bird to start func
    if ser_open == True:
        readline = lambda: iter(lambda: ser.read(1), "\n")
        NAME = self.tb2.get()
        ser.write('/' + NAME)
        print NAME
        sleep(0.5)
        ser.write('\n<<SENDFILE>>\n')
        start = clock()
        with open(str(NAME), "wb") as outfile:
            while True:
                line = "".join(readline())
                if line == "<<EOF>>":
                    break
                print >> outfile, line
        elapsed = clock() - start
        print elapsed
        ser.flush()
    else:
        pass
Perhaps the overhead of ser.read(1) is slowing things down. It seems like you have a \n at the end of each line, so try using pySerial's readline() method rather than rolling your own. Changing line = "".join(readline()) to line = ser.readline() ought to do it. You will also need to change your loop end condition to == "<<EOF>>\n".
You may also need to add a ser.flush() on the writing side.
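A sketch of the receive loop with those changes applied, assuming the same ser, NAME, and <<EOF>> marker as in the question's code:

with open(str(NAME), "wb") as outfile:
    while True:
        line = ser.readline()  # reads one "\n"-terminated line at a time
        if line == "<<EOF>>\n":
            break
        outfile.write(line)  # line already ends in "\n", so no extra newline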