I have been playing around with captchas, multiprocessing and Flask.
What have I done so far:
I have built my own multiprocessing setup where I enter how many tasks/processes I want to run for this script. If I enter 3, it spawns 3 worker processes, and that part works fine.
Opening webbrowser.open('http://Thrillofit.baller.com:5000/solve') also works, and I get a captcha that I am able to solve.
So the captcha page works on its own, and so does the multiprocessing:
What's the issue then?
The issue is that whenever I run the program it gets stuck at the solving step: I can solve as many captchas as I want, but the program never continues, and I can't figure out why. A clip of what is happening: https://i.gyazo.com/d5f183471f20be5eda6be939d255a157.mp4
In the video you can see that I solve the captcha but nothing happens in the program; it just seems stuck.
My guess is that it is an issue between multiprocessing and the captcha/Flask part, but I can't spot it and I'm completely blind to it at this point. Maybe someone else can see the problem?
The goal is that whenever I solve a captcha, a token should be printed out (inside the passwd() function), but nothing prints and the program pretty much just gets stuck.
import requests, time, json, re, sys, os, timeit, random, multiprocessing, traceback, threading, names, logging, webbrowser, _thread
from datetime import datetime
from colorama import Fore
from flask import Flask, render_template, request, redirect

tokens = []

def captureToken(token):
    # Store the solved captcha token together with an expiry timestamp
    expiry = datetime.now().timestamp() + 115
    tokenDict = {
        'expiry': expiry,
        'token': token
    }
    tokens.append(tokenDict)
    return

def sendToken():
    # Busy-wait until a token is available, then hand it out
    while not tokens:
        pass
    token = tokens.pop(0)
    return token['token']

def manageTokens():
    # Periodically drop expired tokens
    while True:
        for item in tokens:
            if item['expiry'] < datetime.now().timestamp():
                tokens.remove(item)
        time.sleep(5)

app = Flask(__name__)
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)

@app.route('/', methods=['GET'])
def base():
    return redirect("http://Thrillofit.baller.com:5000/solve", code=302)

@app.route('/solve', methods=['GET'])
def solve():
    sitekey = "6LdyFRkUAAAAAF2YmQ9baZ6ytpVnbVSAymVpTXKi"
    return render_template('index.html', sitekey=sitekey)

@app.route('/submit', methods=['POST'])
def submit():
    token = request.form.get('g-recaptcha-response', '')
    captureToken(token)
    return redirect("http://Thrillofit.baller.com:5000/solve", code=302)

def info(thread):
    global prod
    prod = int(thread) + 1
    passwd()

def passwd():
    lilcapttoken = sendToken()
    print(lilcapttoken)

def main():
    webbrowser.open('http://Thrillofit.baller.com:5000/solve')
    user_input = 0
    while True:
        try:
            user_input = int(input(Fore.WHITE + 'How many tasks do you wanna run? [NUMBERS] \n' + Fore.RESET))
        except ValueError:
            print(Fore.RED + "Stop being stupid" + Fore.RESET)
            continue
        else:
            HowManyThread = user_input
            print()
            i = 0
            jobs = []
            for i in range(HowManyThread):
                p = multiprocessing.Process(target=info, args=(str(i),))
                jobs.append(p)
                time.sleep(.5)
                p.start()
            for p in jobs:
                p.join()
            sys.exit()

if __name__ == '__main__':
    try:
        _thread.start_new_thread(app.run, ())
        _thread.start_new_thread(manageTokens, ())
        main()
    except Exception as e:
        print(e)
        traceback.print_exc()
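In case it helps to see what I have in mind: my hunch is that each multiprocessing.Process gets its own copy of the tokens list, so whatever the Flask thread appends in the parent never reaches sendToken() in the worker processes, which then spin forever. Below is a rough sketch of the kind of shared structure I think would be needed (just an illustration, the names are made up, and I have not confirmed this is the actual cause):
import time
from multiprocessing import Manager, Process

def worker(shared_tokens):
    # Child process: wait until the parent has put a token into the shared list
    while len(shared_tokens) == 0:
        time.sleep(0.1)
    print(shared_tokens.pop(0)['token'])

if __name__ == '__main__':
    manager = Manager()
    shared_tokens = manager.list()  # proxy list visible to parent and children
    shared_tokens.append({'token': 'dummy-token', 'expiry': 0})  # stands in for the /submit handler
    p = Process(target=worker, args=(shared_tokens,))
    p.start()
    p.join()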
I'm writing a Python application and would like to prompt a user for their password. If they do not enter it within 60 seconds, the program should exit. If they do enter it, their password should be hidden in the terminal.
How do I do this?
Context
I've seen posts like this one that suggest ways to read user input with a timeout. However, those solutions don't hide the input the way getpass does. I'm wondering if there's some functionality that could use getpass or something like it.
Inspired by this (which seems to only work on Unix systems), you can time out a function call with signal.alarm():
import getpass
import signal

def timeout(signum, frame):
    print('Timeout!')
    exit()

def get_user_password():
    upass = getpass.getpass()
    return upass

if __name__ == "__main__":
    signal.signal(signal.SIGALRM, timeout)
    timeout_in_s = 10
    signal.alarm(timeout_in_s)
    upass = get_user_password()
    signal.alarm(0)  # Stop alarm
It's not a perfect solution, but you could try using multiprocessing:
import multiprocessing
import queue

def take_input(q):
    # Runs in a separate process: read a line from stdin and put it on the queue
    stdin = open(0)
    print("Enter your input: (y/n) >> ", end="", flush=True)
    user_input = stdin.readline()
    q.put(user_input)

def confirm_user_input():
    value = ''
    if __name__ == "__main__":
        q = multiprocessing.Queue()
        process = multiprocessing.Process(target=take_input, args=(q,))
        process.start()
        try:
            # Wait at most 10 seconds for the input process to deliver a value
            value = q.get(timeout=10)
        except queue.Empty:
            print("no input...")
        process.terminate()
        process.join()
        if value.strip() == 'y':
            print("confirmed: do something in this case...")
        else:
            print("not confirmed: do something else in that case...")

confirm_user_input()
This doesn't use input(), but it waits for a response for n seconds; if there's no input it continues with something else, and it doesn't require a Unix system. Keep in mind it needs to run in the main module. Otherwise you should call it from the main module with something like this:
from multiprocessing import freeze_support

if __name__ == '__main__':
    freeze_support()
    confirm_user_input()
In that case, remove the if __name__ == "__main__": guard from the function.
Maybe you could implement this in your code.
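For illustration, the refactored function could look roughly like this (a sketch only; it assumes the same take_input helper and imports as above):
def confirm_user_input():
    # Same logic as before, without the __main__ guard, so it can be called
    # from a module-level if __name__ == '__main__': block instead
    value = ''
    q = multiprocessing.Queue()
    process = multiprocessing.Process(target=take_input, args=(q,))
    process.start()
    try:
        value = q.get(timeout=10)
    except queue.Empty:
        print("no input...")
    process.terminate()
    process.join()
    if value.strip() == 'y':
        print("confirmed: do something in this case...")
    else:
        print("not confirmed: do something else in that case...")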
I have a Python script (xyz.py) that I run through the command prompt. My question: is there a way to automatically resume the Python code from the point where it lost the VPN connection, without any manual intervention? That would save me from having to monitor the code frequently. Below is my code, but it restarts from the beginning whenever there is a disconnection. Please suggest.
import subprocess

filename = 'xyz.py'

while True:
    p = subprocess.Popen('python ' + filename, shell=True).wait()
    # If running 'xyz.py' exits with an error, the while loop repeats;
    # otherwise the program breaks out of the loop.
    if p != 0:
        continue
    else:
        break
If it were me, I would use time.sleep:
import os
import time
from datetime import datetime

import requests

script = 'xyz.py'

def main():
    network_check_url = 'http://8.8.8.8'
    while True:
        try:
            requests.get(network_check_url)
        except Exception as e:
            # No connectivity yet: report and retry after a second
            print(datetime.now(), e)
            time.sleep(1)
        else:
            print(f'Network is ok. {datetime.now():%Y-%m-%d_%H:%M:%S}')
            os.system(f'python {script}')
            return

if __name__ == '__main__':
    main()
Getting the specifics out of the way, I'm writing an open source P2P social network over IPFS and Flask -- I know, it's been done. I'm choosing Flask because pyinstaller can put it in an exe file.
I am attempting to update my IPNS every 10 minutes to publish all status updates I've added to the network during those 10 minutes. The cron function of the setup class (in library.py) is where that updater lives. At first, I threaded the cron function from the __init__ of setup. The server hung. Then I moved the threading over to app.before_first_request. The server still hangs.
https://pastebin.com/bXHTuH83 (main.py)
from flask import Flask, jsonify
from library import *

#=========================TO BE DELETED=========================================
def pretty(json):
    json = dumps(loads(json), indent=4, sort_keys=True)
    return json
#===============================================================================

app = Flask(__name__)
GANN = setup()

@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())

@app.route("/")
def home():
    return "Hello World!!!"

if __name__ == "__main__":
    app.run(port="80", debug=True, threaded=True)
https://pastebin.com/W5P8Tpvd (library.py)
from threading import Thread
from time import time, sleep
import urllib.request
from json import loads, dumps

def api(*argv, **kwargs):
    # Build a request URL for the local IPFS HTTP API and return the raw response
    url = "http://127.0.0.1:5001/api/v0/"
    for arg in argv:
        arg = arg.replace(" ", "/")
        if arg[:-1] != "/":
            arg += "/"
        url += arg
    url = url[0:-1]
    if kwargs:
        url += "?"
        for val in kwargs:
            url = url + val + "=" + kwargs[val] + "&"
        url = url[0:-1]
    print(url)
    try:
        with urllib.request.urlopen(url, timeout=300) as response:
            return response.read()
    except:
        return b"""{"ERROR": "CANNOT CONNECT TO IPFS!"}"""

class setup():
    def __init__(self):
        api("files", "mkdir", arg="/GANN", parents="True")
        self.root_hash = ""

    def update_root(self):
        try:
            for entry in loads(api("files", "ls", l="True").decode())["Entries"]:
                if entry["Name"] == "GANN":
                    self.root_hash = entry["Hash"]
        except:
            return """{"ERROR": "CANNOT FIND ROOT DIRECTORY"}"""

    def publish_root(self):
        api("name", "publish", arg=self.root_hash)

    def cron(self):
        while True:
            print("CRON Thread Started!")
            self.update_root()
            self.publish_root()
            sleep(600)
I have searched the web for a couple of days and have yet to find a threading technique that splits off from the main process without hanging the server and stopping it from taking other requests. I believe I'm on a single-stream connection, as IPFS blocks connections to every other device in my home when it's started. It takes a couple of minutes for the CLI IPNS update to go through, so I set urllib's timeout to 300 seconds.
I think the threading code is not correct.
@app.before_first_request
def cron_job():
    Thread(target=GANN.cron())
Here you created a Thread object. The target argument must be a callable, but you already called your method at this point, so the right way would be
Thread(target=GANN.cron)
so that the thread can call the cron function later. Having said that, the Thread must also be started, so it will actually call the target function you gave it. So it should be like
thread_cron = Thread(target=GANN.cron)
thread_cron.start()
Since you called GANN.cron() yourself, the method started executing right there (and its while True loop never returns), so your app hung.
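Putting it together, the hook could look roughly like this (a sketch based on the main.py above; daemon=True is my addition so the background thread does not block interpreter shutdown):
@app.before_first_request
def cron_job():
    # Pass the bound method itself (no parentheses) and actually start the thread
    thread_cron = Thread(target=GANN.cron, daemon=True)
    thread_cron.start()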
How can I simultaneously run the following code OR run the TTS function after returning the text?
CODE:
def main(q):
    # CODE BEFORE THIS.
    # TTS IS JUST A SIMPLE TEXT TO SPEECH FUNCTION
    time.sleep(random.uniform(0.5, 2))
    response = 'BOT: ' + response
    # TTS
    # SIMULTANEOUSLY RUN BELOW
    if(responsetts != None):
        tts(responsetts)
    else:
        tts(response)
    return response

if __name__ == '__main__':
    while True:
        query = input('U: ')
        print(main(query))
The simple solution, in case you want your tts function to run after the response is printed, would be to let main print the response before calling tts. But for more flexibility and better responsiveness of your prompt, you can use a separate thread for your tts call.
The threading module offers a Timer, which is a subclass of Thread. Timer has an interval parameter for adding a sleep before the target function gets executed. You could use this to add a delay if you want, or just use Thread if you don't need this feature. I use espeak in my example instead of tts:
import time
import random
import subprocess
from threading import Timer
from functools import partial

def _espeak(msg):
    # Speak slowly in a female english voice
    cmd = ["espeak", '-s130', '-ven+f5', msg]
    subprocess.run(cmd)

def _vocalize(response, responsetts=None, interval=0):
    # "Comparisons to singletons like None should always be done with is or
    # is not, never the equality operators." -PEP 8
    if responsetts is not None:
        response = responsetts
    Timer(interval=interval, function=_espeak, args=(response,)).start()

def _get_response(q):
    time.sleep(random.uniform(0.5, 2))
    response = '42'
    response = 'BOT: ' + response
    return response

def _handle_query(q):
    response = _get_response(q)
    print(response)
    _vocalize(response, interval=0)

def main():
    prompt = partial(input, 'U: ')
    # alternative to using partial: iter(lambda: input('U: '), 'q')
    for query in iter(prompt, 'q'):  # quits on input 'q'
        _handle_query(query)

if __name__ == '__main__':
    main()
I am building a Flask app and need some background processes to run. I decided to go with multiprocessing, but it produces two processes when running within Flask. Does anyone know why this happens? I've tested it on OS X and Ubuntu 12.04, with the same results. Here is an example:
import time
import multiprocessing
from flask import Flask

app = Flask(__name__)
backProc = None

def testFun():
    print('Starting')
    while True:
        time.sleep(3)
        print('looping')
        time.sleep(3)
        print('3 Seconds Later')

@app.route('/')
def root():
    return 'Started a background process with PID ' + str(backProc.pid) + " is running: " + str(backProc.is_alive())

@app.route('/kill')
def kill():
    backProc.terminate()
    return 'killed: ' + str(backProc.pid)

@app.route('/kill_all')
def kill_all():
    proc = multiprocessing.active_children()
    for p in proc:
        p.terminate()
    return 'killed all'

@app.route('/active')
def active():
    proc = multiprocessing.active_children()
    arr = []
    for p in proc:
        print(p.pid)
        arr.append(p.pid)
    return str(arr)

@app.route('/start')
def start():
    global backProc
    backProc = multiprocessing.Process(target=testFun, args=(), daemon=True)
    backProc.start()
    return 'started: ' + str(backProc.pid)

if __name__ == '__main__':
    app.run(port=int("7879"))
This is a problem with the Flask auto-reload feature, which is used during development to automatically restart the web server when changes in the code are detected, in order to serve the new code without requiring a manual restart.
In the guide, the app.run() call is always placed within an if __name__ == '__main__' condition, since the reloader is on by default. The reloader works by re-running your script in a second process, which is where the duplicate processes come from. To avoid this when combining Flask with multiprocessing, disable the auto-reloader when you wrap app.run() in a function, like so:
def startWebserver():
    app.run(debug=True, use_reloader=False)
Link for reference:
http://blog.davidvassallo.me/2013/10/23/nugget-post-python-flask-framework-and-multiprocessing/
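For completeness, a minimal sketch of what the __main__ block from the question could look like with the reloader turned off (same app and port as above):
if __name__ == '__main__':
    # Debug mode stays on, but the auto-reloader, which re-runs the script in a
    # second process, is turned off.
    app.run(port=7879, debug=True, use_reloader=False)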