PyMongo with Multiprocessing Pool in Python

I have a problem I've been struggling with for weeks.
import time
from multiprocessing.pool import ThreadPool
from pymongo import MongoClient

def doSomething(item):
    client = MongoClient(DB_CONNECTION_STRING, maxPoolSize=100000, connect=False)
    dbclient = client.mydb
    try:
        getAWebpage()
    except MyException as e:
        client.close()
        # I know that the recursion here could cause problems, but this is not the point
        doSomething(item)
    else:
        # do something with the web page and save it in the DB
        dbclient.collection.insert_one({...})
        ...
        client.close()

def singleTask(item):
    doSomething(item)
    time.sleep(0.3)

def startTask(items, threads=10):
    pool = ThreadPool(threads)
    results = pool.map(singleTask, items)
    pool.close()
    pool.join()
    return results
I can see in the MongoDB log that the script opens a lot of connections without closing them, and at a certain point MongoDB stops accepting connections, crashing everything.
What is wrong with this code? I can't really understand it.
I know that the recursion could cause problems with the overall code, but I'm trying to solve this problem first.
Any help will be much appreciated! Thank you
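For reference, a minimal sketch of the usual fix: MongoClient is thread-safe and maintains its own connection pool, so creating one client per process and sharing it across the worker threads avoids opening (and leaking) a fresh pool for every item. The names DB_CONNECTION_STRING, getAWebpage and MyException are taken from the question and assumed to exist; the document passed to insert_one is a placeholder.
import time
from multiprocessing.pool import ThreadPool
from pymongo import MongoClient

# one client per process; its internal pool is shared by all threads
client = MongoClient(DB_CONNECTION_STRING, connect=False)
dbclient = client.mydb

def doSomething(item):
    try:
        page = getAWebpage()
    except MyException:
        doSomething(item)  # same retry-by-recursion as in the question
    else:
        # placeholder document; store whatever you extract from the page
        dbclient.collection.insert_one({"item": item})

def singleTask(item):
    doSomething(item)
    time.sleep(0.3)

def startTask(items, threads=10):
    pool = ThreadPool(threads)
    try:
        return pool.map(singleTask, items)
    finally:
        pool.close()
        pool.join()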

Related

kubernetes python client: block and wait for child pods to disappear when deleting deployment

I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function, something like the following:
from kubernetes import client, config, watch
from kubernetes.client import AppsV1Api

# assumes standard client setup
config.load_kube_config()
api_client = client.ApiClient()
v1 = client.CoreV1Api(api_client)
w = watch.Watch()

try:
    # try to delete if exists
    AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
    pass  # handle exception

# wait for all pods associated with deployment to be deleted.
for e in w.stream(
        v1.list_namespaced_pod, namespace="default",
        label_selector='mylabel=my-value',
        timeout_seconds=300):
    pod_name = e['object'].metadata.name
    print("pod_name", pod_name)
    if e['type'] == 'DELETED':
        w.stop()
        break
However, I see two problems with this.
1. If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.
2. Upon seeing events in the event stream for the pod activity, how do I know all the pods got deleted? It seems fragile to count them.
I'm basically looking to replace the kubectl delete --wait functionality with a python script.
Thanks for any insights into this.
import json
from kubernetes.client.rest import ApiException

def delete_pod(pod_name):
    return v1.delete_namespaced_pod(name=pod_name, namespace="default")

def delete_pod_if_exists(pod_name):
    def run():
        delete_pod(pod_name)
    while True:
        try:
            run()
        except ApiException as e:
            # a 404 from the API means the pod is already gone
            has_deleted = json.loads(e.body)['code'] == 404
            if has_deleted:
                return
Maybe you can try this approach and handle exceptions based on your requirements:
def delete_deployment():
    """ Delete deployment """
    while True:
        try:
            deployment = api_client.delete_namespaced_deployment(
                name="deployment_name",
                namespace="deployment_namespace",
                body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5),
            )
        except ApiException:
            break
    print("Deployment 'deployment_name' has been deleted.")

Restarting/Rebuilding a timed out process using Pebble in Python?

I am using concurrent futures to download reports from a remote server using an API. To inform me that the report has downloaded correctly, I just have the function print out its ID.
I have an issue where, on rare occasions, a report download will hang indefinitely. I do not get a Timeout Error or a Connection Reset error; it just hangs there for hours until I kill the whole process. This is a known issue with the API with no known workaround.
I did some research and switched to a Pebble-based approach to implement a timeout on the function. My aim is then to record the ID of the report that failed to download and start again.
Unfortunately, I ran into a bit of a brick wall as I do not know how to actually retrieve the ID of the report I failed to download. I am using a similar layout to this answer:
from pebble import ProcessPool
from concurrent.futures import TimeoutError

def sometimes_stalling_download_function(report_id):
    ...
    return report_id

with ProcessPool() as pool:
    future = pool.map(sometimes_stalling_download_function, report_id_list, timeout=10)
    iterator = future.result()
    while True:
        try:
            result = next(iterator)
        except StopIteration:
            break
        except TimeoutError as error:
            print("function took longer than %d seconds" % error.args[1])
            # Retrieve report ID here
            failed_accounts.append(result)
What I want to do is retrieve the report ID in the event of a timeout but it does not seem to be reachable from that exception. Is it possible to have the function output the ID anyway in the case of a timeout exception or will I have to re-think how I am downloading the reports entirely?
The map function returns a future object which yields the results in the same order the items were submitted.
Therefore, to find out which report_id is causing the timeout, you can simply check its position in report_id_list.
index = 0
while True:
    try:
        result = next(iterator)
    except StopIteration:
        break
    except TimeoutError as error:
        print("function took longer than %d seconds" % error.args[1])
        # Retrieve report ID here
        failed_accounts.append(report_id_list[index])
    finally:
        index += 1
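Since the question also asks about starting the failed downloads again, here is a minimal sketch of one way to build on the answer above: collect the IDs whose tasks timed out, then resubmit only those IDs to a fresh map call. sometimes_stalling_download_function and the 10-second timeout come from the question; the download_all wrapper and the max_retries value are assumptions for illustration.
from pebble import ProcessPool
from concurrent.futures import TimeoutError

def download_all(report_ids, timeout=10, max_retries=3):
    """Download reports, resubmitting any IDs whose task timed out."""
    remaining = list(report_ids)
    done = []
    for _ in range(max_retries):
        failed = []
        with ProcessPool() as pool:
            future = pool.map(sometimes_stalling_download_function, remaining, timeout=timeout)
            iterator = future.result()
            index = 0
            while True:
                try:
                    done.append(next(iterator))
                except StopIteration:
                    break
                except TimeoutError:
                    failed.append(remaining[index])  # record the ID that timed out
                finally:
                    index += 1
        if not failed:
            break
        remaining = failed  # retry only the IDs that timed out
    return done, remaining  # remaining holds IDs that still failed after all retries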

How can I dynamically create a new process in Python?

This is my main function. If I receive a new offer, I need to check the payment; I have a HandleNewOffer() function for that. But the problem with this code happens if there are 2 (or more) offers at the same time: one of the buyers has to wait until the other transaction closes. So is it possible to spawn a new process with the HandleNewOffer() function and kill it when it's done, so that several transactions can run at the same time? Thank you in advance.
def handler():
    try:
        conn = k.call('GET', '/api/').json()  # connect
        response = conn.call('GET', '/api/notifications/').json()
        notifications = response['data']
        for notification in notifications:
            if notification['contact']:
                HandleNewOffer(notification)  # need to dynamically start new process if notification
    except Exception as err:
        error = ('Error')
        Send(error)
I'd recommend using the pool-of-workers pattern here to limit the number of concurrent calls to HandleNewOffer.
The concurrent.futures module offers ready-made implementations of the above-mentioned pattern.
from concurrent.futures import ProcessPoolExecutor

def handler():
    with ProcessPoolExecutor() as pool:
        try:
            conn = k.call('GET', '/api/').json()  # connect
            response = conn.call('GET', '/api/notifications/').json()
            # collect notifications to process into a list
            notifications = [n for n in response['data'] if n['contact']]
            # send the list of notifications to the concurrent workers
            results = pool.map(HandleNewOffer, notifications)
            # iterate over the list of results from every HandleNewOffer call
            for result in results:
                print(result)
        except Exception as err:
            error = ('Error')
            Send(error)
This logic will handle as many offers in parallel as your computer has CPU cores.
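If you want to cap concurrency independently of the core count (for example, to avoid hammering the remote API), ProcessPoolExecutor accepts a max_workers argument; a short sketch, with the value 4 chosen arbitrarily:
from concurrent.futures import ProcessPoolExecutor

# limit the pool to 4 worker processes regardless of CPU count
with ProcessPoolExecutor(max_workers=4) as pool:
    results = pool.map(HandleNewOffer, notifications)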

flask-sqlalchemy lost connection to MySQL db

Working with a MySQL database and flask-sqlalchemy, I am encountering a lost connection error ('Lost connection to MySQL server during query'). I have already adapted app.config['SQLALCHEMY_POOL_RECYCLE'] to be smaller than the engine timeout. I also added pool_pre_ping to ensure the database is not going away between two requests. Now I have no idea left how this can still be an issue, since it is my understanding that flask-sqlalchemy should take care of opening and closing sessions correctly.
As a workaround, I thought about a way to tell flask-sqlalchemy to catch lost connection responses and restart the connection on the fly. But I have no idea how to do this. So, my questions are:
1. Do you know what could possibly cause my connection loss?
2. Do you think my approach of catching lost connections and reconnecting is a good idea, or do you have a better suggestion?
3. If it is a good idea, how can I do this most conveniently? I don't want to wrap every request in try/except statements, since I have a lot of code.
I do not know the answer to your 1st and 2nd questions, but for the 3rd one, I used a decorator to wrap all my functions instead of using try/except directly inside them. The explicit pre-ping and session rollback/close also happened to solve the lost-connection problem for me (MariaDB was the backend I was using).
def manage_session(f):
    def inner(*args, **kwargs):
        # MANUAL PRE PING
        try:
            db.session.execute("SELECT 1;")
            db.session.commit()
        except:
            db.session.rollback()
        finally:
            db.session.close()
        # SESSION COMMIT, ROLLBACK, CLOSE
        try:
            res = f(*args, **kwargs)
            db.session.commit()
            return res
        except Exception as e:
            db.session.rollback()
            raise e
            # OR return traceback.format_exc()
        finally:
            db.session.close()
    return inner
and then wrapping my functions with the decorator:
@manage_session
def my_function(*args, **kwargs):
    return "result"

Python 2.7: Thread hanging, no idea how to debug.

I made a script to download wallpapers as a learning exercise to better familiarize myself with Python/threading. Everything works well unless there is an exception trying to request a URL. This is the function where I hit the exception (it is not a method of the same class, if that matters).
import logging
import urllib2

def open_url(url):
    """Opens URL and returns html"""
    try:
        response = urllib2.urlopen(url)
        link = response.geturl()
        html = response.read()
        response.close()
        return html
    except urllib2.URLError, e:
        if hasattr(e, 'reason'):
            logging.debug('failed to reach a server.')
            logging.debug('Reason: %s', e.reason)
            logging.debug(url)
            return None
        elif hasattr(e, 'code'):
            logging.debug('The server couldn\'t fulfill the request.')
            logging.debug('Code: %s', e.code)
            logging.debug(url)
            return None
        else:
            logging.debug('Shit fucked up2')
            return None
At the end of my script:
main_thread = threading.currentThread()
for thread in threading.enumerate():
    if thread is main_thread:
        continue
    while thread.isAlive():
        thread.join(2)
    break
From my current understanding (which may be wrong), if the thread has not completed its task within 2 seconds of reaching this, it should time out. Instead it gets stuck in the last while loop. If I take that out, it just hangs once the script is done executing.
Also, I decided it was time to man up and leave Notepad++ for a real IDE with debugging tools, so I downloaded Wing. I'm a big fan of Wing, but the script doesn't hang there... What do you all use to write Python?
There is no thread interruption in Python and no way to cancel a thread; it can only finish execution by itself. The join method only waits 2 seconds or until termination; it does not kill anything. You need to implement a timeout mechanism in the thread itself.
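In this script the blocking call is urllib2.urlopen, which accepts a timeout argument, so one way to implement the timeout inside the thread is to bound the request itself. A minimal sketch based on the question's open_url, with the 10-second value chosen arbitrarily:
import socket
import urllib2

def open_url(url, timeout=10):
    """Opens URL and returns html, giving up after `timeout` seconds."""
    try:
        response = urllib2.urlopen(url, timeout=timeout)
        html = response.read()
        response.close()
        return html
    except (urllib2.URLError, socket.timeout):
        # a stalled or failed request now raises here instead of hanging the thread
        return None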
I hit the books and figured out enough to correct the issue I was having. I was able to completely remove the code near the end of my script. I fixed the issue by spawning the thread pool differently.
for i in range(queue.qsize()):
    td = ThreadDownload(queue)
    td.start()

queue.join()
I also was not wrapping queue.get() in a try block during the thread's execution.
try:
    img_url = self.queue.get()
    ...
except Queue.Empty:
    ...
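For context, a minimal sketch of what the complete worker described above might look like, assuming the Python 2 names (Queue module, old-style thread subclassing) used elsewhere in the question; download_image is a hypothetical helper. The get plus task_done pairing is what lets the queue.join() call above return:
import Queue
import threading

class ThreadDownload(threading.Thread):
    """Worker that pulls image URLs off the queue until it is empty."""
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            try:
                img_url = self.queue.get(block=False)
            except Queue.Empty:
                break  # nothing left to download, let the thread finish
            try:
                download_image(img_url)  # hypothetical download helper
            finally:
                self.queue.task_done()  # lets queue.join() return once every item is processed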
