Using a monad for counting in a loop in Python - python

I am learning functional programming at the moment, and of course I want to apply what I have learned whenever I can.
I am in the middle of a project where I have to send some http requests to a server, and I want to count how many of these requests returned a status_code 200.
Right now I have some stupid code set up as follows:
global counter
while True:
    now_url = "http://127.0.0.1"
    status, value = getStatus(now_url)
    counter += value
where counter is a global counter, and if getStatus gets a status_code of 200, value will be 1, otherwise 0.
So I was thinking maybe instead of using a global counter I could just pass around the state of the previous loop, and I would get rid of the stupid global counter value.
So I tried to implement the getStatus in a monadic way with the bind and return as such
def bind(f, arg):
    res = f(arg[0])
    return res[0], arg[1] + res[1]

def ret(f):
    return (f, 0)
But this is not trivial, since I am not using function composition in the getStatus function, which is defined as:
def getStatus(now_url):
    try:
        response = requests.get(now_url)
        if response.status_code == 200:
            return response.status_code, 1
        else:
            return response.status_code, 0
    except Exception as e:
        return e, 0
So the question is how to restructure my code in such a way that I can use the power of monads to count the number of status_code == 200.
Hope you can help :)
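One way to make the bind/ret pair line up with getStatus (a sketch, not the only possible structuring): treat each result as a (value, count) pair, let ret wrap a plain value with a zero count, and let bind run the next step and add its count to the running total. The fake_get_status helper below is a made-up stand-in so the example runs without a live server:

```python
def ret(value):
    # Wrap a plain value with a zero count.
    return (value, 0)

def bind(f, arg):
    # arg is a (value, count) pair; f maps a plain value to a new pair.
    value, count = arg
    new_value, new_count = f(value)
    return new_value, count + new_count

def fake_get_status(url):
    # Hypothetical stand-in for getStatus so the sketch runs offline:
    # pretend URLs ending in "0" return 200, everything else 500.
    return (200, 1) if url.endswith("0") else (500, 0)

urls = ["http://127.0.0.1:8000", "http://127.0.0.1:8001", "http://127.0.0.1:8010"]
state = ret(None)
for url in urls:
    # Each step ignores the previous value and contributes its own 0/1 count.
    state = bind(lambda _prev, u=url: fake_get_status(u), state)

final_status, total_200s = state
print(total_200s)  # 2
```

With real requests you would swap fake_get_status for getStatus; the counting stays inside bind, so no global counter is needed.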

Related

How can I increase code readability in python for this?

I am running my script in Flask, and to catch and print errors on the server I have made functions that return None on success and an error message otherwise.
The problem is that I have many functions which run one after another and use global variables set by earlier functions, which makes the code disorganized. What can I do?
App.py
from flask import Flask
from main import *
app = Flask(__name__)

@app.route('/')
def main():
    input = request.args.get('input')
    first_response = function1(input)
    if first_response is None:
        second_response = function2()  # no input from here on
        if second_response is None:
            third_response = function3()  # functions are imported from main.py
            if third_response is None:
                ...
                if ...
                else ...
            else:
                return third_response
        else:
            return second_response
    else:
        return first_response
main.py
def function1(input):
    global new_variable1
    if input is valid:
        new_variable1 = round(input, 2)
    else:
        return "the value is not integer"

def function2():
    global new_variable2
    if new_variable1 > 8:
        new_variable2 = new_variable1 / 8
    else:
        return "the division is not working, value is 0"

def function3():
    ...
This is just a demo of what's going on. The last function will return a value either way. So if everything goes right I will see the correct output, and otherwise I will see the error from whichever function failed.
The code works fine, but I need a better alternative for doing this.
Thanks!
Ah... you have (correctly) determined that you have two things to do:
Process your data, and
Deal with errors.
So let's process the data, replacing global with parameters (and come back to the error handling in a bit). You want to do something like this.
main.py
def function1(some_number):
    if some_number is valid:
        return round(some_number, 2)

def function2(a_rounded_number):
    if a_rounded_number > 8:
        return a_rounded_number / 8
So each function should return the results of its work. Then the calling routine can just send the results of each function to the next function, like this:
app.py
# [code snipped]
result1 = function1(the_input_value)
result2 = function2(result1)
result3 = function3(result2)
But...how do we deal with unexpected or error conditions? We use exceptions, like this:
main.py
def function1(some_number):
    if some_number is valid:
        return round(some_number, 2)
    else:
        raise ValueError("some_number was not valid")
and then in the calling routine
app.py
try:
    result1 = function1(some_input_value)
except ValueError as some_exception:
    return str(some_exception)
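Putting the two pieces together, here is a runnable sketch of the whole pattern. The isinstance check is a hypothetical stand-in for the question's "is valid" pseudocode:

```python
def function1(some_number):
    # Hypothetical validity check standing in for "is valid".
    if isinstance(some_number, (int, float)):
        return round(some_number, 2)
    raise ValueError("some_number was not valid")

def function2(a_rounded_number):
    if a_rounded_number > 8:
        return a_rounded_number / 8
    raise ValueError("the division is not working, value is 0")

def process(the_input_value):
    # The happy path reads top to bottom; any failure is reported once.
    try:
        result1 = function1(the_input_value)
        result2 = function2(result1)
        return str(result2)
    except ValueError as exc:
        return str(exc)

print(process(16))      # 2.0
print(process(4))       # the division is not working, value is 0
print(process("oops"))  # some_number was not valid
```

Each function now owns its own validation, and the caller handles all errors in one place instead of checking a return value after every step.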

How to solve python Celery error when using chain: EncodeError(RuntimeError('maximum recursion depth exceeded while getting the str of an object'))

How do you run a chain task in a for loop when the signatures are generated dynamically? The following approach was used because defining the tester task as:
@task
def tester(items):
    ch = []
    for i in items:
        ch.append(test.si(i))
    return chain(ch)()
would raise EncodeError(RuntimeError('maximum recursion depth exceeded while getting the str of an object',),) if the chains are too large; the threshold is OS- or system-specific.
E.g. calling the task as follows:
item = range(1, 40000)  # 40,000 raises the exception, but 3,000 doesn't after setting sys.setrecursionlimit(15000)
tester.delay(item)
raises the EncodeError. In the past I used to get this error when the length of item was 5000, i.e. range(1, 5000), which I fixed by importing sys and calling sys.setrecursionlimit(15000) at the top of the module. But there is a limit to this approach, so I decided to refactor a little and use the approach below: splitting the list and processing it chunk by chunk. The problem is it doesn't seem to continue past 2000, i.e. test prints 2000 to the screen and stops.
@task
def test(i):
    print i

@task
def tester(items):
    ch = []
    for i in items:
        ch.append(test.si(i))
    counter = 1
    if len(ch) > 2000:
        ch_length = len(ch)  # 4k
        while ch_length >= 2000:
            do = ch[0:2000]  # 2k
            print "Doing...NO#...{}".format(counter)
            ret = chain(do)()  # doing 2k
            print "Ending...NO#...{}".format(counter)
            ch = ch[2000:]  # take all that is left, i.e. 2k
            ch_length = len(ch)  # 2k
            if ch_length <= 2000 and ch_length > 0:
                print "DOING LAST {}".format(counter)
                ret = chain(ch)()
                print "ENDING LAST {}".format(counter)
                break
            else:
                break
            counter += 1
    else:
        ret = chain(ch)()
    return ret
According to the Celery documentation, a chain executes the tasks within it one after the other. I expect the while loop to continue only after the first iteration's chain has completed before proceeding.
I hope someone has experience with this and could help.
Merry Xmas in advance!
It seems you have hit this issue: https://github.com/celery/celery/issues/1078
Also, calling chain(ch)() seems to execute it asynchronously. Try explicitly calling apply() on it.
@app.task
def tester(items):
    ch = []
    for i in items:
        ch.append(test.si(i))
    PSIZE = 1000
    for cl in range(0, len(ch), PSIZE):
        print("cl: %s" % cl)
        chain(ch[cl:cl + PSIZE]).apply()
        print("cl: %s END" % cl)
    return None
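The chunking pattern itself can be checked without Celery; a minimal sketch of the same slicing logic:

```python
def chunked(items, size):
    # Yield successive slices of at most `size` items,
    # the same pattern as ch[cl:cl + PSIZE] in the answer above.
    for start in range(0, len(items), size):
        yield items[start:start + size]

batches = list(chunked(list(range(5)), 2))
print(batches)  # [[0, 1], [2, 3], [4]]
```

Each batch then becomes one chain(...).apply() call, so no single chain ever grows large enough to blow the recursion limit during serialization.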

How do I write this as a context manager?

The race-condition-free way of updating a variable in redis is:
r = redis.Redis()
with r.pipeline() as p:
    while 1:
        try:
            p.watch(KEY)
            val = p.get(KEY)
            newval = int(val) + 42
            p.multi()
            p.set(KEY, newval)
            p.execute()  # raises WatchError if anyone else changed KEY
            break
        except redis.WatchError:
            continue  # retry
this is significantly more complex than the straightforward version (which contains a race condition):
r = redis.Redis()
val = r.get(KEY)
newval = int(val) + 42
r.set(KEY, newval)
so I thought a context manager would make this easier to work with; however, I'm having problems...
My initial idea was
with update(KEY) as val:
    newval = val + 42
    # ...and somehow return newval to the context manager?
there wasn't an obvious way to do that last line, so I tried:
@contextmanager
def update(key, cn=None):
    """Usage::

        with update(KEY) as (p, val):
            newval = int(val) + 42
            p.set(KEY, newval)
    """
    r = cn or redis.Redis()
    with r.pipeline() as p:
        while 1:
            try:
                p.watch(key)   # --> immediate mode
                val = p.get(key)
                p.multi()      # --> back to buffered mode
                yield (p, val)
                p.execute()    # raises WatchError if anyone has changed `key`
                break          # success, break out of while loop
            except redis.WatchError:
                pass           # someone else got there before us, retry.
which works great as long as no WatchError is raised; when one is, I get
File "c:\python27\Lib\contextlib.py", line 28, in __exit__
    raise RuntimeError("generator didn't stop")
RuntimeError: generator didn't stop
what am I doing wrong?
I think the problem is that you yield multiple times (when the transaction is retried), but a context manager is only entered once (the yield is essentially syntactic sugar for the __enter__ method). So as soon as the yield can execute more than once, you have a problem.
I'm not perfectly sure how to solve this in a good way, and I can't test it either, so I'm only giving some suggestions.
First of all, I would avoid yielding the rather internal p; you should yield some object that is specifically made for the update process. For example something like this:
with update(KEY) as updater:
    updater.value = int(updater.original) + 42
Of course this still doesn’t solve the multiple yields, and you cannot yield that object earlier as you won’t have the original value at that point either. So instead, we could specify a delegate responsible for the value updating instead.
with update(KEY) as updater:
    updater.process = lambda value: value + 42
This would store a function inside the yielded object which you can then use inside the context manager to keep trying to update the value until it succeeded. And you can yield that updater from the context manager early, before entering the while loop.
Of course, if you have made it this far, there isn’t actually any need for a context manager left. Instead, you can just make a function:
update(key, lambda value: value + 42)
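A sketch of that final function-based suggestion. To keep it runnable without a server, it uses a minimal in-memory stand-in for the redis client; against real redis-py you would pass redis.Redis() and catch redis.WatchError instead of the local WatchError class defined here:

```python
class WatchError(Exception):
    pass

class FakePipeline:
    # Minimal stand-in for a redis-py pipeline, just enough to run the sketch.
    def __init__(self, store):
        self.store = store
        self.watched = None
        self.queued = []
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False
    def watch(self, key):
        # Remember the value we saw, to detect concurrent changes later.
        self.watched = (key, self.store.get(key))
    def get(self, key):
        return self.store[key]
    def multi(self):
        self.queued = []
    def set(self, key, value):
        self.queued.append((key, value))
    def execute(self):
        key, seen = self.watched
        if self.store.get(key) != seen:
            raise WatchError()  # someone changed the key under us
        for k, v in self.queued:
            self.store[k] = v

class FakeRedis:
    def __init__(self):
        self.store = {}
    def pipeline(self):
        return FakePipeline(self.store)

def update(key, fn, client):
    # Keep retrying the watched transaction until it commits cleanly.
    while True:
        try:
            with client.pipeline() as p:
                p.watch(key)
                val = int(p.get(key))
                p.multi()
                p.set(key, fn(val))
                p.execute()  # raises WatchError on a concurrent change
                return
        except WatchError:
            continue

r = FakeRedis()
r.store["counter"] = 0
update("counter", lambda v: v + 42, r)
print(r.store["counter"])  # 42
```

All the retry machinery lives inside update(), the caller only supplies the pure transformation, and there is no generator that yields more than once, which is exactly what the context-manager version could not guarantee.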

Python - how to handle outcome variables that are conditionally set

Consider the following:
def funcA():
    some process = dynamicVar
    if dynamicVar == 1:
        return dynamicVar
    else:
        print "no dynamicVar"

def main():
    outcome = funcA()
If the 'some process' part results in a 1, the var dynamicVar is passed back as outcome to the main func. If dynamicVar is anything but 1, the routine fails because nothing is returned.
I could wrap the outcome as a list:
def funcA():
    outcomeList = []
    some process = dynamicVar
    if dynamicVar == 1:
        outcomeList.append(dynamicVar)
        return outcomeList
    else:
        print "no dynamicVar"
        return outcomeList

def main():
    outcome = funcA()
    if outcome != []:
        do something using dynamicVar
    else:
        do something else!
or maybe as a dictionary item. Both solutions I can think of involve extra handling in the calling function.
Is this the 'correct' way to handle this eventuality, or is there a better way?
What is the proper way of dealing with this? I was particularly thinking about using try: / except:, with the roles reversed, something along the lines of:
def funcA():
    some process = dynamicVar
    if dynamicVar == 1:
        return
    else:
        outcome = "no dynamicVar"
        return outcome

def main():
    try:
        funcA()
    except:
        outcome = funcA.dynamicVar
In Python, all functions that do not explicitly return a value implicitly return None. So you can just check whether outcome is None in main().
I believe that when you write a function, its return value should be clear and expected. You should return what you say you will return. That being said, you can use None as a meaningful return value to indicate that the operation failed or produced no results:
def doSomething():
    """
    doSomething will return a string value.
    If there is no value available, None will be returned.
    """
    if check_something():
        return "a string"
    # This is being explicit. If you did not do this,
    # None would still be returned. But it is nice
    # to be verbose so it reads properly with intent.
    return None
Or you can make sure to always return a default of the same type:
def doSomething():
    """
    doSomething will return a string value.
    If there is no value available, an empty string
    will be returned.
    """
    if check_something():
        return "a string"
    return ""
This handles the case of a bunch of complex conditional tests that eventually just fall through:
def doSomething():
    if foo:
        if bar:
            if biz:
                return "value"
    return ""
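The None-sentinel pattern above in runnable form, with a hypothetical check_something predicate standing in for the question's "some process":

```python
def check_something(value):
    # Hypothetical predicate standing in for "some process".
    return value == 1

def do_something(value):
    """Return a string result, or None when no result is available."""
    if check_something(value):
        return "a string"
    return None

result = do_something(1)
if result is not None:
    print("got:", result)  # got: a string
else:
    print("no result")

print(do_something(2))  # None
```

The caller does one explicit `is not None` check instead of unpacking a wrapper list or catching an exception, which keeps both sides of the call simple.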

Global variables in Ironpython

I'm having terrible trouble trying to understand IronPython scoping rules.
With the following script:
global data

# function for calling xml-rpc
def CallListDatabases(self):
    global synC, synCtx, result, data
    self.synCtx = synC.Current
    service = XmlRpcService("http://localhost:8000/rpc")
    req = XmlRpcRequest(service, 'vocab_list')
    req.XmlRpcCallCompleteHandler += self.req_XmlRpcCallCompleteHandler
    result = req.Execute(self)

# if the xml-rpc call completes, use the result
def req_XmlRpcCallCompleteHandler(self, response, userState):
    global synCtx, synC, data
    word = []
    f = response.TryCast(clr.GetClrType(Fault))
    if f != None:
        self.synCtx.Post(self.SetCallResult, f)
        if f.FaultCode == -1:
            pass
    else:
        self.synCtx.Post(self.SetCallResult, response)

# show result when the rpc completes
def SetCallResult(self, userState):
    global data, result
    if userState.GetType() == clr.GetClrType(Fault):
        f = userState
        if f != None:
            print str(f.FaultString)
            return
    response = userState
    result = response.TryCast(clr.GetClrType(Array[str]))
    data = result  # I want to use this value
    print "value: " + data  # show value
Problem
print "value: " + data
value: []    <<<======== no value
First of all, you don't seem to ever call any of the functions you have defined. If you are calling them, it appears that the return value of response.TryCast(clr.GetClrType(Array[str])) is an empty list. Have you tried printing the value of result within SetCallResult()? I'd bet that it's [].
