Error while running 'classloop' - python

I was messing around with classes and thought I could try to make a class just loop.
Here is what I did:
class A():
    def __init__(self):
        print 1
        self.loop()

    def loop(self):
        print 2
        self.__init__()

A()
It prints 1 & 2 back and forth for a while, then I get an error that starts looping and looks like this:
Traceback (most recent call last):
File "C:/Python27/classloop.py", line 12, in <module>
A()
Then it starts looping these two errors back and forth:
File "C:/Python27/classloop.py", line 4, in __init__
self.loop()
File "C:/Python27/classloop.py", line 9, in loop
self.__init__()
Just wondering why this happens all of a sudden. Why doesn't it just keep looping?

Every function call creates a stack frame, and every return pops a frame off the stack. This means that if you recurse too deep without returning, Python will run out of room on the stack and throw an exception. You can increase the limit, but most of the time, this will only make your program run a bit longer before crashing, or worse, the Python interpreter will corrupt its memory and go crazy.
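A minimal sketch of that behaviour (Python 3 names; in Python 2 the same failure surfaces as a RuntimeError rather than RecursionError):

```python
import sys

def loop(depth=0):
    # each call pushes a new frame; nothing ever returns
    return loop(depth + 1)

try:
    loop()
except RecursionError:
    # raised once the interpreter's frame limit is reached
    print("overflowed at the recursion limit:", sys.getrecursionlimit())
```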

In Python there is a maximum recursion limit. The default is 1000. You can see it by typing:

import sys
print sys.getrecursionlimit()

in the terminal. If you want to increase it, use:

sys.setrecursionlimit(10000)  # 10000 is just an example

Related

Using schedule module to remind me to drink water every ten seconds

I am using the schedule module to remind me to drink water every ten seconds:

import schedule

def remindDrink():
    print("Drink Water")

while True:
    schedule.every().day.at("16:35").do(remindDrink())

So the problem here is that the task gets executed, but immediately, not at the given time, and VSCode throws a weird error at me:
Traceback (most recent call last):
File "e:\Code\Python Code\randomModule.py", line 12, in <module>
schedule.every().day.at("16:31").do(sendNotification())
File "C:\Users\PC\AppData\Local\Programs\Python\Python310\lib\site-packages\schedule\__init__.py", line 625, in do
self.job_func = functools.partial(job_func, *args, **kwargs)
TypeError: the first argument must be callable
PS E:\Code\Python Code>
This is the error, what am I doing wrong?
Same module, different approach. I personally prefer this approach because it keeps my work clean, easy to read and understand at first glance, and of course easy to refactor.
from schedule import every, repeat, run_pending
import time

@repeat(every().day.at("16:35"))
def remindDrink():
    print("Drink Water")

while True:
    run_pending()
    time.sleep(1)
Your broken code is fixed below; now the choice is yours, you can either use the code above or this:
import schedule
import time

def remindDrink():
    print("Drink Water")

schedule.every().day.at("16:35").do(remindDrink)

while True:
    schedule.run_pending()
    time.sleep(1)
Remove the () from remindDrink() in the last line, inside the do() call. Your code should look like this:
schedule.every().day.at("16:35").do(remindDrink)
Refer back to this question: TypeError: the first argument must be callable in scheduler library
Quick thought: within do() you don't call the function, you just pass the function itself:

schedule.every().day.at("16:35").do(remindDrink)
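To see why the parentheses matter, here is a minimal stand-in for schedule's do() (a sketch, not the library's actual code): it stores whatever you pass it, so writing remindDrink() hands it the function's return value instead of the function itself:

```python
import functools

def do(job_func, *args, **kwargs):
    # mimics schedule's check: the job must be a callable
    if not callable(job_func):
        raise TypeError("the first argument must be callable")
    return functools.partial(job_func, *args, **kwargs)

def remindDrink():
    return "Drink Water"

job = do(remindDrink)      # pass the function object itself
print(job())               # -> Drink Water

try:
    do(remindDrink())      # calling it passes the return value, a str
except TypeError as e:
    print(e)               # -> the first argument must be callable
```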

Python shorthand exception handling

There's a certain problem I've been having with exception handling in Python. There have been many situations where there is an area of code where I want all exceptions to be ignored. Say I have 100 lines of code where I want this to happen.
This is what most would think would be the solution:
try:
    line 1
    line 2
    line 3
    ...
    line 99
    line 100
except:
    pass
This actually does not work in my situation (and many other situations). Assume line 3 has an exception. Once the exception is thrown, it goes straight to "pass", and skips lines 4-100. The only solution I've been able to come up with is this:
try:
    line 1
except:
    pass
try:
    line 2
except:
    pass
try:
    line 3
except:
    pass
...
try:
    line 99
except:
    pass
try:
    line 100
except:
    pass
But, as is obvious, this is extremely ugly, sloppy, and takes absolutely forever. How can I do the above code in a shorter, cleaner way? Bonus points if you give a method that allows "pass" to be replaced with other code.
As other answers have already stated, you should consider refactoring your code.
That said, I couldn't resist not hacking something together to be able to execute your function without failing and breaking out in case an exception occurs.
import ast, _ast

def fail():
    print "Hello, World!"
    raise Exception
    x = [4, 5]
    print x

if __name__ == '__main__':
    with open(__file__, 'r') as source:
        tree = ast.parse(source.read(), __file__)

    for node in ast.iter_child_nodes(tree):
        if isinstance(node, _ast.FunctionDef):
            _locals = {}
            for line in node.body:
                mod = ast.Module()
                mod.body = [line]
                try:
                    exec(compile(mod, filename='<ast>', mode='exec'), _locals, globals())
                except:
                    import traceback
                    traceback.print_exc()
The code executes any function it finds in global scope, and prevents it from exiting in the event it fails. It does so by iterating over the AST of the file, and creating a new module to execute for each line of the function.
As you would expect, the output of the program is:
Hello, World!
Traceback (most recent call last):
File "kek.py", line 18, in <module>
exec(compile(mod, filename='<ast>', mode='exec'), _locals, globals())
File "<ast>", line 3, in <module>
Exception
[4, 5]
I should emphasize that using this in any production code is a bad idea. But for the sake of argument, something like this would work. You could even wrap it in a nice decorator, though that wouldn't do anything to the fact that it's a bad idea.
Happy refactoring!
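For what it's worth, Python 3.4+ ships contextlib.suppress, which shortens each per-line try/except pass block to a single with statement (a sketch of the pattern, not a recommendation to swallow every exception):

```python
from contextlib import suppress

results = []

with suppress(ZeroDivisionError):
    results.append(1 / 1)
with suppress(ZeroDivisionError):
    results.append(1 / 0)   # raises, is suppressed, execution continues
with suppress(ZeroDivisionError):
    results.append(1 / 2)

print(results)   # [1.0, 0.5] -- the failing line was skipped
```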
You could try breaking the code into smaller chunks so that it can properly handle errors instead of abandoning all progress and looping back through.
Another approach, which can be used in addition to that, is to set flags that your code checks to decide whether it can proceed or needs to repeat the last step, preventing extra iterations.
Example:

done = False
a = None
while not done:
    try:
        a = input('Enter data')
    except Exception:
        pass
    if a is not None:
        done = True
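For the bonus points in the question (replacing pass with other code), one hedged sketch is a small helper that runs each step and hands any exception to a pluggable handler (run_all and on_error are made-up names, not a library API):

```python
def run_all(steps, on_error=lambda exc: None):
    """Run each step in order; pass any exception to on_error and keep going."""
    for step in steps:
        try:
            step()
        except Exception as exc:
            on_error(exc)

out = []
run_all(
    [lambda: out.append(1),
     lambda: out.append(1 / 0),   # raises, is reported, loop continues
     lambda: out.append(3)],
    on_error=lambda exc: out.append(repr(exc)),
)
print(out)   # 1, the recorded error, then 3
```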

Why is using a while loop better?

I am making a program that uses a loop to run forever, and I have a snippet of code below to show you how I achieve the loop. This is just an example and not the actual program; it is the same idea though.
import time

def start():
    print "hello"
    time.sleep(0.2)
    start()

start()
All of my programmer friends tell me not to do this, and use a while loop instead. Like this:
import time

def start():
    while True:
        print "Hello"
        time.sleep(0.2)

start()
Why should I use the while loop instead when both methods work perfectly fine?
Each time you recurse, you push a frame context onto your program stack. Soon you would have used up your entire allotted stack space, causing a stack overflow (no pun intended ;)).
The second approach has no such flaw, so by the looks of it the second approach is better than the first (unless more of the program is presented).
If you run the program continuously with the recursion, you will get RuntimeError:
Traceback (most recent call last):
File "t.py", line 8, in <module>
start()
File "t.py", line 7, in start
start()
File "t.py", line 7, in start
start()
...
File "t.py", line 7, in start
start()
RuntimeError: maximum recursion depth exceeded
>>> import sys
>>> sys.getrecursionlimit()
1000
The while loop and recursion both have their advantages and disadvantages, but a while loop is just a test and a conditional jump, whereas recursion involves pushing a stack frame, jumping, returning, and popping back from the stack. So they might have preferred the while loop :)
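That frame overhead is easy to measure with timeit; a quick sketch comparing a recursive countdown with an iterative one (the exact numbers will vary by machine, but the recursive version pays for a frame per step):

```python
import timeit

def countdown_recursive(n):
    if n:
        countdown_recursive(n - 1)

def countdown_iterative(n):
    while n:
        n -= 1

# each call performs 500 "iterations" of the same trivial work
print("recursive:", timeit.timeit(lambda: countdown_recursive(500), number=2000))
print("iterative:", timeit.timeit(lambda: countdown_iterative(500), number=2000))
```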
Python does not have Tail Call Optimization (TCO) and cannot elide the stack. As such, recursion should not be used for unbounded depth (in languages without TCO), lest you get a stack-overflow error!
While the current code might appear to be fine, consider this simple change which will quickly reveal a problem.
def start():
    print "hello"
    # let's go faster! -- time.sleep(0.2)
    start()

start()

Testing joining list with a 60MB string with `timeit` causes MemoryError

My test creates a list containing a 60MB string and a 5-byte string. This list is then joined with join():
import timeit
setup_str = 'str_5byte = "\xfa\xea\x02\x02\x02"; L = [str_5byte]; str_60mb = str_5byte * 12000000'
t = timeit.Timer('L.append(str_60mb); str_long = "".join(L)', setup=setup_str)
t.timeit(100)
Returns this exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python25\lib\timeit.py", line 161, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
MemoryError
I assume that after every execution the variables are deleted and garbage-collected, so why am I running out of memory? Running the test with 8 executions is OK, but any more than that and I get this error.
Yes: with t.timeit the setup statement is executed only once, and then the test statement runs multiple times. This means the list L persists and grows on every iteration, so your system eventually runs out of memory.
Try min(t.repeat(repeat=100, number=1)) to execute the setup before each evaluation of the test statement.
Here's the docs if you need more info.
From a quick experiment: the setup statement runs only once per timing run, so the list built there persists and gains an extra 60MB on every pass through the test statement, and none of it gets collected. When I moved that setup code directly into the test code, I was able to run.

Python: subclassing run method of threading gives an error

I encountered weird behavior when subclassing Thread from the threading module of Python 2.7.3.
Consider the following code, called test.py:
import threading

def target_function():
    print 'Everything OK'

class SpecificThread(threading.Thread):
    def run(self):
        try:
            if self.__target:
                self.__target(*self.__args, **self.__kwargs)
        finally:
            # Avoid a refcycle if the thread is running a function with
            # an argument that has a member that points to the thread.
            del self.__target, self.__args, self.__kwargs

def check():
    thread = SpecificThread(target=target_function)
    #thread = threading.Thread(target=target_function)
    thread.run()
    print thread.name, 'is running:', thread.is_alive()
This code raises the following error when check() is run:
>>> check()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "test.py", line 18, in check
thread.run()
File "test.py", line 13, in run
del self.__target, self.__args, self.__kwargs
AttributeError: _SpecificThread__target
However, the run() method of SpecificThread is exactly the same as the code in the original threading.py module.
If threading.Thread is used, or when SpecificThread does not override the run() method, the script runs flawlessly. I do not understand why overriding does not work, considering that the Python documentation states that it is allowed.
Thanks!
The thing you've encountered is called name mangling in Python.
It means that all attributes whose names start with two leading underscores (and at most one trailing underscore, so not system names like __attrname__) are automatically renamed by the interpreter to _ClassName__attrname. It's a kind of protection mechanism, and such a design usually means you shouldn't touch those fields directly (they are already handled properly); they are usually referred to as "private fields".
So, if you want for some reason to get to those fields, use notation above:
self._Thread__target
Note that the field starts with _Thread, not _SpecificThread, because this attribute was defined in the Thread class.
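A minimal sketch of the mangling, using made-up class names:

```python
class Base(object):
    def __init__(self):
        self.__target = "base"      # stored on the instance as _Base__target

class Child(Base):
    def peek(self):
        # inside Child, self.__target would mangle to _Child__target and
        # fail; the *defining* class's name must be used instead
        return self._Base__target

c = Child()
print(sorted(c.__dict__))   # ['_Base__target']
print(c.peek())             # base
```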
