Infinite Monkey Theorem: Maximum Recursion Depth exceeded - python

I was trying to solve the Infinite Monkey Theorem which is part of a programming assignment that I came across online.
The problem statement is:
The theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type a given text, such as the complete works of William Shakespeare. Well, suppose we replace a monkey with a Python function. How long do you think it would take for a Python function to generate just one sentence of Shakespeare? The sentence we’ll shoot for is: “methinks it is like a weasel”
I am trying to see a) whether it is possible to generate the string, and b) after how many iterations the string was generated.
I set the recursion limit to 10000 based on a previous SO question, but I am still getting the runtime error for maximum recursion depth exceeded.
I am still finding my way around Python. I would appreciate suggestions on how I could do this in a better way without running into the recursion depth issue.
Here is my code so far:
import random
import sys

alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
            'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', ' ']
quote = "methinks it is like a weasel"
msg = 'cont'
count = 0
sys.setrecursionlimit(10000)

def generate(msg):
    sentence = ''
    while len(sentence) != 27:
        # random.choice() returns a random element from list 'alphabet'
        sentence = sentence + random.choice(alphabet)
    if msg == 'cont':
        verify(sentence)

def verify(msg2):
    global count
    if msg2.find(quote) == -1:
        count += 1
        generate('cont')
    else:
        print 'sentence is ', msg2, 'count is', count

if __name__ == '__main__':
    generate(msg)

This is a case where it's better to think before doing. If we ignore capitalization and punctuation, your target string comprises 28 characters, each of which can in principle be any of the 26 letters of the alphabet or a space. The number of combinations is 27^28, which happens to be 11972515182562019788602740026717047105681. If you could check a billion guesses per second, then 27^28 / 1e9 (tries/sec) / 3600 (sec/hr) / 24 (hrs/day) / 365.25 (days/yr) / 14e9 (yrs/current age of universe) ≈ 27099008032844.3. The good news is that you might stumble on the answer at any point, so the expected amount of time is only half of 27 trillion times the current age of the universe.
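That arithmetic is easy to sanity-check with Python's arbitrary-precision integers (a quick check of the numbers above, not part of any solution):

combos = 27 ** 28
print(combos)  # 11972515182562019788602740026717047105681

seconds = combos / 1e9                 # at a billion guesses per second
years = seconds / 3600 / 24 / 365.25
print(years / 14e9)                    # ~2.7e13 ages of the universe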
Blowing out the stack is the least of your problems.
The reason it's called the infinite monkey theorem is that you can divide that figure by the number of monkeys working on it in parallel; if that number is infinite, the solution time drops to the time one monkey needs to produce a single guess: a billionth of a second.
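If you want to play with a finite troop of monkeys in parallel, here is a minimal multiprocessing sketch (my own illustration, not the OP's code; each worker gives up after max_tries so the program actually terminates):

import random
from multiprocessing import Pool

ALPHABET = 'abcdefghijklmnopqrstuvwxyz '
QUOTE = 'methinks it is like a weasel'

def monkey(args):
    """One monkey: guess random sentences until a match or max_tries."""
    seed, max_tries = args
    rng = random.Random(seed)
    for count in range(1, max_tries + 1):
        guess = ''.join(rng.choice(ALPHABET) for _ in range(len(QUOTE)))
        if guess == QUOTE:
            return count
    return None  # gave up, as the combinatorics above predicts

if __name__ == '__main__':
    with Pool(4) as pool:
        # Four monkeys, a million guesses each: expect four Nones.
        print(pool.map(monkey, [(seed, 10**6) for seed in range(4)]))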

It would be better not to call verify() from generate() (and vice versa), given the likely event that the monkeys have not yet written Shakespeare.
Having two functions repeatedly call one another without ever returning is what causes the recursion depth to be exceeded.
Instead of using recursion, you could simply check whether you've produced your sentence with an iterative approach. For example have a loop which takes a random sentence, then checks whether it matches your required sentence, and if so, outputs the number of tries it took (and if not loops back to the start).

done = False
count = 1
while not done:
    msg = generate()
    if verify(msg):
        print 'success, count = ', count
        done = True
    count += 1
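Fleshed out, a self-contained version of that loop might look like the following (a sketch in Python 3, with generate() and verify() reworked to return values instead of recursing; fair warning, the combinatorics above means it will not finish in your lifetime):

import random

ALPHABET = 'abcdefghijklmnopqrstuvwxyz '
QUOTE = 'methinks it is like a weasel'

def generate():
    """Return one random sentence the same length as the target."""
    return ''.join(random.choice(ALPHABET) for _ in range(len(QUOTE)))

def verify(sentence):
    """Return True when the random sentence matches the target."""
    return sentence == QUOTE

count = 0
done = False
while not done:
    count += 1
    done = verify(generate())
print('success, count =', count)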

Maybe something like the following. It runs on CPython 2.[67], CPython 3.[01234], pypy 2.4.0, pypy3 2.3.1 and jython 2.7b3. It should take a very long time to run with --production, even on pypy or pypy3.
#!/usr/local/cpython-3.4/bin/python

'''Infinite monkeys randomly typing Shakespeare (or one monkey randomly typing Shakespeare very fast)'''

# pylint: disable=superfluous-parens
# superfluous-parens: Parentheses are good for clarity and portability

import sys
import itertools

def generate(alphabet, desired_string, divisor):
    '''Generate matches'''
    desired_tuple = tuple(desired_string)
    num_possibilities = len(alphabet) ** len(desired_string)
    for candidateno, candidate_tuple in enumerate(itertools.product(alphabet, repeat=len(desired_string))):
        if candidateno % divisor == 0:
            sys.stderr.write('checking candidateno {0} ({1}%)\n'.format(candidateno, candidateno * 100.0 / num_possibilities))
        if candidate_tuple == desired_tuple:
            match = ''.join(candidate_tuple)
            yield match

def usage(retval):
    '''Output a usage message'''
    sys.stderr.write('Usage: {0} --production\n'.format(sys.argv[0]))
    sys.exit(retval)

def print_them(alphabet, quote, divisor):
    '''Print the matches'''
    for matchno, match in enumerate(generate(alphabet, quote, divisor)):
        print('{0} {1}'.format(matchno, match))

def main():
    '''Main function'''
    production = False
    while sys.argv[1:]:
        if sys.argv[1] == '--production':
            production = True
        elif sys.argv[1] in ['--help', '-h']:
            usage(0)
        else:
            sys.stderr.write('{0}: Unrecognized option: {1}\n'.format(sys.argv[0], sys.argv[1]))
            usage(1)
        del sys.argv[1]  # consume the option so the loop terminates
    if production:
        print_them(alphabet='abcdefghijklmnopqrstuvwxyz ', quote='methinks it is like a weasel', divisor=10000)
    else:
        print_them(alphabet='abcdef', quote='cab', divisor=10)

main()
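As a usage note on the toy (non --production) mode: itertools.product yields candidates in lexicographic order, so 'cab' is candidate index 73 of the 216 possibilities over 'abcdef', and the script prints a single match line amid the progress messages on stderr. Easy to confirm:

import itertools
candidates = list(itertools.product('abcdef', repeat=3))
print(candidates.index(('c', 'a', 'b')))  # 73  (= 2*36 + 0*6 + 1)
print(len(candidates))                    # 216 (= 6**3)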

Related

Trying to speed up recursive function, but memoization is making it take longer

I am trying to implement regulation 9.3 from the FIDE Chess Olympiad pairing system.
Below is the script I'm trying to run. When I comment out the @cached line, it actually runs faster. I want to use this function for even values of n up to ~100.
import itertools
from copy import deepcopy
from memoization import cached

@cached
def pairing(n, usedTeams=[], teams=None, reverse=False):
    """
    Returns the pairings of a list of teams based on their index in their position in the pool.

    Arguments:
        n = number of Teams
        usedTeams = a parameter used in recursion to carry the found matches to the end of the recursion (i.e. a leaf node)
        teams = used in recursion ^^
        reverse = if you need to prioritize finding a pairing for the lowest rated team

    Returns:
        A list of lists of match pairings
    """
    # print('trying to pair', n, ' teams')
    # if n > 10:
    #     return None
    if teams is None:
        teams = list(range(0, n))
        global matches
        matches = []
    if reverse == True:
        teams.reverse()
    usedTeams = deepcopy(usedTeams)
    oppTeams = []
    if len(teams) == 2:
        usedTeams.append([teams[0], teams[1]])
        matches.append(usedTeams)
    elif len(teams) > 2:
        team = teams[0]
        oppTeams = [teams[i] for i in itertools.chain(range(round(n/2), n), range(round(n/2)-1, 0, -1))]
        currUsed = deepcopy(usedTeams)
        for opp in oppTeams:
            newUsed = currUsed + [[team, opp]]
            if len(oppTeams) > 1:
                tmpTeams = [t for t in teams if t not in [team, opp]]
                pairing(len(tmpTeams), newUsed, tmpTeams)
    return matches

import time
start = time.process_time()
pairing(12, [], None)
print(time.process_time() - start)
Any tips for making this run faster, or using memoization differently?
I modified your code to find out:
import itertools
from copy import deepcopy
from memoization import cached

# set up a record of call parameters
from collections import defaultdict
calls = defaultdict(int)

@cached
def pairing(n, usedTeams=[], teams=None, reverse=False):
    # count this call
    calls[(
        n,
        tuple(tuple(t) for t in usedTeams) if usedTeams is not None else None,
        tuple(teams) if teams is not None else None,
        reverse
    )] += 1
    ...  # your same code here, left out for brevity

import time
start = time.process_time()
pairing(12, [], None)
print(time.process_time() - start)

# print the average number of calls for any parameter combination
print(sum(calls.values()) / len(calls))
Output:
0.265625
1.0
The average number of calls for any combination of parameters is 1.0; in other words, memoization will do exactly nothing here except add overhead. Memoization can only speed up your code if the function gets called with the same parameters repeatedly, and only when that happens frequently enough to offset the overhead cost of the cache.
In this case you are paying the overhead, but since the function is never called twice with the same parameters, not even once, there is no benefit.
And my test is being generous: it assumes that @cached will somehow cleverly figure out that two lists passed in have the same contents, for example, without incurring an impossible overhead, which I don't know that it does. So the test assumes the most favourable effectiveness of @cached, but to no avail.
More generally, it's safe to assume there is no magic sauce you can just throw at a program, without some analysis and careful application, to make it faster. If there were, the language or compiler would likely do it by default, or offer it as an easy option (for example when trading space for speed, as with memoization). You can of course get lucky and have the particular sauce you throw at it work in some case, but even then it would probably pay to analyse carefully where it does the most good, or any good at all, instead of drowning your code in it.
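For contrast, here is the textbook case where memoization does earn its keep: a function that is genuinely re-invoked with the same arguments, sketched with the standard library's functools.lru_cache rather than the memoization package:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache this makes an exponential number of calls;
    # with it, each n is computed once and repeats are dictionary hits.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))  # returns instantly instead of effectively never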

Mime type optimisation in python

I want to solve the MIME type challenge on codingame.com. My code passes all of the correctness tests but not the optimisation test.
I tried to remove everything useless, like parsing to string, but I think the problem is in the way I approach it.
import sys
import math

# Auto-generated code below aims at helping you parse
# the standard input according to the problem statement.

n = int(input())  # Number of elements which make up the association table.
q = int(input())  # Number Q of file names to be analyzed.

dico = {}

# My function
def check(word):
    for item in dico:
        if word[-len(item)-1:].upper() == "." + item.upper():
            return dico[item]
    return "UNKNOWN"

for i in range(n):
    # ext: file extension
    # mt: MIME type.
    ext, mt = input().split()
    dico[ext] = mt

for i in range(q):
    fname = input()
    print(check(fname))

# Write an action using print
# To debug: print("Debug messages...", file=sys.stderr)
Failure
Process has timed out. This may mean that your solution is not optimized enough to handle some cases.
This is the right idea, but one detail appears to be destroying the performance. The problem is the line for item in dico:, which unnecessarily loops over every entry in the dictionary. This is a linear search O(n), checking for the target item-by-item. But this pretty much defeats the purpose of the dictionary data structure, which is to offer constant-time O(1) lookups. "Constant time" means that no matter how big the dictionary gets, the time it takes to find an item is always the same (thanks to hashing).
To draw a metaphor: imagine you're looking for a spoon in your own kitchen. If you know ahead of time where all the utensils, appliances and cookware are, you don't need to look in every drawer to find what you want. Instead, you go straight to the utensils drawer containing the spoon, and it's one shot!
On the other hand, if you're in someone else's kitchen, it can be difficult to find a spoon. You have to start at one end of the cupboard and check every drawer until you find the utensils. In the worst-case, you might get unlucky and have to check every drawer before you find the utensil drawer.
Back to the code, the above snippet is using the latter approach, but we're dealing with trying to find something in 10k unfamiliar kitchens each with 10k drawers. Pretty slow, right?
If you can adjust the solution to check the dictionary in constant time, without a loop, then you can handle n = 10000 and q = 10000 without having to make q * n iterations (you can do it in q iterations instead, which is so much faster!).
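For illustration, the constant-time pattern looks something like this (my sketch; the solution you arrived at below does essentially the same thing):

def check(word, dico):
    # str.rsplit splits from the right, so this grabs the final extension
    ext = word.rsplit('.', 1)[-1].lower() if '.' in word else None
    return dico.get(ext, 'UNKNOWN')  # a single hash lookup, no loop

print(check('photo.JPG', {'jpg': 'image/jpeg'}))  # image/jpeg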
Thank you for your help,
I figured out the solution.
n = int(input())  # Number of elements which make up the association table.
q = int(input())  # Number Q of file names to be analyzed.

dico = {}

# My function
def check(word):
    if "." in word:
        ext_len = len(word) - (word.rfind(".") + 1)
        extension = word[-ext_len:].lower()
        if extension in dico:
            return dico[extension]
    return "UNKNOWN"

for i in range(n):
    # ext: file extension
    # mt: MIME type.
    ext, mt = input().split()
    dico[ext.lower()] = mt

for i in range(q):
    fname = input()
    print(check(fname))
Your explanation was clear :D
Thank you

Running unit tests on non-OOP code without any return value

I have a script of this form which is called in a program (from another similar function). I need to use unittest module to write tests for this function.
It doesn't exactly return anything but changes a lot of globals
It takes inputs
I can't change it to an OOP code right now
I want to test cases where I change a certain global variable and see if, let's say, TOTFRAC is positive or something similar.
I have read about tests for OOP code where each variable is accessed as an object attribute, but what do I do if my code isn't object-oriented?
Note: I have removed a lot of lines of code because it is rather long, so things might not exactly make sense.
import sys
import numpy
import math

def SETUP(LAST):
    def GOTO999():
        print(' ERROR IN GAS INPUT : NGAS=', NGAS, '\n')
        for J in range(1, 6):
            # print(J)
            print(' N=', J, ' NGAS=', NGASN[J], ' FRAC=', FRAC[J])
        LAST = 1
        return

    # A lot of globals
    # Initialising variables
    NBREM = []
    EBRTOT = []
    for K in range(1, 6):
        NBREM.append(0)
        EBRTOT.append(0.0)
    NGAS = int(input('NGAS'))
    NEVENT = int(input('NEVENT'))
    IMIP = int(input('IMIP'))
    NDVEC = int(input('NDVEC'))
    NSEED = int(input('NSEED'))
    ESTART = float(input('ESTART'))
    ETHRM = float(input('ETHRM'))
    ECUT = float(input('ECUT'))
    ICOUNT = 0
    if IMIP == 1:
        ICOUNT = 1
    if NGAS == 0:
        LAST = 1
        return
    if ESTART > 3.0*(10**6) and IMIP == 3:
        print(' SUBROUTINE STOPPED: X-RAY ENERGY=', '%.3f' % ESTART, 'EV. MAXIMUM ENERGY 3.0MEV')
        sys.exit()
    if IMIP != 1 and NEVENT > 10000:
        print(' SUBROUTINE STOPPED: NUMBER OF EVENTS =', NEVENT, ' LARGER THAN ARRAY LIMIT OF 10000')
        sys.exit()
    NGASN = []
    for i in range(1, 6):
        NGASN.append(int(input('NGASN' + str(i))))
    FRAC = []
    for i in range(1, 6):
        FRAC.append(round(float(input('FRAC')), 4))
    TEMPC = round(float(input('TEMPC')), 4)
    TORR = round(float(input('TORR')), 4)
    # more inputs
    if IWRITE != 0:
        outputfile = open("DEGRAD.OUT", "w")
    EBIG = 0.05*ESTART/1000.
    EFINAL = ESTART*1.0001 + 760.0*EBIG/TORR*(TEMPC+ABZERO)/293.15*EFIELD
    if EFINAL < (1.01*ESTART):
        EFINAL = 1.01*ESTART
    # CHECK INPUT
    TOTFRAC = 0.00
    if NGAS == 0 or NGAS > 6:
        GOTO999()
    for J in range(1, NGAS):
        print('J', J)
        if NGASN[J] == 0 or FRAC[J] == 0.00:
            GOTO999()
        TOTFRAC = TOTFRAC + FRAC[J]
    if abs(TOTFRAC - 100.00) > 1*(10**-6):
        print(TOTFRAC)
        GOTO999()
    if NDVEC == 1:  # 22594
        PHI = 0
        THETA = 0
    elif NDVEC == -1:
        PHI = 0
        THETA = numpy.arccos(-1)
    elif NDVEC == 0:
        PHI = 0.0
        THETA = API/2.0
    elif NDVEC == 2:
        R3 = DRAND48(0.0, 1.0)
        PHI = TWOPI*R3
        R4 = DRAND48(1.5, 1.9)
        THETA = numpy.arccos(1.0 - 2.0*R4)
    else:
        print('DIRECTION OF BEAM NOT DEFINED NDVEC =', NDVEC)
        sys.exit()
    if NSEED != 0:
        RM48(NSEED, 0, 0)
    CORR = ABZERO*TORR/(ATMOS*(ABZERO+TEMPC)*100.00)
    GOTO999()
    # end
As @hoefling has pointed out, the code as it stands is hardly testable. But this is not because it isn't object oriented. You can easily test non-object-oriented code as long as it has a suitable structure. That means breaking long functions down into smaller ones, using function arguments and return values rather than global variables, and so on; none of this has anything to do with OOP. In fact, your code violates many coding principles that were known and formulated long before OOP was in vogue (1974), see https://en.wikipedia.org/wiki/The_Elements_of_Programming_Style.
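As a concrete illustration, suppose you pull the TOTFRAC check out of SETUP into a small pure function; then a standard unittest becomes straightforward. A sketch with hypothetical names, not your actual code:

import unittest

def total_fraction(fracs):
    """Hypothetical extraction of the TOTFRAC accumulation from SETUP."""
    return sum(fracs)

def fractions_valid(fracs):
    """True when the gas fractions add up to 100% within tolerance."""
    return abs(total_fraction(fracs) - 100.0) <= 1e-6

class TestFractions(unittest.TestCase):
    def test_valid_mixture(self):
        self.assertTrue(fractions_valid([90.0, 10.0]))

    def test_invalid_mixture(self):
        self.assertFalse(fractions_valid([90.0, 5.0]))

    def test_total_is_positive(self):
        self.assertGreater(total_fraction([90.0, 10.0]), 0)

if __name__ == '__main__':
    unittest.main()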
I recommend reading one or some of the following books:
Code Complete by Steve McConnell - a classic book about how to write great code.
Refactoring by Martin Fowler - how to migrate from not-so-great code to better code.
Clean Code by Robert C. Martin - again, how to write great code.

python code to generate password list [closed]

I am researching wireless security and trying to write a Python script to generate passwords; not random ones, but a dictionary of hex numbers. The letters need to be capital, and the numbers have to go from 12 characters to 20 characters. I went from 11 f's to 20 f's, which seems like it would meet the requirements. I then tried to place them in a text file. After I made the file, I chmod'ed it to 777 and clicked run. It has been a few minutes, and I cannot tell whether it is working. I am running it in Kali right now, on a 64-bit Core i3 with 8 GB of RAM. I'm not sure how long it should take, but here is my code; please let me know if it looks right:
# generate 10 to 32 character password list using hex numbers, 0-9 A-F
def gen_pwd(x):
    x = range(17592186044415 - 295147905179352830000)

    def toHex(dec):
        x = (dec % 16)
        digits = "0123456789ABCDEF"
        rest = dec / 16
        if (rest == 0):
            return digits[x]
        return toHex(rest) + digits[x]

    for x in range(x):
        print toHex(x)
        f = open(/root/Home/sdnlnk_pwd.txt)
        print f
        value = x
        string = str(value)
        f.write(string)

gen_pwd
how bout just
password = hex(random.randint(1000000,100000000))[2:]
or
pw_len = 12
my_alphabet = "1234567890ABCDEF"
password = "".join(random.choice(my_alphabet) for _ in range(pw_len))
or, maybe closer to what you are trying to do:
struct.pack("Q",12365468987654).encode("hex").upper()
Basically, you are overcomplicating a very simple task. To do exactly what you are asking, you can simplify it:
import itertools, struct

def int_to_chars(d):
    '''
    step 1: break into bytes
    '''
    while d > 0:                          # while we have not consumed
        yield struct.pack("B", d & 0xFF)  # decode char
        d >>= 8                           # shift right one byte
    yield ""  # a terminator just in case it's empty

def to_password(d):
    # this will convert an arbitrarily large number to a password
    return "".join(int_to_chars(d)).encode("hex").upper()
    # you could probably just get away with `return hex(d)[2:]`

def all_the_passwords(minimum, maximum):
    #: since our numbers are so big we need to resort to some trickery
    all_pw = itertools.takewhile(lambda x: x < maximum,
                                 itertools.count(minimum))
    for pw in all_pw:
        yield to_password(pw)

all_passwords = all_the_passwords(0xfffffffffff, 0xffffffffffffffffffff)

# this next bit is gonna take a while ... go get some coffee or something
for pw in all_passwords:
    print pw
# you will be waiting for it to finish for a very long time ... but it will get there
You can use time.time() to measure the execution time. And if you are using Python 2, use xrange() instead of range(), because xrange() returns an iterator:
import time

def gen_pwd(x):
    def toHex(dec):
        x = (dec % 16)
        digits = "0123456789ABCDEF"
        rest = dec / 16
        if (rest == 0):
            return digits[x]
        return toHex(rest) + digits[x]

    for x in xrange(x):
        print toHex(x)
        f = open("/root/Home/sdnlnk_pwd.txt")
        print f
        value = x
        string = str(value)
        f.write(string)

start = time.time()
gen_pwd()
last = time.time() - start
print last
Note: you need () to call your function, and quotes ("") around the path in your open() call. Also, I think your first range() line is a stray statement; it's wrong, and you should remove it.
Disclaimer
I'd like to comment on the OP's question, but I need to show some code and also the output that said code produces, so I decided to present my comment in the format of an answer.
OTOH, I hope this comment persuades the OP that her/his undertaking, while conceptually simple (see my previous answer, 6 lines of Python code), is not feasible with available resources (I mean, available on Planet Earth).
Code
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
pg = lambda n: locale.format("%d", n, grouping=True)

def count_bytes(low, hi):
    count = low + 1
    for i in range(low+1, hi+1):
        nn = 15*16**(i-1)
        nc = i + 1
        count = count + nn*nc
    return count

n_b = count_bytes(10, 20)
n_d = n_b/4/10**12
dollars = 139.99*n_d

print "Total number of bytes to write on disk:", pg(n_b)
print """
Considering the use of
WD Green WD40EZRX 4TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5\" Internal Hard Drives,
that you can shop at $139.99 each
(see <http://www.newegg.com/Product/Product.aspx?Item=N82E16822236604>,
retrieved on December 29th, 2014)."""
print "\nNumber of 4TB hard disk drives necessary:", pg(n_d)
print "\nCost of said hard disks: $" + pg(dollars)
Output
Total number of bytes to write on disk: 25,306,847,157,254,216,063,385,611
Considering the use of
WD Green WD40EZRX 4TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drives,
that you can shop at $139.99 each
(see <http://www.newegg.com/Product/Product.aspx?Item=N82E16822236604>,
retrieved on December 29th, 2014).
Number of 4TB hard disk drives necessary: 6,326,711,789,313
Cost of said hard disks: $885,676,383,385,926
My comment on what the OP wants to do
Quite a bit of disk storage (and money) is needed to accomplish your undertaking.
Perspective
The projected US federal debt at the end of fiscal year 2014 is $18.23 trillion; my estimated cost, not counting racks, power supplies and energy bills, is $886 trillion.
Recommended reading
Combinatorial explosion (Wikipedia), in particular the Sussex University example.
There is hope
If you are still determined to pursue your research project on wireless security in the direction you've described, it is possible that you can get a substantial volume discount on the purchase of the drives.
characters = ["a", "b", "c"]
for x, y in zip(range(5), characters):
    print(hex(x) + y)
Output:
>>>
0x0a
0x1b
0x2c
>>>
You see, it's actually doing that in a short way. It is not feasible with a range as large as yours; keep the range small and append other elements to your result.
Also, for file handling, here is a better way:
with open("filepath/name", "a+") as f:
    f.write("whateveryouwanttowrite")
I have worked with password generators; it works better if you define a dict of complicated character groups and combine them, like:
passw = {"h": "_*2ac", "e": "=.kq", "y": "%.hq1"}
x = input("Wanna make some passwords? Enter a sentence or word: ")
for i in x:
    print(passw[i], end="")
    with open("passwords.txt", "a+") as f:
        f.write(passw[i])
Output:
>>>
Wanna make some passwords? Enter a sentence or word: hey
_*2ac=.kq%.hq1
>>>
So, just define a dict with the alphabet as keys and complicated character groups as values, and you can make very strong passwords out of simple words and sentences. I wrote this just as an example; of course you can add entries to the dict later, you don't have to hard-code them all. But I think this basic approach works best.
Preamble
I don't want to comment on what you want to do.
Code MkI
Your code can be trimmed (quite a bit) to the following
with open("myfile", "w") as f:
    for x in xrange(0xff, 0xff*2+1):
        f.write("%X\n" % x)
Comments on my code
Please note that
You can write hex numbers in source code as, ehm, hex numbers, and you can mix hex and decimal notation as well.
The toHex function is redundant, as Python has (surprise!) a number of different ways to format your output as you please (here I've used so-called string interpolation).
Of course you have to change the filename in the open statement, and adjust the endpoints of the interval generated by xrange (it seems you're using Python 2.x) as needed.
Code MkII
Joran Beasley remarked that (at least in Python 2.7) xrange internally uses a C long and as such it cannot step up to the task of representing
0XFFFFFFFFFFFFFFFFFFFF. This alternative code may be a possibility:
f = open("myfile", "w")
cursor = 0XFFFFFFFFFF
end = 0XFFFFFFFFFFFFFFFFFFFF
while cursor <= end:
    f.write("%X\n" % cursor)
    cursor += 1
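Note that this limitation is specific to Python 2's xrange: Python 3's range is lazy and accepts arbitrary-precision integers, so there the MkI shape would work unchanged. A sketch, assuming a Python 3 interpreter:

# Python 3: range accepts arbitrarily large ints and stays lazy.
with open("myfile", "w") as f:
    for x in range(0xFFFFFFFFFF, 0xFFFFFFFFFFFFFFFFFFFF + 1):
        f.write("%X\n" % x)  # still a hopeless amount of output, of course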
All of this is well and good; however, none of it accomplishes my purpose. If Python cannot handle such large numbers, I will have to use something else. As I stated, I do not want to generate anything random; I need a list of sequential hex strings anywhere from 12 to 20 characters long. It is to make a dictionary of passwords which are nothing more than a hex number about 16 characters long.
Does anyone have any suggestions on what I can use for this purpose? I think some type of C language should do the trick, but I know even less about C or C++ than about Python. It sounds like this will take a while, but that's OK; it is just a research project.
I have come up with another possibility: counting in hex, starting from 11 f's and going until I reach 20 f's. This would produce about 4.3 billion numbers, which should fit in a 79-million-page word document. That sounds a little large, but if I go from 14 f's to 18 f's, it should be manageable. Here is the code I am proposing now:
x = 0xffffffffffffff

def gen_pwd(x):
    while x <= 0xffffffffffffffffff:
        return x
        string = str(x)
        f = open("root/Home/sdnlnk_pwd.txt")
        print f.upper(string, 'a')
        f.write(string)
        x = x + 0x1

gen_pwd()

Using time.time() to time a function often return 0 seconds

I have to time the implementation I did of an algorithm in one of my classes, and I am using the time.time() function to do so. After implementing it, I have to run that algorithm on a number of data files which contains small and bigger data sets in order to formally analyse its complexity.
Unfortunately, on the small data sets I get a runtime of 0 seconds, even though the function reports values with a precision of around 1e-18 seconds on the bigger data sets, and I cannot believe the small data sets really run in less time than that.
My question is: Is there a problem using this function (and if so, is there another function I can use that has a better precision)? Or am I doing something wrong?
Here is my code if ever you need it:
import sys, time
import random
from utility import parseSystemArguments, printResults
...

def main(ville):
    start = time.time()
    solution = dynamique(ville)  # Algorithm implementation
    end = time.time()
    return (end - start, solution)

if __name__ == "__main__":
    sys.argv.insert(1, "-a")
    sys.argv.insert(2, "3")
    (algoNumber, ville, printList) = parseSystemArguments()
    (algoTime, solution) = main(ville)
    printResults(algoTime, solution, printList)
The printResults function:
def printResults(time, solution, printList=True):
    print("Temps d'execution = " + str(time) + "s")
    if printList:
        print(solution)
The solution to my problem was to use the timeit module instead of the time module.
import timeit
...

def main(ville):
    start = timeit.default_timer()
    solution = dynamique(ville)
    end = timeit.default_timer()
    return (end - start, solution)
Don't confuse the resolution of the system time with the resolution of a floating point number. The time resolution on a computer is only as frequent as the system clock is updated. How often the system clock is updated varies from machine to machine, so to ensure that you will see a difference with time, you will need to make sure it executes for a millisecond or more. Try putting it into a loop like this:
start = time.time()
k = 100000
for i in range(k):
    solution = dynamique(ville)
end = time.time()
return ((end - start) / k, solution)
In the final tally, you then need to divide by the number of loop iterations to know how long your code actually runs once through. You may need to increase k to get a good measure of the execution time, or you may need to decrease it if your computer is running in the loop for a very long time.
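The timeit module automates exactly this repeat-and-divide pattern. A minimal sketch, assuming dynamique and ville are in scope:

import timeit

# timeit runs the callable `number` times and returns the total seconds;
# divide by `number` for the per-call average.
total = timeit.timeit(lambda: dynamique(ville), number=1000)
print(total / 1000)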
