I have an application that is kind of like a URL shortener, and I need to generate a unique URL whenever a user requests one.
For this I need a function to map an index/number to a unique string of length n, with two requirements:
Two different numbers cannot generate the same string. In other words, as long as i, j < K: f(i) != f(j), where K is the number of possible strings = 26^n (26 is the number of letters in the English alphabet).
Two strings generated by numbers i and i+1 should not look similar most of the time. For example, they should not be abcdef1 and abcdef2 (so that users cannot predict the pattern and the next IDs).
This is my current code in Python:
chars = "abcdefghijklmnopqrstuvwxyz"
for item in itertools.product(chars, repeat=n):
print("".join(item))
# For n = 7 generates:
# aaaaaaa
# aaaaaab
# aaaaaac
# ...
The problem with this code is there is no index that I can use to generate unique strings on demand by tracking that index. For example generate 1 million unique strings today and 2 million tomorrow without looping through or collision with the first 1 million.
The other problem with this code is that the strings that are created after each other look very similar and I need them to look random.
One option is to populate a table/dictionary with millions of strings, shuffle them, and keep track of an index into that table, but that takes a lot of memory.
Another option is to check the database of existing IDs after generating a random string to make sure it doesn't exist, but the problem is that as I get closer to K (26^n) the chance of collision increases, and it wouldn't be efficient to make a lot of check_if_exist queries against the database.
Also, if n were long enough I could use a UUID with a small chance of collision, but in my case n is 7.
I'm going to outline a solution for you that is going to resist casual inspection even by a knowledgeable person, though it probably IS NOT cryptographically secure.
First, your strings and numbers are in a one-to-one map. Here is some simple code for that.
alphabet = 'abcdefghijklmnopqrstuvwxyz'
len_of_codes = 7

char_to_pos = {}
for i in range(len(alphabet)):
    char_to_pos[alphabet[i]] = i

def number_to_string(n):
    chars = []
    for _ in range(len_of_codes):
        chars.append(alphabet[n % len(alphabet)])
        n = n // len(alphabet)
    return "".join(reversed(chars))

def string_to_number(s):
    n = 0
    for c in s:
        n = len(alphabet) * n + char_to_pos[c]
    return n
So now your problem is how to take an ascending stream of numbers and get an apparently random stream of numbers out of it instead. (Because you know how to turn those into strings.) Well, there are lots of tricks for primes, so let's find a decent sized prime that fits in the range that you want.
def is_prime(n):
    for i in range(2, n):
        if 0 == n % i:
            return False
        elif n < i*i:
            return True
    if n == 2:
        return True
    else:
        return False

def last_prime_before(n):
    for m in range(n-1, 1, -1):
        if is_prime(m):
            return m
print(last_prime_before(len(alphabet)**len_of_codes))
With this we find that we can use the prime 8031810103. That's how many numbers we'll be able to handle.
Now there is an easy way to scramble them. Which is to use the fact that multiplication modulo a prime scrambles the numbers in the range 1..(p-1).
def scramble1(p, k, n):
    return (n*k) % p
Picking a random number to scramble by (int(random.random() * 26**7) happened to give me 3661807866), we get a sequence we can calculate with:
for i in range(1, 5):
    print(number_to_string(scramble1(8031810103, 3661807866, i)))
Which gives us
lwfdjoc
xskgtce
jopkctb
vkunmhd
This looks random to casual inspection, but it will be reversible for any knowledgeable person who puts modest effort in. They just have to guess the prime and the algorithm that we used, look at 2 consecutive values to get the hidden parameter, then look at a couple more to verify it.
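To make that concrete, here is a minimal sketch of the attack, reusing scramble1 from above (the indexes 41 and 42 are arbitrary): once an observer has guessed the prime and seen the outputs for two consecutive indexes, the multiplier falls out of a single modular subtraction.
# Hedged sketch: recover the hidden multiplier from two consecutive outputs,
# assuming the attacker has guessed the prime p and the scramble1 scheme.
p, k = 8031810103, 3661807866
v1 = scramble1(p, k, 41)      # observed output for index 41
v2 = scramble1(p, k, 42)      # observed output for index 42
recovered = (v2 - v1) % p     # (i+1)*k - i*k == k  (mod p)
assert recovered == k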
Before addressing that, let's figure out how to take a string and get the number back. Thanks to Fermat's little theorem we know for p prime and 1 <= k < p that (k * k^(p-2)) % p == 1.
def n_pow_m_mod_k(n, m, k):
    answer = 1
    while 0 < m:
        if 1 == m % 2:
            answer = (answer * n) % k
        m = m // 2
        n = (n * n) % k
    return answer
print(n_pow_m_mod_k(3661807866, 8031810103-2, 8031810103))
This gives us 3319920713. Armed with that we can calculate scramble1(8031810103, 3319920713, string_to_number("vkunmhd")) to find out that vkunmhd came from 4.
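As a quick sanity check, here is a hedged sketch of the round trip for the first few indexes, reusing only the helpers defined above:
k_inv = 3319920713
for i in range(1, 5):
    s = number_to_string(scramble1(8031810103, 3661807866, i))
    assert scramble1(8031810103, k_inv, string_to_number(s)) == i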
Now let's make it harder. Let's generate several keys to be scrambling with:
import random

p = 26**7
for i in range(5):
    p = last_prime_before(p)
    print((p, int(random.random() * p)))
When I ran this I happened to get:
(8031810103, 3661807866)
(8031810097, 3163265427)
(8031810091, 7069619503)
(8031809963, 6528177934)
(8031809917, 991731572)
Now let's scramble through several layers, working from smallest prime to largest (this requires reversing the sequence):
def encode(n):
    for p, k in [
        (8031809917, 991731572),
        (8031809963, 6528177934),
        (8031810091, 7069619503),
        (8031810097, 3163265427),
        (8031810103, 3661807866),
    ]:
        n = scramble1(p, k, n)
    return number_to_string(n)
This will give a sequence:
ehidzxf
shsifyl
gicmmcm
ofaroeg
And to reverse it just use the same trick that reversed the first scramble (reversing the primes so I am unscrambling in the order that I started with):
def decode(s):
    n = string_to_number(s)
    for p, k in [
        (8031810103, 3319920713),
        (8031810097, 4707272543),
        (8031810091, 5077139687),
        (8031809963, 192273749),
        (8031809917, 5986071506),
    ]:
        n = scramble1(p, k, n)
    return n
TO BE CLEAR I do NOT promise that this is cryptographically secure. I'm not a cryptographer, and I'm aware enough of my limitations that I know not to trust it.
But I do promise that you'll have a sequence of over 8 billion strings that you are able to encode/decode with no obvious patterns.
Now take this code, scramble the alphabet, regenerate the magic numbers that I used, and choose a different number of layers to go through. I promise you that I personally have absolutely no idea how someone would even approach the problem of figuring out the algorithm. (But then again I'm not a cryptographer. Maybe they have some techniques to try. I sure don't.)
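If it helps, here is a hedged sketch of that regeneration step, reusing last_prime_before, n_pow_m_mod_k and scramble1 from above. The names make_layers, encode2 and decode2, the layer count, and the use of random.randrange are my own arbitrary choices for illustration, not part of the original recipe.
import random

def make_layers(num_layers=5):
    # Each layer is (prime, key, inverse key); the inverse comes from Fermat's little theorem.
    layers = []
    p = 26 ** 7
    for _ in range(num_layers):
        p = last_prime_before(p)
        k = random.randrange(1, p)            # never 0, so it is always invertible
        layers.append((p, k, n_pow_m_mod_k(k, p - 2, p)))
    return layers

def encode2(n, layers):
    for p, k, _ in reversed(layers):          # smallest prime first, as in encode() above
        n = scramble1(p, k, n)
    return number_to_string(n)

def decode2(s, layers):
    n = string_to_number(s)
    for p, _, k_inv in layers:                # undo the layers in the opposite order
        n = scramble1(p, k_inv, n)
    return n

layers = make_layers()
assert all(decode2(encode2(i, layers), layers) == i for i in range(1, 100))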
How about this:
from random import Random

n = 7

def f(i):
    myrandom = Random()
    myrandom.seed(i)
    alphabet = "123456789"
    return "".join([myrandom.choice(alphabet) for _ in range(n)])
# same entry, same output
assert f(0) == "7715987"
assert f(0) == "7715987"
assert f(0) == "7715987"
# different entry, different output
assert f(1) == "3252888"
(Change the alphabet to match your needs.)
This "emulates" a UUID, since you said you could accept a small chance of collision. If you want to avoid collisions, what you really need is a perfect hash function (https://en.wikipedia.org/wiki/Perfect_hash_function).
You can try something based on the SHA-1 hash:
#!/usr/bin/python3
import hashlib

def generate_link(i):
    n = 7
    a = "abcdefghijklmnopqrstuvwxyz0123456789"  # 36 characters, matching the % 36 below
    return "".join(a[x % 36] for x in hashlib.sha1(str(i).encode('ascii')).digest()[-n:])
This is a really simple example of what I outlined in this comment. It just offsets the number based on i. If you want "different" strings, don't use this, because if num is 0, then you will get abcdefg (with n = 7).
alphabet = "abcdefghijklmnopqrstuvwxyz"
# num is the num to convert, i is the "offset"
def num_to_char(num, i):
return alphabet[(num + i) % 26]
# generate the link
def generate_link(num, n):
return "".join([num_to_char(num, i) for i in range(n)])
generate_link(0, 7) # "abcdefg"
generate_link(0, 7) # still "abcdefg"
generate_link(0, 7) # again, "abcdefg"!
generate_link(1, 7) # now it's "bcdefgh"!
You would just need to change the num + i to some complicated and obscure math equation.
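For instance, here is a hedged sketch of what such an obscured variant might look like. The constants and the helper names num_to_char_obscure and generate_link_obscure are made up for illustration, and like the original it does not by itself guarantee uniqueness across all inputs.
def num_to_char_obscure(num, i):
    # arbitrary mixing of num and i; any deterministic formula works here
    return alphabet[(num * 2654435761 + (i * i + 7) * 40503) % 26]

def generate_link_obscure(num, n):
    return "".join(num_to_char_obscure(num, i) for i in range(n))

generate_link_obscure(0, 7)  # no longer "abcdefg"
generate_link_obscure(1, 7)  # looks unrelated to the previous string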
I'm trying to complete the following challenge: https://app.codesignal.com/challenge/ZGBMLJXrFfomwYiPs.
I have written code that appears to work; however, it is so inefficient that it fails the test (it takes too long to execute and uses too much memory). Are there any ways I can make this more efficient? I'm quite new to writing efficient scripts. Someone mentioned "map()" can be used in lieu of "for i in range(1, n)". Thank you Xero Smith and others for the suggestions for optimising it this far:
from functools import reduce
from operator import mul
from itertools import combinations
# Starting from the maximum, we can divide our bag combinations to see the total number of integer factors
def prime_factors(n):
    p = 2
    dct = {}
    while n != 1:
        if n % p:
            p += 1
        else:
            dct[p] = dct.get(p, 0) + 1
            n = n//p
    return dct

def number_of_factors(n):
    return reduce(mul, (i+1 for i in prime_factors(n).values()), 1)

def kinderLevon(bags):
    candies = list()
    for x in (combinations(bags, i) for i in range(1, len(bags)+1)):
        for j in x:
            candies.append(sum(j))
    satisfied_kids = [number_of_factors(i) for i in candies]
    return candies[satisfied_kids.index(max(satisfied_kids))]
Any help would be greatly appreciated.
Thanks,
Aaron
Following my comment, I can already identify a memory & complexity improvement. In your factors function, since you only need the number of factors, you can just count them instead of storing them.
def factors(n):
    k = 2
    for i in range(2, n//2 + 1):
        if n % i == 0:
            k += 1
    return k
EDIT: as suggested in the comments, stop the counter earlier.
This actually reduces time complexity for huge numbers, but not really for smaller ones.
This is a much better improvement than the one using list comprehensions (which still allocates memory).
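Taking "stop the counter earlier" to its natural limit, here is a hedged sketch that only trial-divides up to sqrt(n), counting divisors in pairs (the name count_factors_sqrt is mine):
def count_factors_sqrt(n):
    count = 0
    i = 1
    while i * i <= n:
        if n % i == 0:
            count += 2            # i and n // i are both divisors
            if i * i == n:
                count -= 1        # don't count sqrt(n) twice for perfect squares
        i += 1
    return count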
Moreover, it is pointless to allocate your combinations list twice. You're doing
x = list(combinations(bags, i));
for j in list(x):
    ...
On the first line you convert the iterator returned by combinations into a list, duplicating the data. On the second line, list(x) re-allocates yet another copy of the list, taking even more memory! There you should really just write:
for j in combinations(bags, i):
    ...
As a matter of syntax, please don't use semicolons (;) in Python!
First things first, combinations are iterable. This means you do not have to convert them into lists before you iterate over them; in fact it is terribly inefficient to do so.
Next thing that can be improved significantly is your factors procedure. Currently it is linear. We can do better. We can get the number of factors of an integer N via the following algorithm:
get the prime factorisation of N such that N = p1^n1 * p2^n2 * ...
the number of factors of N is (1+n1) * (1+n2) * ...
see https://www.wikihow.com/Find-How-Many-Factors-Are-in-a-Number for details.
Something else, your current solution has a lot of variables and computations that are not used. Get rid of them.
With these, we get the following which should work:
from functools import reduce
from operator import mul
from itertools import combinations
# Starting from the maximum, we can divide our bag combinations to see the total number of integer factors
def prime_factors(n):
    p = 2
    dct = {}
    while n != 1:
        if n % p:
            p += 1
        else:
            dct[p] = dct.get(p, 0) + 1
            n = n//p
    return dct

def number_of_factors(n):
    return reduce(mul, (i+1 for i in prime_factors(n).values()), 1)

def kinderLevon(bags):
    candies = list()
    for x in (combinations(bags, i) for i in range(1, len(bags)+1)):
        for j in x:
            candies.append(sum(j))
    satisfied_kids = [number_of_factors(i) for i in candies]
    return candies[satisfied_kids.index(max(satisfied_kids))]
Use list comprehensions. The factors function can be transformed like this:
def factors(n):
    return len([i for i in range(1, n + 1) if n % i == 0])
Two days ago I started practicing Python 2.7 on Codewars.com and I came across a really interesting problem; the only thing is, I think it's a bit too much for my level of Python knowledge. I actually did solve it in the end, but the site doesn't accept my solution because it takes too much time to complete when you call it with large numbers, so here is the code:
from itertools import permutations
def next_bigger(n):
    digz = list(str(n))
    nums = permutations(digz, len(digz))
    nums2 = []
    for i in nums:
        z = ''
        for b in range(0, len(i)):
            z += i[b]
        nums2.append(int(z))
    nums2 = list(set(nums2))
    nums2.sort()
    try:
        return nums2[nums2.index(n)+1]
    except:
        return -1
"You have to create a function that takes a positive integer number and returns the next bigger number formed by the same digits" - These were the original instructions
Also, at one point I decided to forgo the whole permutations idea, and in the middle of this second attempt I realized that there's no way it would work:
def next_bigger(n):
    for i in range(1, 11):
        c1 = n % (10**i) / (10**(i-1))
        c2 = n % (10**(i+1)) / (10**i)
        if c1 > c2:
            return ((n / (10**(i+1))) * 10**(i+1)) + c1 * (10**i) + c2 * (10**(i-1)) + n % (10**(max((i-1), 0)))
            break
If anybody has any ideas, I'm all ears, and if you hate my code, please do tell, because I really want to get better at this.
stolen from http://www.geeksforgeeks.org/find-next-greater-number-set-digits/
Following are a few observations about the next greater number.
1) If all digits are sorted in descending order, then the output is always “Not Possible”. For example, 4321.
2) If all digits are sorted in ascending order, then we need to swap the last two digits. For example, 1234.
3) For other cases, we need to process the number from the rightmost side (why? because we need to find the smallest of all greater numbers).
You can now try developing an algorithm yourself.
Following is the algorithm for finding the next greater number.
I) Traverse the given number from the rightmost digit, and keep traversing till you find a digit which is smaller than the previously traversed digit. For example, if the input number is “534976”, we stop at 4 because 4 is smaller than the next digit 9. If we do not find such a digit, then the output is “Not Possible”.
II) Now search the right side of the above-found digit ‘d’ for the smallest digit greater than ‘d’. For “534976”, the right side of 4 contains “976”. The smallest digit greater than 4 is 6.
III) Swap the above two digits; we get 536974 in the above example.
IV) Now sort all digits from the position next to ‘d’ to the end of the number. The number that we get after sorting is the output. For the above example, we sort the digits after ‘d’ in 536974 (i.e. 974). We get “536479”, which is the next greater number for the input 534976.
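A rough Python sketch of those four steps (the function name next_greater and the variable names are mine; it returns -1 in the “Not Possible” case):
def next_greater(n):
    digits = list(str(n))
    # Step I: find the rightmost digit that is smaller than the digit after it.
    i = len(digits) - 2
    while i >= 0 and digits[i] >= digits[i + 1]:
        i -= 1
    if i < 0:
        return -1  # digits are in descending order, so no greater number exists
    # Step II: find the smallest digit to the right of position i that is larger than digits[i].
    j = min((k for k in range(i + 1, len(digits)) if digits[k] > digits[i]),
            key=lambda k: digits[k])
    # Step III: swap the two digits.
    digits[i], digits[j] = digits[j], digits[i]
    # Step IV: sort everything after position i.
    digits[i + 1:] = sorted(digits[i + 1:])
    return int("".join(digits))

next_greater(534976)  # 536479, matching the worked example above
next_greater(1675)    # 1756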
"formed by the same digits" - there's a clue that you have to break the number into digits: n = list(str(n))
"next bigger". The fact that they want the very next item means that you want to make the least change. Focus on changing the 1s digit. If that doesn't work, try the 10's digit, then the 100's, etc. The smallest change you can make is to exchange two furthest digits to the right that will increase the value of the integer. I.e. exchange the two right-most digits in which the more right-most is bigger.
def next_bigger(n):
    n = list(str(n))
    for i in range(len(n)-1, -1, -1):
        for j in range(i-1, -1, -1):
            if n[i] > n[j]:
                n[i], n[j] = n[j], n[i]
                return int("".join(n))

print next_bigger(123)
Oops. This fails for next_bigger(1675). I'll leave the buggy code here for a while, for whatever it is worth.
How about this? See in-line comments for explanations. Note that the way this is set up, you don't end up with any significant memory use (we're not storing any lists).
#!/usr/bin/python3
from itertools import permutations

def next_bigger(n):
    # set next_bigger to an arbitrarily large value to start: see the for-loop
    next_bigger = float('inf')
    # this returns a generator for all the integers that are permutations of n
    # we want a generator because when the potential number of permutations is
    # large, we don't want to store all of them in memory.
    perms = map(lambda x: int(''.join(x)), permutations(str(n)))
    for p in perms:
        if (p > n) and (p <= next_bigger):
            # we can find the next-largest permutation by going through all the
            # permutations, selecting the ones that are larger than n, and then
            # selecting the smallest from them.
            next_bigger = p
    return next_bigger
Note that this is still a brute-force algorithm, even if implemented for speed. Here is an example result:
time python3 next_bigger.py 3838998888
3839888889
real 0m2.475s
user 0m2.476s
sys 0m0.000s
If your code needs to be faster yet, then you'll need a smarter, non-brute-force algorithm.
You don't need to look at all the permutations. Take a look at the two permutations of the last two digits. If you have an integer greater than your integer, that's it. If not, take a look at the permutations of the last three digits, etc.
from itertools import permutations
def next_bigger(number):
    check = 2
    found = False
    digits = list(str(number))
    if sorted(digits, reverse=True) == digits:
        raise ValueError("No larger number")
    while not found:
        options = permutations(digits[-1*check:], check)
        candidates = list()
        for option in options:
            new = digits.copy()[:-1*check]
            new.extend(option)
            candidate = int(''.join(new))
            if candidate > number:
                candidates.append(candidate)
        if candidates:
            result = sorted(candidates)[0]
            found = True
            return result
        check += 1
I currently have ↓ set as my randprime(p,q) function. Is there any way to condense this, via something like a genexp or listcomp? Here's my function:
n = randint(p, q)
while not isPrime(n):
    n = randint(p, q)
It's better to just generate the list of primes, and then choose from that list.
As is, with your code there is a slim chance that it will hit an infinite loop: if there are no primes in the interval, or if randint always picks a non-prime, the while loop will never end.
So this is probably shorter and less troublesome:
import random
primes = [i for i in range(p,q) if isPrime(i)]
n = random.choice(primes)
The other advantage of this is there is no chance of deadlock if there are no primes in the interval. As stated this can be slow depending on the range, so it would be quicker if you cached the primes ahead of time:
# initialising primes
minPrime = 0
maxPrime = 1000
cached_primes = [i for i in range(minPrime,maxPrime) if isPrime(i)]
#elsewhere in the code
import random
n = random.choice([i for i in cached_primes if p<i<q])
Again, further optimisations are possible, but are very much dependant on your actual code... and you know what they say about premature optimisations.
Here is a script written in Python to generate n random prime integers between two given integers:
import numpy as np
def getRandomPrimeInteger(bounds):
    for i in range(len(bounds) - 1):
        if bounds[i + 1] > bounds[i]:
            x = bounds[i] + np.random.randint(bounds[i+1] - bounds[i])
            if isPrime(x):
                return x
        else:
            if isPrime(bounds[i]):
                return bounds[i]
            if isPrime(bounds[i + 1]):
                return bounds[i + 1]
    newBounds = [0 for i in range(2*len(bounds) - 1)]
    newBounds[0] = bounds[0]
    for i in range(1, len(bounds)):
        newBounds[2*i-1] = int((bounds[i-1] + bounds[i])/2)
        newBounds[2*i] = bounds[i]
    return getRandomPrimeInteger(newBounds)

def isPrime(x):
    count = 0
    for i in range(int(x/2)):
        if x % (i+1) == 0:
            count = count + 1
    return count == 1

# ex: get 50 random prime integers between 100 and 10000:
bounds = [100, 10000]
for i in range(50):
    x = getRandomPrimeInteger(bounds)
    print(x)
So it would be great if you could use an iterator to give the integers from p to q in random order (without replacement). I haven't been able to find a way to do that. The following will give random integers in that range and will skip anything it has tested already.
import random
fail = False
tested = set([])
n = random.randint(p, q)
while not isPrime(n):
    tested.add(n)
    if len(tested) == q - p + 1:
        fail = True
        break
    while n in tested:
        n = random.randint(p, q)
if fail:
    print 'I failed'
else:
    print n, ' is prime'
The big advantage of this is that if say the range you're testing is just (14,15), your code would run forever. This code is guaranteed to produce an answer if such a prime exists, and tell you there isn't one if such a prime does not exist. You can obviously make this more compact, but I'm trying to show the logic.
next(i for i in itertools.imap(lambda x: random.randint(p,q)|1,itertools.count()) if isPrime(i))
This starts with itertools.count() - this gives an infinite sequence.
Each number is mapped to a new random number in the range by itertools.imap(). imap is like map, but returns an iterator rather than a list - we don't want to generate a list of infinitely many random numbers!
Then, the first matching number is found, and returned.
Works efficiently, even if p and q are very far apart - e.g. 1 and 10**30, which generating a full list won't do!
By the way, this is not more efficient than your code above, and is a lot more difficult to understand at a glance - please have some consideration for the next programmer to have to read your code, and just do it as you did above. That programmer might be you in six months, when you've forgotten what this code was supposed to do!
P.S - in practice, you might want to replace count() with xrange (NOT range!) e.g. xrange(int((q-p)**1.5)+20) to do no more than that number of attempts (balanced between limited tests for small ranges and large ranges, and has no more than 1/2% chance of failing if it could succeed), otherwise, as was suggested in another post, you might loop forever.
PPS - improvement: replaced random.randint(p,q) with random.randint(p,q)|1 - this makes the code twice as efficient, but eliminates the possibility that the result will be 2.
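Putting the P.S and PPS together, here is a hedged sketch of the bounded version, assuming p < q and that an isPrime function is defined elsewhere; the attempt budget is the arbitrary one suggested above.
import random

attempts = int((q - p) ** 1.5) + 20
# "| 1" forces odd candidates as in the PPS; next() returns None if the budget runs out.
n = next((c for c in (random.randint(p, q) | 1 for _ in xrange(attempts)) if isPrime(c)), None)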
The task is to search every power of two below 2^10000, returning the index of the first power in which a string is contained. For example if the given string to search for is "7" the program will output 15, as 2^15 is the first power to contain 7 in it.
I have approached this with a brute force attempt which times out on ~70% of test cases.
for i in range(1, 9999):
    if search in str(2**i):
        print i
        break
How would one approach this with a time limit of 5 seconds?
Try not to compute 2^i at each step.
pow = 1
for i in xrange(1, 9999):
    pow *= 2          # keep pow equal to 2**i without recomputing it from scratch
    if search in str(pow):
        print i
        break
You can compute it as you go along. This should save a lot of computation time.
Using xrange will prevent a list from being built, but that will probably not make much of a difference here.
in is probably implemented as a quadratic string search algorithm. It may (or may not, you'd have to test) be more efficient to use something like KMP for string searching.
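For what it's worth, here is a minimal KMP sketch you could benchmark against the built-in in operator; whether it actually wins is doubtful in practice, since the built-in substring search runs in C (the function name kmp_find is mine):
def kmp_find(pattern, text):
    """Return the index of the first occurrence of pattern in text, or -1."""
    if not pattern:
        return 0
    # failure table: fail[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # scan the text, reusing matched prefix lengths on a mismatch
    k = 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = fail[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1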
A faster approach could be computing the numbers directly in decimal
def double(x):
    carry = 0
    for i, v in enumerate(x):
        d = v*2 + carry
        if d > 99999999:
            x[i] = d - 100000000
            carry = 1
        else:
            x[i] = d
            carry = 0
    if carry:
        x.append(carry)
Then the search function can become
def p2find(s):
    x = [1]
    for y in xrange(10000):
        if s in str(x[-1]) + "".join(("00000000" + str(v))[-8:] for v in x[::-1][1:]):
            return y
        double(x)
    return None
Note also that all the powers of two up to 2^10000 contain only about 15 million digits in total, and searching that static data is much faster. If the program must not be restarted each time, then:
def p2find(s, digits=[]):
    if len(digits) == 0:
        # This precomputation happens only ONCE
        p = 1
        for k in xrange(10000):
            digits.append(str(p))
            p *= 2
    for i, v in enumerate(digits):
        if s in v:
            return i
    return None
With this approach the first call will take some time, but subsequent ones will be very fast.
Compute every power of two and build a suffix tree using each string. This is linear time in the size of all the strings. Now, the lookups are basically linear time in the length of each lookup string.
I don't think you can beat this for computational complexity.
There are only 10000 numbers. You don't need any complex algorithms. Simply calculate them in advance and do the search. This should take merely 1 or 2 seconds.
powers_of_2 = [str(1<<i) for i in range(10000)]
def search(s):
    for i in range(len(powers_of_2)):
        if s in powers_of_2[i]:
            return i
Try this
twos = []
twoslen = []
two = 1
for i in xrange(10000):
    twos.append(two)
    twoslen.append(len(str(two)))
    two *= 2

tens = []
ten = 1
for i in xrange(len(str(two))):
    tens.append(ten)
    ten *= 10

s = raw_input()
l = len(s)
n = int(s)

for i in xrange(len(twos)):
    for j in xrange(twoslen[i]):
        k = twos[i] / tens[j]
        if k < n:
            continue
        if (k - n) % tens[l] == 0:
            print i
            exit()
The idea is to precompute every power of 2 and of 10, and also to precompute the number of digits of every power of 2. In this way the problem reduces to finding the minimum i for which there exists a j such that, after removing the last j digits from 2 ** i, you obtain a number which ends with n, or, expressed as a formula, (2 ** i / 10 ** j - n) % 10 ** len(str(n)) == 0.
A big problem here is that converting a binary integer to decimal notation takes time quadratic in the number of bits (at least in the straightforward way Python does it). It's actually faster to fake your own decimal arithmetic, as #6502 did in his answer.
But it's very much faster to let Python's decimal module do it - at least under Python 3.3.2 (I don't know how much C acceleration is built in to Python decimal versions before that). Here's code:
class S:
    def __init__(self):
        import decimal
        decimal.getcontext().prec = 4000  # way more than enough for 2**10000
        p2 = decimal.Decimal(1)
        full = []
        for i in range(10000):
            s = "%s<%s>" % (p2, i)
            ##assert s == "%s<%s>" % (str(2**i), i)
            full.append(s)
            p2 *= 2
        self.full = "".join(full)

    def find(self, s):
        import re
        pat = s + r"[^<>]*<(\d+)>"
        m = re.search(pat, self.full)
        if m:
            return int(m.group(1))
        else:
            print(s, "not found!")
and sample usage:
>>> s = S()
>>> s.find("1")
0
>>> s.find("2")
1
>>> s.find("3")
5
>>> s.find("65")
16
>>> s.find("7")
15
>>> s.find("00000")
1491
>>> s.find("666")
157
>>> s.find("666666")
2269
>>> s.find("66666666")
66666666 not found!
s.full is a string with a bit over 15 million characters. It looks like this:
>>> print(s.full[:20], "...", s.full[-20:])
1<0>2<1>4<2>8<3>16<4 ... 52396298354688<9999>
So the string contains each power of 2, with the exponent following each power enclosed in angle brackets. The find() method constructs a regular expression to search for the desired substring, then looks ahead to find the power.
Playing around with this, I'm convinced that just about any way of searching is "fast enough". It's getting the decimal representations of the large powers that sucks up the vast bulk of the time. And the decimal module solves that one.
I had an overflow error with this program here; I realized the mistake in that program: I cannot use range or xrange when it comes to really long integers. I tried running the program in Python 3 and it works, but it only responds after a long time. Hence, in order to optimize my code, I started thinking of strategies for optimizing it.
My problem statement is: "A number is called lucky if the sum of its digits, as well as the sum of the squares of its digits, is a prime number. How many numbers between A and B are lucky?"
I started with this:
squarelist = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

def isEven(self, n):
    return

def isPrime(n):
    return

def main():
    t = long(raw_input().rstrip())
    count = []
    for i in xrange(t):
        counts = 0
        a, b = raw_input().rstrip().split()
        if a == '1':
            a = '2'
        tempa, tempb = map(int, a), map(int, b)
        for i in range(len(b), a, -1):
            tempsum[i] += squarelist[tempb[i]]
What I am trying to achieve is this: since I know the series is ordered, only the last number changes, so I can save the sum of squares of the earlier numbers in the list and just keep changing the last number, instead of recalculating the whole sum every time before checking if the sum of squares is prime. I am unable to fix the sum to some value and then keep changing the last number. How do I go forward from here?
My sample inputs are provided below.
87517 52088
72232 13553
19219 17901
39863 30628
94978 75750
79208 13282
77561 61794
I didn't get what you want to achieve with your code at all. This is my solution to the question as I understand it: for all natural numbers n in a range a < n < b (for some natural numbers a, b with a < b), how many numbers n have the property that the sum of their digits and the sum of the squares of their digits, in decimal writing, are both prime?
def sum_digits(n):
    s = 0
    while n:
        s += n % 10
        n /= 10
    return s

def sum_digits_squared(n):
    s = 0
    while n:
        s += (n % 10) ** 2
        n /= 10
    return s

def is_prime(n):
    return all(n % i for i in xrange(2, n))

def is_lucky(n):
    return is_prime(sum_digits(n)) and is_prime(sum_digits_squared(n))

def all_lucky_numbers(a, b):
    return [n for n in xrange(a, b) if is_lucky(n)]

if __name__ == "__main__":
    sample_inputs = ((87517, 52088),
                     (72232, 13553),
                     (19219, 17901),
                     (39863, 30628),
                     (94978, 75750),
                     (79208, 13282),
                     (77561, 61794))
    for b, a in sample_inputs:
        lucky_number_count = len(all_lucky_numbers(a, b))
        print("There are {} lucky numbers between {} and {}".format(lucky_number_count, a, b))
A few notes:
The is_prime is the most naive implementation possible. It's still totally fast enough for the sample input. There are many better implementations possible (and just one google away). The most obvious improvement would be skipping every even number except for 2. That alone would cut calculation time in half.
In Python 3 (and I really recommend using it), remember to use //= to force the result of the division to be an integer, and use range instead of xrange. Also, an easy way to speed up is_prime is Python 3's functools.lru_cache.
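A hedged sketch combining those notes (Python 3): skip even divisors other than 2, guard the n < 2 edge case (the naive version above treats 0 and 1 as prime), and memoise with functools.lru_cache so repeated digit sums are not re-tested.
from functools import lru_cache

@lru_cache(maxsize=None)
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # stopping at sqrt(n) would be even faster, but odd divisors alone halve the work
    return all(n % i for i in range(3, n, 2))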
If you want to save some lines, calculate the sum of digits by casting them to str and back to int like that:
def sum_digits(n):
    return sum(int(d) for d in str(n))
It's not as mathy, though.