Python's abstractions are often seen as magic. Coming from a C background, I know very well that there is no such thing as magic, only cold, hard code made up of simple components that produces the abstraction.
So, when a textbook and my teacher say that we can "store conditions" or use "named conditions" for readability, and claim that assigning a boolean expression to a variable suddenly makes it a dynamic condition akin to a macro, I lose it.
EDIT 1: They don't explicitly say it's like a macro (direct quotes are placed within quotes), since we aren't expected to know any other language beforehand.
The way they say that "the variable stores the condition unevaluated" is, in my opinion, like saying it is a macro. Their articulation implies it to be practically the equivalent of a macro, just without using the word 'macro'.
Here's the claim in code form:
x,y = 1,2
less = x < y
more = x > y
'''
less/ more claimed to store not boolean True/False but some magical way of storing the
expression itself (unevaluated, say like a macro) and apparently
'no value is being stored to less and more'.
'''
It is being represented as though one were doing:
// C-style
#define less (x < y)
#define more (x > y)
Of course, this is not true, because all that less and more store in the so-called 'named conditions' is the return value of the comparison between x and y.
This is obvious since <, >, ==, <= and >= all have boolean return values as per the documentation and the language spec, and less and more only store the True or False boolean return value, which we can verify by calling print() and/or type() on them.
Also, changing the values of x and y, say by doing x,y = y,x, does not change the values of less or more, because they store not a dynamic expression but the static return value of the > or < operator applied to the initial x and y values.
The question isn't whether this claim is a misunderstanding of the purported abstraction (it's not actually an abstraction; similar storage can be achieved in asm or C too), but rather how to clearly and efficiently articulate to my teacher that it is not working like a C macro, but rather statically storing the boolean return value of > or <.
Obviously less = x < y just looks at the current values of x and y and stores either True or False into the variable less.
If I understand where you and your teacher disagree, you two have a different idea of what the following code will print out:
x, y = 1, 2
less = x < y
print(less)
x, y = 2, 1
print(less)
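For what it's worth, that snippet prints True twice: less is evaluated once, at the assignment, and swapping x and y afterwards does not change it.
>>> x, y = 1, 2
>>> less = x < y
>>> less
True
>>> x, y = 2, 1
>>> less   # unchanged: the variable holds the boolean result, not the expression
True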
"Macro's" could be implemented as text strings that can be evaluated, like (bad example - not the recommended solution):
less = "({0}) < ({1})"
and use them like:
x = 1
y = 3
outcome = eval(less.format("x", "y"))
But this is really a silly thing to do, and eval() is susceptible to security issues.
Perhaps your teacher meant to use lambda expressions, which are nameless, ad-hoc functions:
less = lambda a, b: a < b
x = 1
y = 3
outcome = less(x, y)
Note:
There is already a function for lambda a, b: a < b available in the standard library operator module: operator.lt.
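A minimal illustration of operator.lt as a drop-in for that lambda (standard library only):
import operator

less = operator.lt       # equivalent to lambda a, b: a < b
x, y = 1, 3
outcome = less(x, y)     # True, evaluated at call time with the current x and y
print(outcome)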
I have been working on this project for a couple of months now. The ultimate goal of the project is to evaluate an entire digital logic circuit, similar to functional testing; that should give a sense of the scope of the problem. The topic I created here deals with a performance issue I'm having when analyzing a boolean expression. Any gate inside a digital circuit has an output expression in terms of the global inputs, e.g. ((A&B)|(C&D)^E). What I want to do with this expression is calculate all possible outcomes and determine how much influence each input has on the outcome.
The fastest way I have found was to build a truth table as a matrix and look at certain rows (I won't go into the specifics of that algorithm, as it's off topic). The problem is that once the number of unique inputs goes above 26-27 or so, memory usage is well beyond 16 GB (the maximum my computer has). You might say "buy more RAM", but every increase in inputs by 1 doubles the memory usage, and some of the expressions I analyze have well over 200 unique inputs...
The method I use right now passes the expression string to compile(). Then I create an array of all the inputs found by the compile step. Then I generate a list, row by row, of True/False values randomly chosen from the sample of possible values (so it is equivalent to rows of a truth table when the sample size equals the full range, and it allows me to limit the sample size when things take too long to calculate). These values are then zipped with the input names and used to evaluate the expression, which gives the initial result. After that I go column by column through the random boolean list, flip the boolean, zip it with the inputs again, and evaluate again to determine whether the result changed.
So my question is this: is there a faster way? I have included the code that performs the work. I have tried regular expressions to find and replace, but it is always slower (from what I've seen). Take into account that the inner for loop runs N times, where N is the number of unique inputs, and I limit the outer for loop to 2^15 iterations if N > 15. So eval ends up being executed Min(2^N, 2^15) * (1 + N) times...
As an update to clarify what I am asking exactly (Sorry for any confusion). The algorithm/logic for calculating what I need is not the issue. I am asking for an alternative to the python built-in 'eval' that will perform the same thing faster. (take a string in the format of a boolean expression, replace the variables in the string with the values in the dictionary and then evaluate the string).
# value is the expression as a string
comp = compile(value.strip(), '-', 'eval')
inputs = comp.co_names
control = [0] * len(inputs)

# sequences of random boolean values to be used
random_list = gen_rand_bits(len(inputs))

for row in random_list:
    valuedict = dict(zip(inputs, row))
    answer = eval(comp, valuedict)
    for column in range(len(row)):
        # flip one input, re-evaluate, then flip it back
        row[column] = not row[column]
        newvaluedict = dict(zip(inputs, row))
        newanswer = eval(comp, newvaluedict)
        row[column] = not row[column]
        if answer != newanswer:
            control[column] = control[column] + 1
My question:
Just to make sure that I understand this correctly: Your actual problem is to determine the relative influence of each variable within a boolean expression on the outcome of said expression?
OP answered:
That is what I am calculating but my problem is not with how I calculate it logically but with my use of the python eval built-in to perform evaluating.
So, this seems to be a classic XY problem. You have an actual problem, which is to determine the relative influence of each variable within a boolean expression. You have attempted to solve this in a rather ineffective way, and now that you actually “feel” the inefficiency (in both memory usage and run time), you look for ways to improve your solution instead of looking for better ways to solve your original problem.
In any way, let’s first look at how you are trying to solve this. I’m not exactly sure what gen_rand_bits is supposed to do, so I can’t really take that into account. But still, you are essentially trying out every possible combination of variable assignments and checking whether flipping the value of a single variable changes the outcome of the formula. “Luckily”, these are just boolean variables, so you are “only” looking at 2^N possible combinations. This means you have exponential run time. Now, O(2^N) algorithms are in theory very, very bad, while in practice it’s often somewhat okay to use them (because most have an acceptable average case and execute fast enough). However, being an exhaustive algorithm, you actually have to look at every single combination and can’t take shortcuts. Plus, the compilation and evaluation using Python’s eval is apparently not fast enough to make the inefficient algorithm acceptable.
So, we should look for a different solution. When looking at your solution, one might say that more efficient is not really possible, but when looking at the original problem, we can argue otherwise.
You essentially want to do something similar to what compilers do as static analysis. You want to look at the source code and analyze it just from there, without having to actually evaluate it. As the language you are analyzing is highly restricted (being only a boolean expression with very few operators), this isn’t really that hard.
Code analysis usually works on the abstract syntax tree (or an augmented version of that). Python offers code analysis and abstract syntax tree generation with its ast module. We can use this to parse the expression and get the AST. Then based on the tree, we can analyze how relevant each part of an expression is for the whole.
Now, evaluating the relevance of each variable can get quite complicated, but you can do it all by analyzing the syntax tree. I will show you a simple evaluation that supports all boolean operators but will not further check the semantic influence of expressions:
import ast

class ExpressionEvaluator:
    def __init__(self, rawExpression):
        self.raw = rawExpression
        self.ast = ast.parse(rawExpression)

    def run(self):
        return self.evaluate(self.ast.body[0])

    def evaluate(self, expr):
        if isinstance(expr, ast.Expr):
            return self.evaluate(expr.value)
        elif isinstance(expr, ast.Name):
            return self.evaluateName(expr)
        elif isinstance(expr, ast.UnaryOp):
            if isinstance(expr.op, ast.Invert):
                return self.evaluateInvert(expr)
            else:
                raise Exception('Unknown unary operation {}'.format(expr.op))
        elif isinstance(expr, ast.BinOp):
            if isinstance(expr.op, ast.BitOr):
                return self.evaluateBitOr(expr.left, expr.right)
            elif isinstance(expr.op, ast.BitAnd):
                return self.evaluateBitAnd(expr.left, expr.right)
            elif isinstance(expr.op, ast.BitXor):
                return self.evaluateBitXor(expr.left, expr.right)
            else:
                raise Exception('Unknown binary operation {}'.format(expr.op))
        else:
            raise Exception('Unknown expression {}'.format(expr))

    def evaluateName(self, expr):
        return { expr.id: 1 }

    def evaluateInvert(self, expr):
        return self.evaluate(expr.operand)

    def evaluateBitOr(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def evaluateBitAnd(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def evaluateBitXor(self, left, right):
        return self.join(self.evaluate(left), .5, self.evaluate(right), .5)

    def join(self, a, ratioA, b, ratioB):
        d = { k: v * ratioA for k, v in a.items() }
        for k, v in b.items():
            if k in d:
                d[k] += v * ratioB
            else:
                d[k] = v * ratioB
        return d

expr = '((A&B)|(C&D)^~E)'
ee = ExpressionEvaluator(expr)
print(ee.run())
# > {'A': 0.25, 'C': 0.125, 'B': 0.25, 'E': 0.25, 'D': 0.125}
This implementation essentially generates a plain AST for the given expression and then recursively walks the tree, evaluating the different operators. The big evaluate method just delegates the work to the type-specific methods below; it’s similar to what ast.NodeVisitor does, except that we return the analysis results from each node here. One could augment the nodes instead of returning the results, though.
In this case, the evaluation is just based on occurrence in the expression. I don’t explicitly check for semantic effects. So for an expression A | (A & B), I get {'A': 0.75, 'B': 0.25}, although one could argue that semantically B has no relevance at all to the result (making it {'A': 1} instead). This is however something I’ll leave for you. As of now, every binary operation is handled identically (each operand getting a relevance of 50%), but that can of course be adjusted to introduce some semantic rules.
In any way, it will not be necessary to actually test variable assignments.
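For completeness, the A | (A & B) example mentioned above can be checked directly with the class defined here:
print(ExpressionEvaluator('A | (A & B)').run())
# > {'A': 0.75, 'B': 0.25}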
Instead of reinventing the wheel and running into the performance and security risks you are already facing, it is better to look for well-accepted, industry-ready libraries.
The Logic module of sympy would do exactly what you want to achieve without resorting to evil, ohh I meant eval. More importantly, as the boolean expression is not a string, you don't have to care about parsing the expression, which generally turns out to be the bottleneck.
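As a rough sketch of the idea (the exact calls below are just one way to build and evaluate such an expression with sympy, not taken from the answer above):
from sympy import symbols
from sympy.logic.boolalg import And, Or, Xor

# build the example expression ((A&B)|((C&D)^E)) symbolically, without string parsing
A, B, C, D, E = symbols('A B C D E')
expr = Or(And(A, B), Xor(And(C, D), E))

# substitute concrete truth values; the expression simplifies without eval()
print(expr.subs({A: True, B: False, C: True, D: True, E: False}))   # True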
You don't have to prepare a static table for computing this. Python is a dynamic language, so it is able to interpret and run code by itself at runtime.
In your case, I would suggest a solution like this:
import random, re, time

# Step 1: Input your expression as a string
logic_exp = "A|B&(C|D)&E|(F|G|H&(I&J|K|(L&M|N&O|P|Q&R|S)&T)|U&V|W&X&Y)"

# Step 2: Retrieve all the variable names.
# You can design a rule for naming, and use a regex to retrieve them.
# Here, for example, I consider every single capital letter a variable.
name_regex = re.compile(r"[A-Z]")

# Step 3: Replace each variable with its value.
# You could get the values from a file or keyboard input.
# Here, for example, I just use a random 0 or 1.
for name in name_regex.findall(logic_exp):
    logic_exp = logic_exp.replace(name, str(random.randrange(2)))

# Step 4: Replace the operators. Python uses 'and', 'or' instead of '&', '|'
logic_exp = logic_exp.replace("&", " and ")
logic_exp = logic_exp.replace("|", " or ")

# Step 5: Interpret the expression with eval(exp) and output its value.
print "expression =", logic_exp
print "expression output =", eval(logic_exp)
This is very fast and uses very little memory. As a test, I ran the example above with 25 input variables:
expression = 1 or 1 and (1 or 1) and 0 or (0 or 0 or 1 and (1 and 0 or 0 or (0 and 0 or 0 and 0 or 1 or 0 and 0 or 0) and 1) or 0 and 1 or 0 and 1 and 0)
expression output = 1
computing time: 0.000158071517944 seconds
According to your comment, I see that you are computing all the possible combinations instead of the output for one given set of input values. If so, it becomes a typical NP-complete Boolean satisfiability problem, and I don't think there is any algorithm that can do it with a complexity lower than O(2^N). I suggest searching with the keywords "fast algorithm to solve SAT problem"; you will find a lot of interesting things.
I have been using the following code in my programs to set the range of an axis so that the graph looks more aesthetically pleasing.
plot.set_ylim([0,a+(a*15/100)])
It is specifically this:
a+(a*15/100)
that I'm interested in.
Is there an existing function which simplifies this?
The reason is that when my graph is created in a for loop and the value of a is the maximum value of a list (and so on), the whole thing starts to look messy, e.g. going from:
a+(a*15/100)
to:
max(listA[x])+(max(listA[x]))*15/100
Anyone aware of a simplification?
You could use the *= operator
a = 100
a *= 1.15
print a # Returns 115
Beware that the *= operator may do different things for different types (i.e. strings and lists).
For matplotlib code I normally use a variable called ULIM or something similar (upper limit), so I can quickly change it in one place rather than in the call to set_ylim. Therefore, you can use:
ULIM = a+(a*15/100)
plot.set_ylim([0,ULIM])
def limit(a, pct=15):
    pct = 1 + (pct / 100.0)
    return a * pct

maxval = max(listA[x])
plot.set_ylim([0, limit(maxval)])
How can I increment a floating point value in python by the smallest possible amount?
Background: I'm using floating point values as dictionary keys.
Occasionally, very occasionally (and perhaps never, but not certainly never), there will be collisions. I would like to resolve these by incrementing the floating point value by as small an amount as possible. How can I do this?
In C, I would twiddle the bits of the mantissa to achieve this, but I assume that isn't possible in Python.
Since Python 3.9 there is math.nextafter in the stdlib. Read on for alternatives in older Python versions.
The nextafter(x,y) functions return the next discretely different representable floating-point value following x in the direction of y. The nextafter() functions are guaranteed to work on the platform or to return a sensible value to indicate that the next value is not possible.
The nextafter() functions are part of the POSIX and ISO C99 standards and are called _nextafter() in Visual C. C99-compliant standard math libraries, Visual C, C++, Boost and Java all implement the IEEE-recommended nextafter() functions or methods. (I do not honestly know if .NET has nextafter(). Microsoft does not care much about C99 or POSIX.)
None of the bit-twiddling functions here fully or correctly deal with the edge cases, such as values going through 0.0, negative 0.0, subnormals, infinities, negative values, over- or underflows, etc. Here is a reference implementation of nextafter() in C to give an idea of how to do the correct bit twiddling if that is your direction.
There are two solid workarounds to get nextafter() or other excluded POSIX math functions in Python < 3.9:
Use Numpy:
>>> import numpy
>>> numpy.nextafter(0,1)
4.9406564584124654e-324
>>> numpy.nextafter(.1, 1)
0.10000000000000002
>>> numpy.nextafter(1e6, -1)
999999.99999999988
>>> numpy.nextafter(-.1, 1)
-0.099999999999999992
Link directly to the system math DLL:
import ctypes
import sys
from sys import platform as _platform

if _platform == "linux" or _platform == "linux2":
    _libm = ctypes.cdll.LoadLibrary('libm.so.6')
    _funcname = 'nextafter'
elif _platform == "darwin":
    _libm = ctypes.cdll.LoadLibrary('libSystem.dylib')
    _funcname = 'nextafter'
elif _platform == "win32":
    _libm = ctypes.cdll.LoadLibrary('msvcrt.dll')
    _funcname = '_nextafter'
else:
    # these are the ones I have access to...
    # fill in library and function name for your system math dll
    print("Platform", repr(_platform), "is not supported")
    sys.exit(0)

_nextafter = getattr(_libm, _funcname)
_nextafter.restype = ctypes.c_double
_nextafter.argtypes = [ctypes.c_double, ctypes.c_double]

def nextafter(x, y):
    "Returns the next floating-point number after x in the direction of y."
    return _nextafter(x, y)

assert nextafter(0, 1) - nextafter(0, 1) == 0
assert 0.0 + nextafter(0, 1) > 0.0
And if you really really want a pure Python solution:
# handles edge cases correctly on MY computer
# not extensively QA'd...
import math

# 'double' means IEEE 754 double precision -- C 'double'
epsilon = math.ldexp(1.0, -53)          # smallest double such that 0.5 + epsilon != 0.5
maxDouble = float(2**1024 - 2**971)     # from the IEEE 754 standard
minDouble = math.ldexp(1.0, -1022)      # minimum positive normalized double
smallEpsilon = math.ldexp(1.0, -1074)   # smallest increment for doubles < minDouble
infinity = math.ldexp(1.0, 1023) * 2

def nextafter(x, y):
    """returns the next IEEE double after x in the direction of y if possible"""
    if y == x:
        return y                        # if x == y, no increment

    # handle NaN
    if x != x or y != y:
        return x + y

    if x >= infinity:
        return infinity
    if x <= -infinity:
        return -infinity

    # subnormals and values near zero: step by the smallest representable increment
    if -minDouble < x < minDouble:
        if y > x:
            return x + smallEpsilon
        else:
            return x - smallEpsilon

    # normal values: step the mantissa by one epsilon at its scale
    m, e = math.frexp(x)
    if y > x:
        m += epsilon
    else:
        m -= epsilon
    return math.ldexp(m, e)
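A couple of spot checks for the pure-Python version above (expected values assume IEEE 754 doubles):
print(nextafter(0.0, 1.0))   # 5e-324, the smallest positive subnormal
print(nextafter(1.0, 2.0))   # 1.0000000000000002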
Or, use Mark Dickinson's excellent solution
Obviously the Numpy solution is the easiest.
Python 3.9 and above
Starting with Python 3.9, released 2020-10-05, you can use the math.nextafter function:
math.nextafter(x, y)
Return the next floating-point value after x towards y.
If x is equal to y, return y.
Examples:
math.nextafter(x, math.inf) goes up: towards positive infinity.
math.nextafter(x, -math.inf) goes down: towards minus infinity.
math.nextafter(x, 0.0) goes towards zero.
math.nextafter(x, math.copysign(math.inf, x)) goes away from zero.
See also math.ulp().
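A few quick examples (Python 3.9+; the printed values assume IEEE 754 doubles):
import math

x = 1.0
print(math.nextafter(x, math.inf))    # 1.0000000000000002 (next float up)
print(math.nextafter(x, -math.inf))   # 0.9999999999999999 (next float down)
print(math.ulp(x))                    # 2.220446049250313e-16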
First, this "respond to a collision" is a pretty bad idea.
If they collide, the values in the dictionary should have been lists of items with a common key, not individual items.
Your "hash probing" algorithm will have to loop through more than one "tiny increments" to resolve collisions.
And sequential hash probes are known to be inefficient.
Read this: http://en.wikipedia.org/wiki/Quadratic_probing
Second, use math.frexp and sys.float_info.epsilon to fiddle with mantissa and exponent separately.
>>> import math, sys
>>> m, e = math.frexp(4.0)
>>> (m+sys.float_info.epsilon)*2**e
4.0000000000000018
Forgetting about why we would want to increment a floating point value for a moment, I would have to say I think Autopulated's own answer is probably correct.
But for the problem domain, I share the misgivings of most of the responders to the idea of using floats as dictionary keys. If the objection to using Decimal (as proposed in the main comments) is that it is a "heavyweight" solution, I suggest a do-it-yourself compromise: Figure out what the practical resolution is on the timestamps, pick a number of digits to adequately cover it, then multiply all the timestamps by the necessary amount so that you can use integers as the keys. If you can afford an extra digit or two beyond the timer precision, then you can be even more confident that there will be no or fewer collisions, and that if there are collisions, you can just add 1 (instead of some rigamarole to find the next floating point value).
I recommend against assuming that floats (or timestamps) will be unique if at all possible. Use a counting iterator, database sequence or other service to issue unique identifiers.
Instead of incrementing the value, just use a tuple for the colliding key. If you need to keep them in order, every key should be a tuple, not just the duplicates.
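A minimal sketch of that idea (hypothetical helper; it disambiguates duplicate timestamps with a counter so that every key is a tuple):
d = {}

def put(d, ts, value):
    n = 0
    while (ts, n) in d:       # find a free slot for this timestamp
        n += 1
    d[(ts, n)] = value

put(d, 1234.5, 'first')
put(d, 1234.5, 'second')      # same timestamp, different key
print(sorted(d))              # [(1234.5, 0), (1234.5, 1)]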
A better answer (now I'm just doing this for fun...), motivated by twiddling the bits. Handling the carry and overflow between parts of the number for negative values is somewhat tricky.
import struct

def floatToieee754Bits(f):
    return struct.unpack('<Q', struct.pack('<d', f))[0]

def ieee754BitsToFloat(i):
    return struct.unpack('<d', struct.pack('<Q', i))[0]

def incrementFloat(f):
    i = floatToieee754Bits(f)
    if f >= 0:
        return ieee754BitsToFloat(i + 1)
    else:
        raise Exception('f not >= 0: unsolved problem!')
Instead of resolving the collisions by changing the key, how about collecting the collisions? IE:
bag = {}
bag[1234.] = 'something'
becomes
bag = collections.defaultdict(list)
bag[1234.].append('something')
would that work?
For colliding key k, add: k / 2^50
Interesting problem. The amount you need to add obviously depends on the magnitude of the colliding value, so that a normalized add will affect only the least significant bits.
It's not necessary to determine the smallest value that can be added. All you need to do is approximate it. The FPU format provides 52 mantissa bits plus a hidden bit for 53 bits of precision. No physical constant is known to anywhere near this level of precision. No sensor is able measure anything near it. So you don't have a hard problem.
In most cases, for key k, you would be able to add k / 2^53, because of that 52-bit fraction plus the hidden bit.
But it's not necessary to risk triggering library bugs or exploring rounding issues by shooting for the very last bit or anything near it.
So I would say, for colliding key k, just add k / 2^50 and call it a day.[1]
[1] Possibly more than once, until it doesn't collide any more, at least to foil any diabolical unit test authors.
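A tiny sketch of that suggestion (hypothetical helper; assumes the keys are positive floats):
def insert_with_nudge(d, k, value):
    # nudge a colliding key upward by k / 2**50, repeating if necessary
    while k in d:
        k += k / 2**50
    d[k] = value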
>>> import sys
>>> sys.float_info.epsilon
2.220446049250313e-16
Instead of modifying your float timestamp, use a tuple for every key, as Mark Ransom suggests, where the tuple (x, y) is composed of x = your unmodified timestamp and y = something extremely unlikely to be the same value twice.
So:
1. x is just the unmodified timestamp and can be the same value many times;
2. for y you can use:
2.1 a random integer from a large range,
2.2 a serial integer (0, 1, 2, etc.),
2.3 a UUID.
While 2.1 (a random int from a large range) works great for Ethernet, I would use 2.2 (the serializer) or 2.3 (a UUID). Easy, fast, bulletproof. For 2.2 and 2.3 you don't even need collision detection (you might still want it for 2.1, as Ethernet does).
The advantage of 2.2 is that you can also tell apart, and sort, data elements that have the same float timestamp.
Then just extract x from the tuple for any sorting type operations and the tuple itself is a collision free key for the hash / dictionary.
Edit
I guess example code will help:
#!/usr/bin/env python
import time
import sys
import random

# generator for ints from 0 to maxinteger on system:
serializer = (sn for sn in xrange(0, sys.maxint))

# a list with guaranteed collisions:
times = []
for c in range(0, 35):
    t = time.clock()
    for i in range(0, random.choice(range(0, 4))):
        times.append(t)

print len(set(times)), "unique items in a list of", len(times)

# dictionary of tuples; no possibility of collisions:
di = {}
for t in times:                 # renamed from 'time' to avoid shadowing the time module
    sn = serializer.next()
    di[(t, sn)] = 'Element {}'.format(sn)

# for tuples of multiple numbers, Python sorts
# as you expect: first by t[0] then t[1], until t[n]
for key in sorted(di.keys()):
    print "{:>15}:{}".format(key, di[key])
Output:
26 unique items in a list of 55
(0.042289, 0):Element 0
(0.042289, 1):Element 1
(0.042289, 2):Element 2
(0.042305, 3):Element 3
(0.042305, 4):Element 4
(0.042317, 5):Element 5
# and so on until Element n...
Here is part of it. This is dirty and slow, but maybe that is how you like it. It is missing several corner cases, but maybe it gets someone else close.
The idea is to get the hex string of a floating-point number. That gives you a string with the mantissa and exponent bits to twiddle. The twiddling is a pain since you have to do it all manually and keep converting to/from strings. Anyway, you add (subtract) 1 to (from) the last digit for positive (negative) numbers. Make sure you carry through to the exponent if you overflow. Negative numbers are a little more tricky so that you don't waste any bits.
def increment(f):
    h = f.hex()
    # decide if we need to increment up or down
    if f > 0:
        sign = '+'
        inc = 1
    else:
        sign = '-'
        inc = -1
    # pull the string apart
    h = h.split('0x')[-1]
    h, e = h.split('p')
    h = ''.join(h.split('.'))
    h2 = shift(h, inc)
    # increase the exponent if we added a digit
    h2 = '%s0x%s.%sp%s' % (sign, h2[0], h2[1:], e)
    return float.fromhex(h2)

def shift(s, num):
    if not s:
        return ''
    right = s[-1]
    right = int(right, 16) + num
    if right > 15:
        num = right // 16
        right = right % 16
    elif right < 0:
        right = 0
        num = -1
    else:
        num = 0
    # drop the leading 0x
    right = hex(right)[2:]
    return shift(s[:-1], num) + right

a = 1.4e4
print increment(a) - a
a = -1.4e4
print increment(a) - a
a = 1.4
print increment(a) - a
I think you mean "by as small an amount possible to avoid a hash collision", since for example the next-highest-float may already be a key! =)
while toInsert.key in myDict:  # assumed to be positive
    toInsert.key *= 1.000000000001
myDict[toInsert.key] = toInsert
That said you probably don't want to be using timestamps as keys.
After looking at Autopopulated's answer I came up with a slightly different answer:
import math, sys

def incrementFloatValue(value):
    if value == 0:
        return sys.float_info.min
    mant, exponent = math.frexp(value)
    epsilonAtValue = math.ldexp(1, exponent - sys.float_info.mant_dig)
    return math.fsum([value, epsilonAtValue])
Disclaimer: I'm really not as great at maths as I think I am ;) Please verify this is correct before using it. Also, I'm not sure about performance.
some notes:
epsilonAtValue is the smallest increment at that value, computed from the exponent of value and the number of mantissa bits (sys.float_info.mant_dig).
I'm not sure if the math.fsum() is needed but hey it doesn't seem to hurt.
It turns out that this is actually quite complicated (maybe that's why seven people have answered without actually providing an answer yet...).
I think this is the right solution, it certainly seems to handle 0 and positive values correctly:
import math
import sys

def incrementFloat(f):
    if f == 0.0:
        return sys.float_info.min
    m, e = math.frexp(f)
    return math.ldexp(m + sys.float_info.epsilon / 2, e)
I have a ridiculous code segment in one of my programs right now:
str(len(str(len(var_text)**255)))
Is there an easy way to shorten that? 'Cause, frankly, that's ridiculous.
An option to convert a number of more than 500 digits to scientific notation would also be helpful
(that's what I'm trying to do).
Full code:
print("Useless code rating:" , str(len(var_text)**255)[1] + "e" + str(len(str(len(var_text)**255))))
TL;DR: y = 2.408 * len(var_text)
Let's assume that your passkey is a string of characters with 256 possible values per character (0-255). Then, just as a 16-bit number holds 65536 values (2**16), the number of permutations of a string of a given length would be
n_perms = 256**len(passkey)
If you want the number of (decimal) digits in n_perms, consider the logarithm:
>>> from math import log10
>>> log10(1000)
3.0
>>> log10(9999)
3.9999565683801923
>>>
So we have length = floor(log10(n_perms)) + 1. In Python, int() truncates toward zero anyway, so I'd say you want
n_perms = 256**len(var_text)
length = int(log10(n_perms)) + 1
I'd argue that 'shortening' ugly code isn't always the best way - you want it to be clear what you're doing.
Edit: On further consideration I realised that choosing base-10 to find the length of your permutations is really arbitrary anyway - so why not choose base-256!
length = log256(256**len(var_text))
length = len(var_text) # the log and exp cancel!
You are effectively just finding the length of your passkey in a different base...
Edit 2: Stand back, I'm going to attempt Mathematics!
if x = len(var_text), we want y such that
y = log10(256**x)
10**y = 256**x
10**y = (10**log10(256))**x
10**y = 10**(log10(256) * x)
y = log10(256) * x
So, how's this for short:
length = log10(256) * len(var_text) # or about (2.408 * x)
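As a quick sanity check of that identity (an arbitrary example string, so the specific numbers are only illustrative):
from math import log10

var_text = "example passkey"                  # 15 characters
exact = len(str(256**len(var_text)))          # digit count via the original approach
approx = int(log10(256) * len(var_text)) + 1  # digit count via the log identity
print(exact, approx)                          # 37 37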
This looks like it's producing a string version of the number of digits in the 255th power of the length of a string. Is that right? I'd be curious what that's used for.
You could compute the number differently, but it's not shorter and I'm not sure it's prettier:
str(int(math.ceil(math.log10(len(var_text))*255)))
or:
"%d" % math.ceil(math.log10(len(v))*255)
Are you trying to determine the number of possible strings having the same length as var_text? If so, you have your base and exponent reversed. You want to use 255**len(var_text) instead of len(var_text)**255.
But, I have to ask ... how long do these passkeys get to be, and what are you using them for?
And, why not just use the length of the passkey as an indicator of its length?
Firstly, if your main problem is manipulating huge floating point expressions, use the bigfloat package:
>>> import bigfloat
>>> bigfloat.BigFloat('1e1000')
BigFloat.exact('1.0000000000000001e+1000', precision=53)
As for the details in your question: len(str(num)) is approximately equal to log(num, 10) + 1. This is not significantly shorter, but it's certainly a better way of expressing it in code (for the benefit of anyone who doesn't know that off the top of their head). You can then simplify it with log laws:
len(str(x**y))
= log(x**y, 10) + 1
= y * log(x, 10) + 1
So maybe you'll find:
"%i" % (log(len(var_text),10)*255 + 1)
... is better? It's not significantly shorter, but it's a much clearer mathematical relationship between input and output.
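A quick check that this agrees with the original expression (arbitrary example input, so the specific number is only illustrative):
from math import log

var_text = "x" * 42
print(len(str(len(var_text)**255)))               # 414
print("%i" % (log(len(var_text), 10) * 255 + 1))  # 414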