I am trying to use pow() with a 256-bit number as the base and anywhere from a 1-bit to a 4000-bit number as the exponent, and then reduce the result modulo n (n is a 4000-bit number).
This means the biggest possible base is 115792089237316195423570985008687907853269984665640564039457584007913129639935, and the biggest possible exponent is 13182040934309431001038897942365913631840191610932727690928034502417569281128344551079752123172122033140940756480716823038446817694240581281731062452512184038544674444386888956328970642771993930036586552924249514488832183389415832375620009284922608946111038578754077913265440918583125586050431647284603636490823850007826811672468900210689104488089485347192152708820119765006125944858397761874669301278745233504796586994514054435217053803732703240283400815926169348364799472716094576894007243168662568886603065832486830606125017643356469732407252874567217733694824236675323341755681839221954693820456072020253884371226826844858636194212875139566587445390068014747975813971748114770439248826688667129237954128555841874460665729630492658600179338272579110020881228767361200603478973120168893997574353727653998969223092798255701666067972698906236921628764772837915526086464389161570534616956703744840502975279094087587298968423516531626090898389351449020056851221079048966718878943309232071978575639877208621237040940126912767610658141079378758043403611425454744180577150855204937163460902512732551260539639221457005977247266676344018155647509515396711351487546062479444592779055555421362722504575706910949375.
But I am getting an error saying that those two numbers are too large to convert to float:
OverflowError: int too large to convert to float
Can someone help me perform this calculation with those values?
This is the tricky part of my code:
pre_key = pow(prime_g, private_number)
pre_key = pre_key % n
As @Tim Roberts pointed out, I also don't think you should brute-force this. That being said, if you really NEED TO brute-force your problem, you might be able to use the mpmath library. I don't know how well it handles numbers of that extreme size, but in principle it allows for arbitrary precision.
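For the modular step itself, though, no extra library should be needed. As the answers to the related question below also point out, Python's built-in three-argument pow() performs modular exponentiation directly, keeping every intermediate value smaller than the modulus, so the giant power is never materialized and nothing is converted to float. A minimal sketch using the question's own variable names:

# pow(base, exp, mod) computes (base ** exp) % mod by repeated squaring,
# reducing modulo `mod` at every step.
pre_key = pow(prime_g, private_number, n)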
Steps to reproduce how I came to believe this:
>>> 2 ** 4324567
Keyboard-interrupt the above if you get tired of waiting: the comparative operation takes less than a second, while the above takes something like 20.
>>> 2 ** 4324567 % 55
You'll notice the one with the modulus operation is way quicker. The only way this could be possible is if it uses something like the Chinese remainder theorem, right?
What's weird is that if the exponent is a computed value (like 2 * 2162283, or e where e = 2 * 2162283), it doesn't seem to do this. Can someone explain what's going on here?
The time to do the exponentiation here:
>>> 2 ** 4324567
is actually brief, which you can verify by doing, e.g.,
>>> x = 2 ** 4324567
instead. The vast bulk of the time in the original is actually consumed by converting the internal 4-million+ bit binary integer into a decimal string for display.
That's expensive. Converting between base-2 and base-10 representations generally takes time quadratic in the number of bits (or digits).
Which is also why the one with the modulus operation appears quicker: there are only 2 decimal digits to display. That goes fast.
However, if you're going to do modular exponentiation, use the 3-argument version of pow() instead. That can be unboundedly more efficient than computing a giant power first and only then doing a modulus operation.
The Chinese Remainder Theorem is not used here, and not useful here either. If you want to do modular exponentiation, use 3-argument pow: pow(2, 4324567, 55).
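A rough timing sketch of that split (numbers vary by machine and Python version; note that Python 3.11+ caps int-to-string conversion at 4300 digits by default, so the cap is lifted first):

import sys
import timeit

# Python 3.11+ refuses huge int-to-decimal-string conversions by default.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(0)  # 0 removes the cap

print(timeit.timeit('x = 2 ** 4324567', number=1))       # quick: just the power
print(timeit.timeit('s = str(2 ** 4324567)', number=1))  # slow: base-2 to base-10 conversion
print(timeit.timeit('pow(2, 4324567, 55)', number=1))    # quickest: modular exponentiation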
The second line runs much faster because almost all of the work in the first line is actually in constructing the string representation of the result, not in performing the exponentiation. The second line produces a much smaller number which is much quicker to stringify.
I'd like to calculate ⌊2^1918 · π⌋ + 124476 in Python, but I get this error when I do it using the following code:
import math

b = math.floor((2**1918) * math.pi) + 124476
print(b)
OverflowError: int too large to convert to float
How can you get this to work? In the end I'd just like to have it all as hexadecimal (if that helps with answering my question), but I was actually only trying to get it as an integer first :)
The right solution really depends on how precise the result needs to be. 2^1918 is far too large for a 64-bit float (Python's int handles it fine, but multiplying by math.pi forces a float conversion), so you cannot get away with direct floating-point calculation without losing all the precision below roughly 10^300.
In order to compute the desired result, you should use arbitrary-precision calculation techniques. You can implement the algorithms yourself or use one of the available libraries.
Assuming you are looking for the integer part of your expression, it will take about 600 decimal digits to store the result precisely. Here is how you can get it using mpmath:
from mpmath import mp
mp.dps = 600
print(mp.floor(mp.power(2, 1918)*mp.pi + 124476))
74590163000744215664571428206261183464882552592869067139382222056552715349763159120841569799756029042920968184704590129494078052978962320087944021101746026347535981717869532122259590055984951049094749636380324830154777203301864744802934173941573749720376124683717094961945258961821638084501989870923589746845121992752663157772293235786930128078740743810989039879507242078364008020576647135087519356182872146031915081433053440716531771499444683048837650335204793844725968402892045220358076481772902929784589843471786500160230209071224266538164123696273477863853813807997663357545.0
Next, all you have to do is to convert it to hex representation (or extract hex from its internal binary form), which is a matter for another subject :)
The basic problem is what the message says. Python integers can be arbitrarily large, larger even than the range of a float. 2**1918 contains 578 decimal digits and is far bigger than the biggest float your IEEE 754 hardware can represent, so the conversion simply fails.
You could try looking at the mpmath module. It is designed for floating point arithmetic outside the bounds of what ordinary hardware can handle.
I think the problem can be solved without resorting to high-precision arithmetic. floor(x + m), where m is an integer, equals floor(x) + m. So in this case you are looking for floor(2**1918 * pi) plus an integer (namely 124476). floor(2**k * pi) is just the first k + 2 bits of pi (two bits for pi's integer part, then k fractional bits). So just look up the first 1920 bits of pi, add the bits for 124476, and output as hex digits.
A spigot algorithm can generate digits of pi without using arbitrary-precision arithmetic. A quick web search finds some Python implementations for generating digits in base 10. I didn't see anything about base 2, but the Bailey–Borwein–Plouffe (BBP) formula, due to Plouffe, generates base-16 digits, and each hex digit is exactly four bits, if I am not mistaken.
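For the curious, here is a minimal float-based sketch of BBP digit extraction (the helper names are my own; the standard trick is to split each series at the target position and reduce the head terms with three-argument pow). Float rounding limits how far out it can reliably reach, but it never stores a long expansion:

def bbp_series(j, d):
    # Fractional part of sum over k >= 0 of 16**(d - k) / (8*k + j).
    s = 0.0
    for k in range(d + 1):
        # Head terms: reduce 16**(d - k) modulo the denominator first,
        # so we only ever divide small integers.
        s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = d + 1
    while True:
        # Tail terms shrink geometrically; stop once below float precision.
        term = 16.0 ** (d - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digit(d):
    # d-th hex digit of pi after the point, 0-indexed (pi = 3.243F6A88...).
    x = (4 * bbp_series(1, d) - 2 * bbp_series(4, d)
         - bbp_series(5, d) - bbp_series(6, d)) % 1.0
    return '0123456789abcdef'[int(16 * x)]

print(''.join(pi_hex_digit(d) for d in range(8)))  # 243f6a88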
The problem is that (2**1918) * math.pi attempts to convert the integer to 64-bit floating point, whose range is far too small to hold it. You can convert math.pi to a Fraction to carry out the arithmetic with arbitrary precision.
>>> import math, fractions
>>> math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
74590163000744212756918704280961225881025315246628098737524697383138220762542289800871336766911957454080350508173317171375032226685669280397906783245438534131599390699781017605377332298669863169044574050694427882869191541933796848577277592163846082732344724959222075452985644173036076895843129191378853006780204194590286508603564336292806628948212533561286572102730834409985441874735976583720122784469168008083076285020654725577288682595262788418426186598550864392013191287665258445673204426746083965447956681216069719524525240073122409298640817341016286940008045020172328756796
Note that arbitrary precision applies to the calculation; math.pi itself is defined only with 64-bit floating point precision. Use an external library, such as mpmath, if you need more precise digits of pi.
To convert this to a hexadecimal string, use hex or a string format:
>>> hex(math.floor((2**1918) * fractions.Fraction(math.pi) + 124476))
'0xc90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> '%x' % math.floor((2**1918) * fractions.Fraction(math.pi) + 124476)
'c90fdaa22168c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001e63c'
>>> f'{math.floor((2**1918) * fractions.Fraction(math.pi) + 124476):X}'
'C90FDAA22168C0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001E63C'
For string formats, x provides lower-case hex whereas X provides upper-case.
I just started learning about converting numbers into binary, specifically two's complement (or I guess binary in general). I'm obviously quite new to coding, so I'll appreciate any help here. The course presented the following code for converting:
num = 19
if num < 0:
    isNeg = True
    num = abs(num)
else:
    isNeg = False

result = ''
if num == 0:
    result = '0'
while num > 0:
    result = str(num % 2) + result
    num = num // 2
if isNeg:
    result = '-' + result
This raised a couple of questions for me, and after doing some research (mostly here on Stack Overflow), I found myself more confused than I was before. I'm hoping somebody can break things down a bit more for me. Here are some of those questions:
I thought it was outright wrong that the code suggested just appending a - to the front of a binary number to show its negative counterpart. It looks like bin() does the same thing, but don't you have to flip the bits and add a 1 or something? Is there a reason for this other than making it easy to comprehend/read?
Was reading here and one of the answers in particular said that Python doesn't really work in two's complement, but something else that mimics it. The disconnect here for me is that Python shows me one thing but is storing the numbers a different way. Again, is this just for ease of use? Is bin() using two's complement or Python's method?
Follow-up to that one, how does the 'sign-magnitude' format mentioned in the above answer differ from two's complement?
The Professor doesn't talk at all about 8-bit, 16-bit, 64-bit, etc., which I saw a lot of while reading up on this. Where does this distinction come from, and does Python use one? Or are those designations specific to the program that I might be coding?
A lot of the posts I've read only reference how Python stores integers. Is that suggesting that it stores floats a different way, or are they just speaking broadly?
As I wrote this all up, I sort of realized that maybe I'm diving into the deep end before learning how to swim, but I'm curious like that and like to have a deeper understanding of stuff before moving on.
I thought it was outright wrong that the code suggested just appending a - to the front of a binary number to show its negative counterpart. It looks like bin() does the same thing, but don't you have to flip the bits and add a 1 or something? Is there a reason for this other than making it easy to comprehend/read?
You have to somehow designate the number as negative. You can add another symbol (-), add a sign bit at the very beginning, use ones'-complement, use two's-complement, or some other completely made-up scheme that works. Both the ones'- and two's-complement representations of a number require a fixed number of bits, which doesn't exist for Python integers:
>>> 2**1000
1071508607186267320948425049060001810561404811705533607443750
3883703510511249361224931983788156958581275946729175531468251
8714528569231404359845775746985748039345677748242309854210746
0506237114187795418215304647498358194126739876755916554394607
7062914571196477686542167660429831652624386837205668069376
The natural solution is to just prepend a minus sign. You can similarly write your own version of bin() that requires you to specify the number of bits and return the two's-complement representation of the number.
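A minimal sketch of such a function (the name and the error handling are my own choices):

def to_twos_complement(n, bits):
    # Render n as a fixed-width two's-complement bit string.
    if not -(1 << (bits - 1)) <= n < (1 << (bits - 1)):
        raise ValueError(f"{n} does not fit in {bits} bits")
    # Masking with 2**bits - 1 extracts the two's-complement pattern:
    # Python's & treats a negative int as if it were infinitely sign-extended.
    return format(n & ((1 << bits) - 1), f'0{bits}b')

print(to_twos_complement(19, 8))   # 00010011
print(to_twos_complement(-19, 8))  # 11101101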
Was reading here and one of the answers in particular said that Python doesn't really work in two's complement, but something else that mimics it. The disconnect here for me is that Python shows me one thing but is storing the numbers a different way. Again, is this just for ease of use? Is bin() using two's complement or Python's method?
Python is a high-level language, so you don't really know (or care) how your particular Python runtime internally stores integers. Whether you use CPython, Jython, PyPy, IronPython, or something else, the language specification only defines how integers should behave, not how they should be represented in memory. bin() just takes a number and prints it out using binary digits, the same way you'd convert 123 into base 2.
Follow-up to that one, how does the 'sign-magnitude' format mentioned in the above answer differ from two's complement?
Sign-magnitude usually encodes a number n as 0bXYYYYYY..., where X is the sign bit and YY... are the binary digits of the non-negative magnitude. Arithmetic with numbers encoded as two's-complement is more elegant due to the representation, while sign-magnitude encoding requires special handling for operations on numbers of opposite signs.
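A quick comparison of the two encodings at a fixed width (8 bits, purely illustrative):

n = 19
sign_magnitude = 0b10000000 | n     # sign bit set, magnitude bits unchanged
twos_complement = -n & 0xFF         # masking extracts the 8-bit pattern of -19
print(format(sign_magnitude, '08b'))   # 10010011
print(format(twos_complement, '08b'))  # 11101101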
The Professor doesn't talk at all about 8-bit, 16-bit, 64-bit, etc., which I saw a lot of while reading up on this. Where does this distinction come from, and does Python use one? Or are those designations specific to the program that I might be coding?
No, Python doesn't define a maximum size for its integers because it's not that low-level. 2**1000000 computes fine, as will 2**10000000 if you have enough memory. Fixed n-bit numbers arise when hardware makes certain sizes cheaper: processors have instructions that work quickly with 32-bit or 64-bit numbers, but not with 87-bit numbers.
A lot of the posts I've read only reference how Python stores integers. Is that suggesting that it stores floats a different way, or are they just speaking broadly?
It depends on what your Python runtime uses. Usually floating-point numbers are C doubles (IEEE 754 binary64), but the language doesn't require that.
don't you have to flip the bits and add a 1 or something?
Yes, for two's-complement notation you invert all the bits and add one to get the negative counterpart.
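Checked at a fixed width (8 bits here, purely illustrative):

>>> n = 19
>>> mask = 0xFF                # fix the width at 8 bits
>>> ((n ^ mask) + 1) & mask    # invert all bits, then add one
237
>>> format(237, '08b')         # the 8-bit pattern for -19
'11101101'
>>> 237 - 256                  # reinterpret that pattern as signed
-19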
Is bin() using two's complement or Python's method?
Two's complement is a practical way to represent negative numbers in electronics, which only have 0 and 1 to work with. Internally, virtually all modern microprocessors use two's complement for negative integers. For more info, see a textbook on computer architecture.
how does the 'sign-magnitude' format mentioned in the above answer differ from two's complement?
You should look at what this code does and why it is there:

while num > 0:
    result = str(num % 2) + result  # prepend the next binary digit (LSB first)
    num = num // 2                  # drop that bit
I'm getting something that doesn't seem to be making a lot of sense. I was practicing my coding by making a little program that would give me the probability of getting certain cards within a certain timeframe of a card game. In order to calculate the chances, I would need to create a method that would perform division and report the chances as a fraction and as a decimal. So I designed this:
from fractions import Fraction

def time_odds(card_count, turns, deck_size=60):
    chance_of_occurence = float(card_count) / float(deck_size)
    opening_hand_odds = 7 * chance_of_occurence
    turn_odds = (7 + turns) * chance_of_occurence
    print("Chance of it being in the opening hand: %s or %s" % (opening_hand_odds, Fraction(opening_hand_odds)))
    print("Chance of it being acquired by turn %s : %s or %s" % (turns, turn_odds, Fraction(turn_odds)))
and then I used it like so:
time_odds(3,5)
but for whatever reason I got this as the answer:
"Chance of it being in the opening hand: 0.35000000000000003 or
6305039478318695/18014398509481984"
"Chance of it being acquired by turn 5 : 0.6000000000000001 or
1351079888211149/2251799813685248"
So it's almost right, except the decimal is just slightly off, by a difference like 0.00000000000000003.
Python doesn't do this when I just make it do division like this:
print (7*3/60)
This gives me just 0.35, which is correct. The only difference I can observe is that I get the slightly incorrect values when I am dividing with variables rather than just numbers.
I've looked around a little for an answer, and most incorrect-division questions have to do with integer division (also called floor division), but I didn't manage to find anything addressing this.
I've had a similar problem with Python doing this when I was dividing really big numbers. What's going on, and what can I do to correct it?
The extra digits you're seeing are floating point precision errors. As you do more and more operations with floating point numbers, the errors have a chance of compounding.
The reason you don't see them when you try to replicate the computation by hand is that your replication performs the operations in a different order. If you compute 7 * 3 / 60, the multiplication happens first (with no error), and the division introduces a small enough error that Python's float type hides it for you in its repr (because 0.35 unambiguously refers to the same float value as the computation). If you do 7 * (3 / 60), the division happens first (introducing error) and then the multiplication increases the size of the error to the point that it can't be hidden (because 0.35000000000000003 is a different float value than 0.35).
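You can see the order-of-operations difference directly at the prompt:

>>> 7 * 3 / 60    # multiplication first: exactly 21, then one rounded division
0.35
>>> 7 * (3 / 60)  # division first: 0.05 already carries error, which then grows
0.35000000000000003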
To avoid printing out the extra digits that are probably error, you may want to explicitly specify a precision to use when turning your numbers into strings. For instance, rather than using the %s format code (which calls str on the value), you could use %.3f, which will round your number off after three decimal places.
There's another issue with your Fractions. You're creating the Fraction directly from the floating point value, which already has the error calculated in. That's why you're seeing the fraction print out with a very large numerator and denominator (it's exactly representing the same number as the inaccurate float). If you instead pass integer numerator and denominator values to the Fraction constructor, it will take care of simplifying the fraction for you without any floating point inaccuracy:
print("Chance of it being in the opening hand: %.3f or %s"
% (opening_hand_odds, Fraction(7*card_count, deck_size)))
This should print out the numbers as 0.350 and 7/20. You can of course choose whatever number of decimal places you want.
Completely separate from the floating point errors, the calculation isn't actually getting the probability right. The formula you're using may be a good enough one for doing in your head while playing a game, but it's not completely accurate. If you're using a computer to crunch the numbers for you, you might as well get it right.
The probability of drawing at least one of N specific cards from a deck of size M after D draws is:
1 - (comb(M-N, D) / comb(M, D))
Where comb is the binomial coefficient or "combination" function (often spoken as "N choose R" and written "nCr" in mathematics). Older Python versions don't have an implementation of that function in the standard library (math.comb was added in Python 3.8), but there are a lot of add-on modules you may already have installed that provide one, or you can pretty easily write your own. See this earlier question for more specifics.
For your example parameters, the correct opening-hand odds are 5397/17110, or about 0.315.
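Here is a minimal sketch of that calculation with exact arithmetic, using Fraction and math.comb (Python 3.8+); the function name and signature are illustrative:

from fractions import Fraction
from math import comb

def at_least_one_odds(copies, draws, deck_size=60):
    # Exact chance of drawing at least one of `copies` target cards in
    # `draws` draws, without replacement, from a deck of `deck_size`.
    miss = Fraction(comb(deck_size - copies, draws), comb(deck_size, draws))
    return 1 - miss

opening = at_least_one_odds(3, 7)       # the 7-card opening hand
print(opening, float(opening))          # 5397/17110 0.3154...
by_turn_5 = at_least_one_odds(3, 12)    # opening hand plus 5 more draws
print(by_turn_5, float(by_turn_5))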
So I've decided to try to solve my physics homework by writing some python scripts to solve problems for me. One problem that I'm running into is that significant figures don't always seem to come out properly. For example this handles significant figures properly:
>>> from decimal import Decimal
>>> Decimal('1.0') + Decimal('2.0')
Decimal("3.0")
But this doesn't:
>>> Decimal('1.00') / Decimal('3.00')
Decimal("0.3333333333333333333333333333")
So two questions:
Am I right that this isn't the expected amount of significant digits, or do I need to brush up on significant digit math?
Is there any way to do this without having to set the decimal precision manually? Granted, I'm sure I can use numpy to do this, but I just want to know if there's a way to do this with the decimal module out of curiosity.
Changing the decimal working precision to 2 digits is not a good idea, unless you are absolutely only going to perform a single operation.
You should always perform calculations at higher precision than the level of significance, and only round the final result. If you perform a long sequence of calculations and round to the number of significant digits at each step, errors will accumulate. The decimal module doesn't know whether any particular operation is one in a long sequence, or the final result, so it assumes that it shouldn't round more than necessary. Ideally it would use infinite precision, but that is too expensive so the Python developers settled for 28 digits.
Once you've arrived at the final result, what you probably want is quantize:
>>> (Decimal('1.00') / Decimal('3.00')).quantize(Decimal("0.001"))
Decimal("0.333")
You have to keep track of significance manually. If you want automatic significance tracking, you should use interval arithmetic. There are some libraries available for Python, including pyinterval and mpmath (which supports arbitrary precision). It is also straightforward to implement interval arithmetic with the decimal library, since it supports directed rounding.
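For instance, here is a minimal interval-division sketch with decimal's directed rounding modes (the function name is my own):

from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def interval_div(a, b, prec=28):
    # Bracket the true quotient between a floor-rounded and a
    # ceiling-rounded result.
    with localcontext() as ctx:
        ctx.prec = prec
        ctx.rounding = ROUND_FLOOR
        lo = a / b
        ctx.rounding = ROUND_CEILING
        hi = a / b
    return lo, hi

print(interval_div(Decimal('1.00'), Decimal('3.00'), prec=4))
# (Decimal('0.3333'), Decimal('0.3334'))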
You may also want to read the Decimal Arithmetic FAQ: Is the decimal arithmetic ‘significance’ arithmetic?
Decimals won't throw away digits like that. If you really want to limit results to 2 significant digits (note that prec counts significant digits, not decimal places), then try

decimal.getcontext().prec = 2
EDIT: You can alternatively call quantize() every time you multiply or divide (addition and subtraction will preserve the 2 dps).
Just out of curiosity: is it necessary to use the decimal module? Why not use floating point, with a significant-figures rounding of numbers when you are ready to see them? Or are you trying to keep track of the significant figures of the computation (as when you have to do an error analysis of a result, calculating the computed error as a function of the uncertainties that went into the calculation)? If you want a rounding function that rounds from the left of the number instead of the right, try:
def lround(x, leadingDigits=0):
    """Return x either as 'print' would show it (the default)
    or rounded to the specified digit as counted from the leftmost
    non-zero digit of the number, e.g. lround(0.00326, 2) --> 0.0033
    """
    assert leadingDigits >= 0
    if leadingDigits == 0:
        return float(str(x))  # just give it back like 'print' would give it
    # '%.Ne' keeps N+1 significant digits, so subtract one to keep
    # exactly `leadingDigits` of them (matching the docstring example).
    return float('%.*e' % (int(leadingDigits) - 1, x))
The numbers will look right when you print them or convert them to strings, but if you are working at the prompt and don't explicitly print them they may look a bit strange:
>>> lround(1./3.,2),str(lround(1./3.,2)),str(lround(1./3.,4))
(0.33000000000000002, '0.33', '0.3333')
Decimal defaults to 28 significant digits of precision.
The only way to limit the number of digits it returns is by altering the context precision.
What's wrong with floating point?
>>> "%8.2e"% ( 1.0/3.0 )
'3.33e-01'
It was designed for scientific-style calculations with a limited number of significant digits.
If I understand Decimal correctly, its "precision" is the total number of significant digits kept for every result, fixed by the context.
You seem to want something else: per-value significance tracking, where each number carries its own count of significant digits (one more than the number of digits after the decimal point in scientific notation).
I would be interested in learning about a Python module that does significant-digits-aware floating point computations.