The code below isn't working right for some inputs.
a, i = set(), 1
while i <= 10000:
    a.add(i)
    i <<= 1

N = int(input())
if N in a:
    print("True")
else:
    print("False")
My initial idea was to check for each input if it's a power of 2 by starting from 1 and multiplying by 2 until exceeding the input number, comparing at each step. Instead, I store all the powers of 2 in a set beforehand, in order to check a given input in O(1). How can this be improved?
Bit Manipulations
One approach would be to use bit manipulations:
(n & (n-1) == 0) and n != 0
Explanation: every power of 2 has exactly 1 bit set to 1 (the bit at that number's log base-2 index). When subtracting 1 from it, that bit flips to 0 and all the lower bits flip to 1. That makes the two numbers bitwise complements of each other over those bits, so when AND-ing them we get 0 as the result.
For example:

n = 8

decimal |  8 = 2**3  |  8 - 1 = 7  |  8 & 7 = 0
        |         ^  |             |
binary  |  1 0 0 0   |  0 1 1 1    |    1 0 0 0
        |  ^         |             |  & 0 1 1 1
index   |  3 2 1 0   |             |    -------
        |            |             |    0 0 0 0

-----------------------------------------------------

n = 5

decimal |  5 = 2**2 + 1  |  5 - 1 = 4  |  5 & 4 = 4
        |                |             |
binary  |  1 0 1         |  1 0 0      |    1 0 1
        |                |             |  & 1 0 0
index   |  2 1 0         |             |    -----
        |                |             |    1 0 0
So, in conclusion: whenever subtracting one from a number and AND-ing the result with the number itself gives 0, that number is a power of 2!
Of course, AND-ing anything with 0 will give 0, so we add the check for n != 0.
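To see it in action, here is a minimal sketch wrapping the check in a function (the helper name is_pow2 is just for illustration):

def is_pow2(n):
    # a power of two has exactly one bit set, and n must not be 0
    return n != 0 and (n & (n - 1)) == 0

print([n for n in range(1, 70) if is_pow2(n)])  # [1, 2, 4, 8, 16, 32, 64]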
math functions
You could always use math functions, but notice that using them without care could cause incorrect results:
math.log(x[, base]) with base=2:
import math
math.log(n, 2).is_integer()
math.log2(x):
math.log2(n).is_integer()
Worth noting that for any n <= 0, both functions raise a ValueError, since the logarithm is mathematically undefined there (so this shouldn't present a logical problem).
math.frexp(x):
abs(math.frexp(n)[0]) == 0.5
As noted above, for some numbers these functions are not accurate and actually give FALSE RESULTS:
math.log(2**29, 2).is_integer() will give False
math.log2(2**49-1).is_integer() will give True
math.frexp(2**53+1)[0] == 0.5 will give True!!
This is because math functions use floats, and those have an inherent accuracy problem.
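A quick sketch to reproduce these pitfalls (the results are the ones stated above):

import math

# Each of these checks gives the wrong answer because of float rounding:
print(math.log(2**29, 2).is_integer())        # False, although 2**29 is a power of 2
print(math.log2(2**49 - 1).is_integer())      # True, although 2**49 - 1 is not
print(abs(math.frexp(2**53 + 1)[0]) == 0.5)   # True, although 2**53 + 1 is not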
(Expanded) Timing
Some time has passed since this question was asked and some new answers have come up over the years. I decided to expand the timing to include all of them.
According to the math docs, log with a given base actually calculates log(x)/log(base), which is obviously slower. log2 is said to be more accurate, and is probably more efficient. Bit manipulations are simple operations that don't call any functions.
So the results are:
Ev: 0.28 sec
log with base=2: 0.26 sec
count_1: 0.21 sec
check_1: 0.2 sec
frexp: 0.19 sec
log2: 0.1 sec
bit ops: 0.08 sec
The code I used for these measures can be recreated in this REPL (forked from this one).
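If you want to rerun something along those lines yourself, here is a rough sketch with timeit; this is not the original benchmark code, and the function names are placeholders:

import math
import timeit

def check_bit(n):   return n != 0 and (n & (n - 1)) == 0
def check_log2(n):  return n > 0 and math.log2(n).is_integer()
def check_frexp(n): return n != 0 and abs(math.frexp(n)[0]) == 0.5

for f in (check_bit, check_log2, check_frexp):
    # time 100 passes over the first 10000 integers with each check
    t = timeit.timeit(lambda: [f(n) for n in range(1, 10000)], number=100)
    print(f.__name__, round(t, 3), "sec")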
Refer to the excellent and detailed answer to "How to check if a number is a power of 2" — for C#. The equivalent Python implementation, also using the "bitwise and" operator &, is this:
def is_power_of_two(n):
    return (n != 0) and (n & (n-1) == 0)
As Python has arbitrary-precision integers, this works for any integer n as long as it fits into memory.
To briefly summarize the answer cited above: the first term, before the logical and operator, simply checks that n isn't 0, since 0 is not a power of 2. The second term checks that it's a power of 2 by making sure that all bits are 0 after the bitwise & operation. That bitwise check is only True for powers of 2, with one exception: it is also True if n (and thus all of its bits) was 0 to begin with.
To add to this: as the logical and "short-circuits" the evaluation of the two terms, it would be more efficient to reverse their order if, in a particular use case, a given n is less likely to be 0 than to be a power of 2.
In binary representation, a power of 2 is a 1 (one) followed by zeros. So if the binary representation of the number has a single 1, then it's a power of 2. No need here to check num != 0:
print(1 == bin(num).count("1"))
The bin builtin returns a string matching "0b1[01]*" (regex notation) for every strictly positive integer (if system memory suffices, that is), so we can write the Boolean expression
'1' not in bin(abs(n))[3:]
that yields True for n equal to 0, 1 and 2**k (and, because of the abs, for their negatives as well).
1 is 2**0 so it is unquestionably a power of two, but 0 is not, unless you take into account the limit of x=2**k for k → -∞. Under the second assumption we can write simply
check0 = lambda n: '1' not in bin(abs(n))[3:]
and under the first one (excluding 0)
check1 = lambda n: '1' not in bin(abs(n))[3:] and n != 0
Of course the solution proposed here is just one of the many possible ways to check whether a number is a power of two, and certainly not the most efficient one, but I'm posting it for the sake of completeness :-)
Note: this should be a comment on Tomerikoo's answer (currently the most upvoted) but unfortunately Stack Overflow won't let me comment due to reputation points.
Tomerikoo's answer is very well explained and thought out. While it covers most applications, I believe it needs a slight modification to make it more robust against a trivial case. Their answer is:
(n & (n-1) == 0) and n != 0
The second half checks whether the input is an actual 0, which would invalidate the bitwise-and logic. There is one other trivial case where this happens: when the input is 1, the bitwise-and again involves a 0, just from the second operand (n-1). Strictly speaking, 2^0 = 1 of course, but I doubt that counting it is useful for most applications. A trivial modification to account for that would be:
(n & (n-1) == 0) and (n != 0 and n-1 != 0)
The following code checks whether n is a power of 2 or not:
def power_of_two(n):
    count = 0
    st = bin(n)[2:]          # binary representation without the '0b' prefix
    for i in range(len(st)):
        if st[i] == '1':
            count += 1
    if count == 1:
        print("True")
    else:
        print("False")
Many beginners won't know how code like (n != 0) and (n & (n-1) == 0) works.
But if we want to check whether a number is a power of 2 or not, we can convert the number to binary format and see it pretty clearly.
For Example:
^ (to the power of)
2^0 = 1 (Bin Value : 0000 0001)
2^1 = 2 (Bin Value : 0000 0010)
2^2 = 4 (Bin Value : 0000 0100)
2^3 = 8 (Bin Value : 0000 1000)
2^4 = 16 (Bin Value : 0001 0000)
2^5 = 32 (Bin Value : 0010 0000)
2^6 = 64 (Bin Value : 0100 0000)
2^7 = 128 (Bin Value : 1000 0000)
If you look at the binary values of all powers of 2, you can see that there is only one bit True. That's the logic in this program.
So if we count the number of 1 bits in the binary representation and it is equal to 1, then the given number is a power of 2; otherwise it is not.
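A minimal sketch of that bit-counting idea (just illustrating the explanation above, not any particular answer's code):

for n in (1, 2, 3, 16, 24, 1024):
    # a power of 2 has exactly one '1' in its binary representation
    print(n, bin(n).count("1") == 1)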
n = int(input())
if '1' in list(bin(n))[3:]:   # can also use: '1' in bin(n)[3:]  or  format(n, 'b')[1:]
    print("False")
else:
    print("True")
For every number N that is a power of 2, say N = 2^k with k a non-negative integer, bin(N) is a string of the form 0b1 followed only by zeros, e.g. 0b10000000. So if any '1' appears after the leading bit (i.e. in bin(N)[3:]), N is not a power of 2 (note the corner case N = 0, where that slice is empty).
Also, format(n, 'b') returns the same as bin(n)[2:], so it can be used instead:
Source
>>> format(14, '#b'), format(14, 'b')
('0b1110', '1110')
>>> f'{14:#b}', f'{14:b}'
('0b1110', '1110')
Use *2 instead of bit shifts. Multiplication or addition is much more readable.
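For example, the set-building loop from the question could be written as:

a, i = set(), 1
while i <= 10000:
    a.add(i)
    i *= 2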
In Python 3.10+, int.bit_count() counts the set bits of a number, so we can use
n.bit_count() == 1
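A quick sketch of that check in action (requires Python 3.10 or later):

for n in (0, 1, 3, 16, 24, 1024):
    print(n, n.bit_count() == 1)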
Most of the above answers use bin() or format(int(input()), "b").
The code below also works: Ev(x) returns True if x is a power of 2.
# Ev(x) ~ ispoweroftwo(x)
def Ev(x):
    if x < 1:  return False   # 0 and negatives are not powers of 2 (also avoids infinite recursion)
    if x == 1: return True    # 1 == 2**0
    if x % 2:  return False   # odd numbers greater than 1 can't be powers of 2
    return Ev(x//2)
The above code is based on the same idea as generating bin():
# This function returns the binary representation of an integer
def binary(x):
    a = ""
    while x != 0:
        a += str(x % 2)
        x = x // 2
    return a[::-1]

I = int(input())
print(format(I, "b"))   # to cross-check whether the two are equal or not
print(binary(I))
I wanted to add my answer because I found that what we are doing with bin(x)[3:] or format(x, "b") is essentially asking, over and over, whether the given number x is divisible by two.
Python seems to have trouble returning the correct value for numbers to the power of zero.
When I give it a literal expression, it works properly, but it always returns positive 1 for anything more complex than a raw number raised to the zeroth power.
Here are some tests:
>>> -40 ** 0 # this is the correct result
-1
>>> (0 - 40) ** 0 # you'd expect this to give the same thing, but...
1
>>> a = -40 # let's try something else...
>>> a ** 0
1
>>> int(-40) ** 0 # this oughtn't to change anything, yet...
1
>>> -66.6 ** 0 # raw floats are fine.
-1.0
>>> (0 - 66.6) ** 0.0 # ...until you try and do something with them.
1.0
UPDATE: pow() gives this result, too, so probably the first result is exceptional...
>>> pow(-60, 0)
1
Could it be some problem with signed integers? I need this for a trinary switch with values 1, -1, or 0, depending on whether an input is any positive or negative value, or zero. I could accomplish the same thing with something like:
if val > 0: switch = 1
elif val < 0: switch = -1
else: switch = 0
...and then using the variable switch for my purposes.
But that wouldn't answer the question I have about how Python deals with zero-powers.
(I will also accept that -40 ** 0 only returns -1 by accident (phenomenally), but I doubt this is the case...)
Python is correct and doing what you would expect it to do. It is a matter of operator precedence: any number (negative or positive) raised to the zeroth power is equal to 1, but exponentiation binds tighter than the unary minus, so -40 ** 0 is the negation of 40 ** 0. So in more detail, what Python sees is this:
1st case:
-40 ** 0 = -(40 ** 0) = -(1) = -1
2nd case:
(0 - 40) ** 0 = (-40) ** 0 = 1
The 5th case, int(-40) ** 0, works similarly because of the parentheses of the call: int(-40) is evaluated first, producing the value -40, which is then raised to the power 0:
int(-40) ** 0 = (-40) ** 0 = 1
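If you want to see the grouping explicitly, one quick sketch is to look at how Python parses the two expressions:

import ast

# Shows that -40 ** 0 is parsed as a unary minus applied to the
# result of 40 ** 0, i.e. UnaryOp(USub, BinOp(40, Pow, 0)),
# while (0 - 40) ** 0 raises the subtraction result to the power 0.
print(ast.dump(ast.parse("-40 ** 0", mode="eval").body))
print(ast.dump(ast.parse("(0 - 40) ** 0", mode="eval").body))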
Just stumbled upon this question; I don't get the syntax.
But I don't think that (-40)^0 = -40^0.
On the left side, the exponentiation is the last operation. This is why the left side should equal 1.
On the right side, the minus sign is the last operation. This is why the result should be -1.
There is no problem here. Every number raised to the power 0 is 1.
In Python, unary signs like - and + have lower precedence than the power operator (**), so when you put 0 - 40 inside parentheses you get (-40)**0, which is 1, but when you write -1**0, 1**0 is evaluated first and then negated.
>>> (0-4)**0 == (-1)**0 == 1
>>> -1**0 == -(1**0) == -1
Lately I bumped repeatedly into the concept of LFSRs, which I find quite interesting because of its links with different fields and also fascinating in itself. It took me some effort to understand; the final help was this really good page, much better than the (at first) cryptic Wikipedia entry. So I wanted to write some small code for a program that worked like an LFSR. To be more precise, one that somehow shows how an LFSR works. Here's the cleanest thing I could come up with after some lengthier attempts (Python):
def lfsr(seed, taps):
    sr, xor = seed, 0
    while 1:
        for t in taps:
            xor += int(sr[t-1])
        if xor%2 == 0.0:
            xor = 0
        else:
            xor = 1
        print(xor)
        sr, xor = str(xor) + sr[:-1], 0
        print(sr)
        if sr == seed:
            break

lfsr('11001001', (8,7,6,1))  # example
I named "xor" the output of the XOR function, not very correct.
However, this is just meant to show how it circles through its possible states, in fact you noticed the register is represented by a string. Not much logical coherence.
This can be easily turned into a nice toy you can watch for hours (at least I could :-)
def lfsr(seed, taps):
    import time
    sr, xor = seed, 0
    while 1:
        for t in taps:
            xor += int(sr[t-1])
        if xor%2 == 0.0:
            xor = 0
        else:
            xor = 1
        print(xor)
        print('')
        time.sleep(0.75)
        sr, xor = str(xor) + sr[:-1], 0
        print(sr)
        print('')
        time.sleep(0.75)
Then it struck me: what use is this in writing software? I heard it can generate random numbers; is that true? How?
So, it would be nice if someone could:
explain how to use such a device in software development
come up with some code, to support the point above or just like mine to show different ways to do it, in any language
Also, as there's not much didactic stuff around about this piece of logic and digital circuitry, it would be nice if this could be a place for newbies (like me) to get a better understanding of this thing, or better, to understand what it is and how it can be useful when writing software. Should I have made it a community wiki?
That said, if someone feels like golfing... you're welcome.
Since I was looking for an LFSR implementation in Python, I stumbled upon this topic. I found, however, that the following was a bit more accurate for my needs:
def lfsr(seed, mask):
    result = seed
    nbits = mask.bit_length() - 1
    while True:
        result = (result << 1)
        xor = result >> nbits
        if xor != 0:
            result ^= mask
        yield xor, result
The above LFSR generator is based on GF(2^k) modulus calculus (GF = Galois Field). Having just completed an Algebra course, I'm going to explain this the mathematical way.
Let's start by taking, for example, GF(2^4), which equals {a4·x^4 + a3·x^3 + a2·x^2 + a1·x^1 + a0·x^0 | a0, a1, ..., a4 ∈ Z2} (to clarify, Zn = {0,1,...,n-1} and therefore Z2 = {0,1}, i.e. one bit). This means that this is the set of all polynomials of the fourth degree with all factors either being present or not, but having no multiples of these factors (e.g. there's no 2·x^k). x^3, x^4 + x^3, 1 and x^4 + x^3 + x^2 + x + 1 are all examples of members of this group.
We take this set modulo a polynomial of the fourth degree (i.e., P(x) ∈ GF(2^4)), e.g. P(x) = x^4 + x^1 + x^0. This modulus operation on the group is also denoted as GF(2^4) / P(x). For your reference, P(x) describes the 'taps' within the LFSR.
We also take a random polynomial of degree 3 or lower (so that it's not affected by our modulus, otherwise we could just as well perform the modulus operation directly on it), e.g. A0(x) = x^0. Now every subsequent Ai(x) is calculated by multiplying the previous one with x: Ai(x) = A(i-1)(x) · x mod P(x).
Since we are in a finite field, the modulus operation may have an effect, but only when the resulting Ai(x) has at least a factor x^4 (our highest factor in P(x)). Note that, since we are working with numbers in Z2, performing the modulus operation itself is nothing more than determining whether every ai becomes a 0 or 1 by adding the two values from P(x) and Ai(x) together (i.e., 0+0=0, 0+1=1, 1+1=0, or 'xoring' these two).
Every polynomial can be written as a set of bits, for example x^4 + x^1 + x^0 ~ 10011. The A0(x) can be seen as the seed. The 'times x' operation can be seen as a shift-left operation. The modulus operation can be seen as a bit-masking operation, with the mask being our P(x).
The algorithm depicted above therefore generates (an infinite stream of) valid four-bit LFSR patterns. For example, for our defined A0(x) (= x^0) and P(x) (= x^4 + x^1 + x^0), the first yielded results in GF(2^4) are listed below (note that A0 is not yielded until the end of the first round; mathematicians generally start counting at '1'):
i Ai(x) 'x⁴' bit pattern
0 0x³ + 0x² + 0x¹ + 1x⁰ 0 0001 (not yielded)
1 0x³ + 0x² + 1x¹ + 0x⁰ 0 0010
2 0x³ + 1x² + 0x¹ + 0x⁰ 0 0100
3 1x³ + 0x² + 0x¹ + 0x⁰ 0 1000
4 0x³ + 0x² + 1x¹ + 1x⁰ 1 0011 (first time we 'overflow')
5 0x³ + 1x² + 1x¹ + 0x⁰ 0 0110
6 1x³ + 1x² + 0x¹ + 0x⁰ 0 1100
7 1x³ + 0x² + 1x¹ + 1x⁰ 1 1011
8 0x³ + 1x² + 0x¹ + 1x⁰ 1 0101
9 1x³ + 0x² + 1x¹ + 0x⁰ 0 1010
10 0x³ + 1x² + 1x¹ + 1x⁰ 1 0111
11 1x³ + 1x² + 1x¹ + 0x⁰ 0 1110
12 1x³ + 1x² + 1x¹ + 1x⁰ 1 1111
13 1x³ + 1x² + 0x¹ + 1x⁰ 1 1101
14 1x³ + 0x² + 0x¹ + 1x⁰ 1 1001
15 0x³ + 0x² + 0x¹ + 1x⁰ 1 0001 (same as i=0)
Note that your mask must contain a '1' at the fourth position to make sure that your LFSR generates four-bit results. Also note that a '1' must be present at the zeroth position to make sure that your bitstream would not end up with a 0000 bit pattern, or that the final bit would become unused (if all bits are shifted to the left, you would also end up with a zero at the 0th position after one shift).
Not all P(x)'s are necessarily generators for GF(2^k) (i.e., not all masks of k bits generate all 2^(k-1) - 1 numbers). For example, x^4 + x^3 + x^2 + x^1 + x^0 generates 3 groups of 5 distinct polynomials each, or "3 cycles of period 5": 0001,0010,0100,1000,1111; 0011,0110,1100,0111,1110; and 0101,1010,1011,1001,1101. Note that 0000 can never be generated, and can't generate any other number.
Usually, the output of an LFSR is the bit that is 'shifted' out, which is a '1' if the modulus operation is performed and a '0' when it isn't. LFSRs with a period of 2^(k-1) - 1, also called pseudo-noise or PN-LFSRs, adhere to Golomb's randomness postulates, which state, roughly, that this output bit is random 'enough'.
Sequences of these bits therefore have their use in cryptography, for instance in the A5/1 and A5/2 mobile encryption standards, or the E0 Bluetooth standard. However, they are not as secure as one would like: the Berlekamp-Massey algorithm can be used to reverse-engineer the characteristic polynomial (the P(x)) of the LFSR. Strong encryption standards therefore use non-linear FSRs or similar non-linear functions. A related topic is the S-boxes used in AES.
Note that I have used the int.bit_length() operation. This was not implemented until Python 2.7.
If you'd only like a finite bit pattern, you could check whether the seed equals the result and then break your loop.
You can use my LFSR method in a for loop (e.g. for xor, pattern in lfsr(0b001, 0b10011)) or you can repeatedly call next() on the generator (its .next() method in Python 2), which returns a new (xor, result) pair every time.
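As a quick usage sketch, the four-bit table above can be reproduced with the generator like this (seed A0(x) = x^0 → 0b0001, mask P(x) = x^4 + x^1 + x^0 → 0b10011):

for i, (xor, pattern) in enumerate(lfsr(0b0001, 0b10011), start=1):
    print(i, xor, format(pattern, '04b'))
    if i == 15:      # one full period for this mask
        break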
Actually, algorithms based on LFSRs are very common. CRC is directly based on an LFSR. Of course, in computer science classes people talk about polynomials when describing how the input value is XORed with the accumulated value; in electronics engineering we talk about taps instead. They are the same thing, just different terminology.
CRC32 is a very common one. It's used to detect errors in Ethernet frames. That means that when I posted this answer my PC used an LFSR based algorithm to generate a hash of the IP packet so that my router can verify that what it's transmitting isn't corrupted.
Zip and Gzip files are another example. Both use CRC for error detection. Zip uses CRC32 and Gzip uses both CRC16 and CRC32.
CRCs are basically hash functions. And it's good enough to make the internet work. Which means LFSRs are fairly good hash functions. I'm not sure if you know this but in general good hash functions are considered good random number generators. But the thing with LFSR is that selecting the correct taps (polynomials) is very important to the quality of the hash/random number.
Your code is generally toy code since it operates on a string of ones and zeros. In the real world LFSRs work on the bits in a byte. Each byte you push through the LFSR changes the accumulated value of the register. That value is effectively a checksum of all the bytes you've pushed through the register. Two common ways of using that value as a random number are to either use a counter and push a sequence of numbers through the register, thereby transforming the linear sequence 1,2,3,4 into some hashed sequence like 15306,22,5587,994, or to feed the current value back into the register to generate a new number in a seemingly random sequence.
It should be noted that doing this naively with a bit-fiddling LFSR is quite slow since you have to process one bit at a time. So people have come up with ways of using pre-calculated tables to do it eight bits at a time, or even 32 bits at a time. This is why you almost never see LFSR code in the wild; in most production code it masquerades as something else.
But sometimes a plain bit-twiddling LFSR can come in handy. I once wrote a Modbus driver for a PIC micro and that protocol used CRC16. A pre-calculated table requires 256 bytes of memory and my CPU only had 68 bytes (I'm not kidding). So I had to use an LFSR.
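For illustration, here is a minimal bit-at-a-time sketch of that kind of CRC16 (the Modbus flavour with the reflected polynomial 0xA001; written from the general algorithm, not the original PIC code):

def crc16_modbus(data: bytes) -> int:
    # A Galois LFSR clocked once per input bit -- no lookup table needed.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # shift out a 1: apply the tap mask
            else:
                crc >>= 1                   # shift out a 0: no feedback
    return crc

print(hex(crc16_modbus(b"123456789")))      # 0x4b37, the standard check value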
There are many applications of LFSRs. One of them is generating noise; for instance, the SN76489 and variants (used on the Master System, Game Gear, MegaDrive, NeoGeo Pocket, ...) use an LFSR to generate white/periodic noise. There's a really good description of the SN76489's LFSR on this page.
Here is one of my Python libraries, pylfsr, to implement an LFSR. I have tried to make it efficient so that it can handle an LFSR of any length to generate the binary sequence.
import numpy as np
from pylfsr import LFSR

# for a 5-bit LFSR with polynomial x^5 + x^4 + x^3 + x^2 + 1
seed = [0, 0, 0, 1, 0]
fpoly = [5, 4, 3, 2]
L = LFSR(fpoly=fpoly, initstate=seed)
seq = L.runKCycle(10)
You can display all the info at each step too:
state = [1, 1, 1]
fpoly = [3, 2]
L = LFSR(initstate=state, fpoly=fpoly, counter_start_zero=False)
print('count \t state \t\toutbit \t seq')
print('-'*50)
for _ in range(15):
    print(L.count, L.state, '', L.outbit, L.seq, sep='\t')
    L.next()
print('-'*50)
print('Output: ', L.seq)
Output
count state outbit seq
--------------------------------------------------
1 [1 1 1] 1 [1]
2 [0 1 1] 1 [1 1]
3 [0 0 1] 1 [1 1 1]
4 [1 0 0] 0 [1 1 1 0]
5 [0 1 0] 0 [1 1 1 0 0]
6 [1 0 1] 1 [1 1 1 0 0 1]
7 [1 1 0] 0 [1 1 1 0 0 1 0]
8 [1 1 1] 1 [1 1 1 0 0 1 0 1]
9 [0 1 1] 1 [1 1 1 0 0 1 0 1 1]
10 [0 0 1] 1 [1 1 1 0 0 1 0 1 1 1]
11 [1 0 0] 0 [1 1 1 0 0 1 0 1 1 1 0]
12 [0 1 0] 0 [1 1 1 0 0 1 0 1 1 1 0 0]
13 [1 0 1] 1 [1 1 1 0 0 1 0 1 1 1 0 0 1]
14 [1 1 0] 0 [1 1 1 0 0 1 0 1 1 1 0 0 1 0]
--------------------------------------------------
Output: [1 1 1 0 0 1 0 1 1 1 0 0 1 0 1]
The LFSR and its sequence can also be visualized; check out the documentation here.
To make it really elegant and Pythonic, try to create a generator, yield-ing successive values from the LFSR. Also, comparing to a floating point 0.0 is unnecessary and confusing.
An LFSR is just one of many ways to create pseudo-random numbers in computers. Pseudo-random, because these numbers aren't really random: you can easily repeat them by starting with the same seed (initial value) and proceeding with the same mathematical operations.
Below is a variation on your code using integers and binary operators instead of strings. It also uses yield as someone suggested.
def lfsr2(seed, taps):
    sr = seed
    nbits = 8
    while 1:
        xor = 1
        for t in taps:
            if (sr & (1 << (t-1))) != 0:
                xor ^= 1
        sr = (xor << nbits-1) + (sr >> 1)
        yield xor, sr
        if sr == seed:
            break

nbits = 8
for xor, sr in lfsr2(0b11001001, (8, 7, 6, 1)):
    print(xor, bin(2**nbits + sr)[3:])
Here is a piece of code where you can choose your seed, the number of bits and the taps you want:
from functools import reduce

def lfsr(seed=1, bits=8, taps=[8, 6, 5, 4]):
    """
        1 2 3 4 5 6 7 8        (bits == 8)
       ┌─┬─┬─┬─┬─┬─┬─┬─┐
    ┌─→│0│1│0│1│0│0│1│1├─→
    │  └─┴─┴─┴┬┴┬┴─┴┬┴─┘
    └──────XOR┘ │   │
                └XOR┘          (taps == 7, 5, 4)
    """
    taps = [bits - tap for tap in taps]
    r = seed & (1 << bits) - 1
    while True:
        tap_bits = [(r >> tap) & 1 for tap in taps]
        bit = reduce(lambda x, y: x ^ y, tap_bits)
        yield r & 1
        r &= (1 << bits) - 1
        r = (r >> 1) | (bit << (bits - 1))
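A possible usage sketch (the seed and taps here are just example values):

gen = lfsr(seed=0b01010011, bits=8, taps=[8, 6, 5, 4])
first_bits = [next(gen) for _ in range(16)]   # take the first 16 output bits
print(first_bits)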
If we assume that seed is a list of ints rather than a string (or convert it if it is not) then the following should do what you want with a bit more elegance:
def lfsr(seed, taps):
    while True:
        nxt = sum([seed[x] for x in taps]) % 2
        yield nxt
        seed = ([nxt] + seed)[:max(taps)+1]
Example:

for x in lfsr([1,0,1,1,1,0,1,0,0], [1,5,6]):
    print(x)
# Fibonacci-style LFSR with plain lists: list_init is the register,
# list_coeff selects which cells feed the XOR (sum mod 2).
list_init = [1, 0, 1, 1]
list_coeff = [1, 1, 0, 0]
out = []
for i in range(15):
    list_init.append(sum([list_init[j] * list_coeff[j] for j in range(len(list_coeff))]) % 2)
    out.append(list_init.pop(0))
print(out)
#https://www.rocq.inria.fr/secret/Anne.Canteaut/encyclopedia.pdf
This class provides an easy-to-use LFSR generator object:
import numpy as np

class lfsr:
    def __init__(self, seed=1, nbits=8, taps=(0, 1, 5, 6)):  # arbitrary taps may not work well; I suggest looking for a standard configuration
        self.seed0 = seed
        self.seed = seed
        self.nbits = nbits
        self.bmask = (2**nbits) - 1
        self.taps = taps

    def next_rnd(self):
        b_in = 0
        for t in self.taps:
            o = 2**t
            b_in ^= (o & self.seed) >> t
        self.seed = (self.seed >> 1) | (b_in << (self.nbits - 1))
        self.seed = self.seed & self.bmask
        return self.seed

    def print_s(self):
        print(self.seed)

    def get_rnd_array(self, seed=None):
        self.seed = seed if seed is not None else self.seed
        arr = np.zeros((2**self.nbits))
        for i in range(2**self.nbits):
            arr[i] = self.next_rnd()
        return arr

    def get_double_rnd_array_circular(self, seed=None):  # ref.: Compact and Accurate Stochastic Circuits with Shared Random Number Sources
        k = int(self.nbits / 2)
        self.seed = seed if seed is not None else self.seed
        arr0 = np.zeros((2**self.nbits))
        arr1 = np.zeros((2**self.nbits))
        for i in range(2**self.nbits):
            rnd = self.next_rnd()
            arr0[i] = rnd
            rnd_p0 = rnd >> k
            rnd_p1 = (rnd & (2**k - 1)) << k
            rnd_p2 = rnd_p1 | rnd_p0
            arr1[i] = rnd_p2
        return arr0, arr1
l = lfsr(1, 4, (0,1))
print(l.get_rnd_array(11))
print(l.get_double_rnd_array_circular(11))