I am wondering how many times this while loop would execute. This is a function that adds two numbers using XOR and AND.
def Add(x, y):
    # Iterate till there is no carry
    while (y != 0):
        # carry now contains common
        # set bits of x and y
        carry = x & y
        # Sum of bits of x and y where at
        # least one of the bits is not set
        x = x ^ y
        # Carry is shifted by one so that
        # adding it to x gives the required sum
        y = carry << 1
    return x
Algorithm for No Carry Adder:
function no_carry_adder(A, B):
    while B != 0:
        X = A XOR B    # bitwise XOR of A and B
        Y = A AND B    # bitwise AND of A and B
        A = X
        B = Y << 1     # multiplying Y by 2
    return A
As you can see, the while loop executes those four instructions again and again until B = 0; once B = 0, the binary number stored in A is the answer.
The question is to find out how many times the while loop will execute before B becomes zero.
The naive way, i.e. implementing the algorithm exactly as described in any programming language and counting iterations, gives an answer, but it is time-consuming when the binary strings A and B are more than 500 bits long.
How can I make a faster algorithm?
Let's take a look at the different cases:
Case 1: When both A = B = 0.
In this case the number of times the loop iterates = 0 as B = 0.
Case 2: When A != 0 and B = 0.
In this case also the number of times the loop iterates = 0 as B = 0.
Case 3: When A = 0 and B != 0.
In this case, the number of times the loop iterates = 1. After the first iteration X = B, because the bitwise XOR of any number with 0 is the number itself, and Y = 0, because the bitwise AND of any number with 0 is 0. Since Y = 0, we get B = Y << 1 = 0 and the loop stops.
Case 4: When A = B and A != 0 and B != 0.
In this case, the number of times the loop iterates = 2. In the first iteration A becomes 0, because the bitwise XOR of two equal numbers is always 0, and Y = B, because the bitwise AND of two equal numbers is the number itself, and then B = Y << 1. After the first iteration A = 0 and B != 0, so this becomes Case 3; hence the number of iterations is always 2.
Case 5: When A != B, A != 0 and B != 0.
In this case, the number of times the loop iterates = length of the longest carry-chain.
Algorithm to calculate the length of the longest carry-chain:
First make both the binary strings A and B of equal length if they are not.
As we know the length of the longest carry sequence will be the answer, I just need to find the maximum carry-sequence length encountered so far. To compute that,
I will iterate over the bits from LSB to MSB and:
if carry + A[i] + B[i] == 2, this bit starts (or extends) a carry sequence, so increment curr_carry_sequence and set carry = 1.
if carry + A[i] + B[i] == 3, the carry forwarded by the previous bit is consumed here and this bit generates a new carry, so the carry-sequence length resets to 1, i.e. curr_carry_sequence = 1 and carry = 1.
if carry + A[i] + B[i] == 1 or 0, any carry generated by the previous bit resolves here and the carry sequence ends, so curr_carry_sequence = 0 and carry = 0.
After each bit, if curr_carry_sequence > max_carry_sequence, update max_carry_sequence.
The answer is max_carry_sequence + 1.
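A minimal sketch of the procedure above, for the non-trivial Case 5 (the helper and variable names are mine, not from the linked solution):

def loop_iterations(A: str, B: str) -> int:
    # A and B are binary strings; returns the number of while-loop iterations
    # for Case 5 (A != B, both non-zero), following the rules described above.
    n = max(len(A), len(B))
    A, B = A.zfill(n), B.zfill(n)
    carry = 0
    curr = 0   # curr_carry_sequence
    best = 0   # max_carry_sequence
    for a, b in zip(reversed(A), reversed(B)):   # LSB to MSB
        s = carry + int(a) + int(b)
        if s == 2:
            curr += 1
            carry = 1
        elif s == 3:
            curr = 1
            carry = 1
        else:          # s is 0 or 1
            curr = 0
            carry = 0
        best = max(best, curr)
    return best + 1

print(loop_iterations('001', '111'))  # 4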
For Source-code refer to No Carry Adder Solution.
P.S. For an average-case analysis of the No-Carry Adder you can refer to the paper: Average Case Analysis of No Carry Adder: Addition in log(n) + O(1) Steps on Average: A Simple Analysis.
There is no fixed answer to how many times the while loop is executed. The while loop runs once for every position a carry is propagated from one bit to the next, so you need to know exactly what the numbers look like in binary. What you can say with certainty is the maximum possible number of executions: it is the bit length of the bigger number + 1, because that is the furthest a carry can travel. Let's take add(1,7) = 8 (001 + 111 = 1000). The carry from the first bit is passed to the second position, then to the third and then to the fourth: 4 iterations, which is the bit length of 7 (3 bits) plus 1.
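A quick brute-force check of that bound (the helper below is my own, not part of the answer):

def iterations(x, y):
    # count how many times the while loop in Add(x, y) runs
    n = 0
    while y != 0:
        x, y = x ^ y, (x & y) << 1
        n += 1
    return n

print(iterations(1, 7))  # 4 == 7.bit_length() + 1
assert all(iterations(a, b) <= max(a.bit_length(), b.bit_length()) + 1
           for a in range(64) for b in range(64))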
Given a binary number, I need to write a function to count the total steps reaching zero. The rules are:
If the number is even, divide it by 2
If the number is odd, subtract 1 from it
for example, it takes six iterations for "1110" (14) to become 0:
14 / 2 = 7
7 - 1 = 6
6 / 2 = 3
3 - 1 = 2
2 / 2 = 1
1 - 1 = 0
I have come up with a naive solution that does the calculations directly, but this algorithm cannot handle very large numbers.
def test(x):
    a = int(x, 2)
    steps = 0
    while a != 0:
        if a % 2 == 0:
            a = a // 2
        else:
            a = a - 1
        steps += 1
    return steps
test("1000")
Out[65]: 4
test("101")
Out[66]: 4
test("001")
Out[67]: 1
test("0010010001")
Out[68]: 10
test("001001")
Out[69]: 5
what I need to know: How can I avoid doing the calculation and have an algorithm that is fast / can handle big numbers?
Assuming your code is correct and the rule is:
test(0) = 0
test(n) = 1 + test(n / 2) if n is even;
1 + test(n − 1) otherwise
the important thing to notice is that:
an even number ends with a binary 0
dividing by 2 removes the 0 from the end (and nothing else)
an odd number ends with a binary 1
subtracting 1 turns the last 1 to a 0 (and nothing else)
So every 1 bit except for the first one adds 2 steps, and every significant 0 bit adds 1 step. That means for inputs that start with 1, you can write:
def test(x):
    return x.count('1') + len(x) - 1
Now you just need to account for leading zeros, or just the specific case of "0" if leading zeros aren’t possible.
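A quick way to convince yourself (the brute-force comparison below is my own addition):

def test_fast(x):
    return x.count('1') + len(x) - 1

def test_naive(x):
    # reference implementation straight from the question
    a = int(x, 2)
    steps = 0
    while a != 0:
        a = a // 2 if a % 2 == 0 else a - 1
        steps += 1
    return steps

# check every value up to 10 bits, written without leading zeros
assert all(test_fast(format(n, 'b')) == test_naive(format(n, 'b')) for n in range(1, 1024))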
I had this question on a coding test today, I had 40 mins to complete the test. Unfortunately, I only came up with a good solution after the timer had reached the limit.
You do not need to calculate the divisions and the subtractions(!). You can iterate over the characters of S: if the character is a 1, two steps are required; if the character is a 0, only one step is required.
If there is a 1 at the end, you will subtract 1
If there is a 0 at the end, you can divide by two and the number will shift to the right.
The first character is an exception (S[0])
Here is the solution:
def iterate_string(S: str):
    acc = 0
    for c in S:
        if c == "0":
            acc += 1
        else:
            acc += 2
    acc -= 1  # the very first 1 is only +1, thus -1
    return acc
Here is an example:
1001 (9) - 1 = 1000 (8)
1000 (8) / 2 = 100 (4)
100 (4) / 2 = 10 (2)
10 (2) / 2 = 1
1 - 1 = 0
# First digit (rightmost), requires two steps:
   |
1001
# Second digit, requires one step:
  |
1001
# Third digit, requires one step:
 |
1001
# S[0] is 1, but requires only one step:
|
1001
=> total of 5 steps:
0: 1001 # (-1)
1: 1000 # (/2)
2: 100 # (/2)
3: 10 # (/2)
4: 1 # (-1)
5: 0
Good luck to the next person who is having the same challenge! :)
Here is the naive solution that can't handle big numbers:
def do_calculations(S: str):
    decimal_value = int(S, 2)
    iterations = 0
    while decimal_value > 0:
        if decimal_value % 2 == 1:
            decimal_value = decimal_value - 1
        else:
            decimal_value = decimal_value // 2  # integer division keeps large values exact
        iterations += 1
    return iterations
Your algorithm isn't correct for odd numbers. You are only dividing when the number is even, which is not how you described the "steps."
You want:
def test(x):
    x_int = int(x, 2)
    steps = 0
    while x_int > 0:
        x_int //= 2
        x_int -= 1
        steps += 1
    return steps
You should clarify your algorithm: the way you described it, you're not guaranteed to converge to 0 for all inputs; it is an infinite loop for odd numbers. Just try 1:
#test(1)
1 // 2 = 0
0 - 1 = -1
...
Now you will never get to 0, which is why you should check for x_int <= 0.
I suggest you reconsider why you want to do this anyway. I'm fairly certain that you don't even need an iterative algorithm to know how many "steps" are required; there should just be a mathematical formula for this.
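For the rule exactly as stated in the question (divide when even, subtract 1 when odd), one such closed form is the bit length plus the popcount minus one; this snippet is my own addition, not part of the answer:

def steps(n: int) -> int:
    # n > 0: each significant 0 costs one division, each 1 costs a subtraction
    # plus a division, except the leading 1, which only costs the final subtraction
    return n.bit_length() + bin(n).count("1") - 1

print(steps(14))  # 6, matching the worked example in the question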
You could also use a recursive approach:
def stepsToZero(N):
    return N if N < 2 else 2 + stepsToZero(N//2-1)
This will get you results up to N = 2**993 (which is quite a big number) with a very concise (and imho more elegant) function.
What would run much faster would be to solve this mathematically
For example:
import math

def steps2Zero(N):
    if N < 2:
        return N
    d = int(math.log(N+2, 2)) - 1
    s = int(N >= 3*2**d - 2)
    return 2*d + s
Note that for N = 2^900 the mathematical solution is a hundred times faster than the recursion. On the other hand, the recursive function responds in well under a second and is a lot more readable. So, depending on how this would be used and on what size of numbers, performance considerations are likely moot.
If the input number is in binary (or you convert it to binary first), then you can implement the function simply:
def solution(s):  # 's' should be a binary string, e.g. "011100"
    while s[0] == "0":
        s = s[1:]
    ones = s.count('1')
    zeros = s.count('0')
    return ones*2 + zeros - 1
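For example, checking it against the test cases from the question (my own quick check):

print(solution("1110"))        # 6
print(solution("001001"))      # 5
print(solution("0010010001"))  # 10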
Python seems to have trouble returning the correct value for numbers to the power of zero.
When I give it a literal equation, it works properly, but it always returns positive 1 for anything more complex than a raw number to the zeroeth.
Here are some tests:
>>> -40 ** 0 # this is the correct result
-1
>>> (0 - 40) ** 0 # you'd expect this to give the same thing, but...
1
>>> a = -40 # let's try something else...
>>> a ** 0
1
>>> int(-40) ** 0 # this oughtn't to change anything, yet...
1
>>> -66.6 ** 0 # raw floats are fine.
-1.0
>>> (0 - 66.6) ** 0.0 # ...until you try and do something with them.
1.0
UPDATE: pow() gives this result, too, so probably the first result is exceptional...
>>> pow(-60, 0)
1
Could it be some problem with signed integers? I need this for a trinary switch with values 1, -1, or 0, depending on whether an input is any positive or negative value, or zero. I could accomplish the same thing with something like:
if val > 0: switch = 1
elif val < 0: switch = -1
else: switch = 0
...and then using the variable switch for my purposes.
But that wouldn't answer the question I have about how Python deals with zero-powers.
(I will also accept that -40 ** 0 only returns -1 by accident (phenomenally), but I doubt this is the case...)
Python is correct and doing what you would expect it to do. It is a matter of operator precedence. Any number (negative or positive) raised to the zeroth power is equal to 1. But keep in mind that exponentiation binds more tightly than the unary minus, much like multiplication comes before subtraction. So, in more detail, what Python sees is this:
1st case:
-40 ** 0 = -(40 ** 0) = -(1) = -1
2nd case:
(0 - 40) ** 0 = (-40) ** 0 = 1
In the 5th case as well it has to do with the parentheses
int(-40) ** 0 = (-40) ** 0 = 1
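To see the same thing with a name bound to a negative value (a small illustrative snippet of my own):

x = -40
print(-40 ** 0)    # -1: parsed as -(40 ** 0)
print((-40) ** 0)  # 1: the parentheses make the base negative
print(x ** 0)      # 1: x already holds -40, so there is no unary minus left to apply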
Just stumbled upon this question; I don't get the syntax. But I don't think that (-40)^0 = -40^0.
On the left side, the exponentiation is the last operation. This is why the left side should equal 1.
On the right side, the minus sign is the last operation. This is why the result should be -1.
There is no problem here: any number raised to the power 0 is 1.
In Python, unary signs like - and + have lower precedence than the power operator (**), so when you put the subtraction inside parentheses, as in (0 - 40)**0, the whole negative number is raised to the power and you get 1; but when you write -1**0, Python first computes 1**0 and then applies the minus:
>>> (0-4)**0 == (-1)**0 == 1
>>> -1**0 == -(1**0) == -1
Lately I have bumped repeatedly into the concept of the LFSR, which I find quite interesting because of its links with different fields and also fascinating in itself. It took me some effort to understand; the final help was this really good page, much better than the (at first) cryptic Wikipedia entry. So I wanted to write some small code for a program that worked like an LFSR. To be more precise, that somehow showed how an LFSR works. Here's the cleanest thing I could come up with after some lengthier attempts (Python):
def lfsr(seed, taps):
    sr, xor = seed, 0
    while 1:
        for t in taps:
            xor += int(sr[t-1])
        if xor%2 == 0.0:
            xor = 0
        else:
            xor = 1
        print(xor)
        sr, xor = str(xor) + sr[:-1], 0
        print(sr)
        if sr == seed:
            break

lfsr('11001001', (8,7,6,1))  # example
I named "xor" the output of the XOR function, not very correct.
However, this is just meant to show how it circles through its possible states, in fact you noticed the register is represented by a string. Not much logical coherence.
This can be easily turned into a nice toy you can watch for hours (at least I could :-)
def lfsr(seed, taps):
    import time
    sr, xor = seed, 0
    while 1:
        for t in taps:
            xor += int(sr[t-1])
        if xor%2 == 0.0:
            xor = 0
        else:
            xor = 1
        print(xor)
        print('')
        time.sleep(0.75)
        sr, xor = str(xor) + sr[:-1], 0
        print(sr)
        print('')
        time.sleep(0.75)
Then it struck me: what use is this in writing software? I heard it can generate random numbers; is that true? How?
So, it would be nice if someone could:
explain how to use such a device in software development
come up with some code, to support the point above or just like mine to show different ways to do it, in any language
Also, as there's not much didactic stuff around about this piece of logic and digital circuitry, it would be nice if this could be a place for newbies (like me) to get a better understanding of this thing, or better, to understand what it is and how it can be useful when writing software. Should I have made it a community wiki?
That said, if someone feels like golfing... you're welcome.
Since I was looking for a LFSR-implementation in Python, I stumbled upon this topic. I found however that the following was a bit more accurate according to my needs:
def lfsr(seed, mask):
    result = seed
    nbits = mask.bit_length() - 1
    while True:
        result = result << 1
        xor = result >> nbits
        if xor != 0:
            result ^= mask
        yield xor, result
The above LFSR generator is based on GF(2ᵏ) modulus calculus (GF = Galois Field). Having just completed an algebra course, I'm going to explain this the mathematical way.
Let's start by taking, for example, GF(2⁴), which equals {a₄x⁴ + a₃x³ + a₂x² + a₁x¹ + a₀x⁰ | a₀, a₁, ..., a₄ ∈ Z₂} (to clarify, Zₙ = {0,1,...,n-1} and therefore Z₂ = {0,1}, i.e. one bit). This is the set of all polynomials of at most the fourth degree with every term either present or not, but with no multiples of these terms (e.g. there's no 2xᵏ). x³, x⁴ + x³, 1 and x⁴ + x³ + x² + x + 1 are all examples of members of this group.
We take this set modulo a polynomial of the fourth degree (i.e., P(x) ∈ GF(2⁴)), e.g. P(x) = x⁴ + x¹ + x⁰. This modulus operation on the group is also denoted as GF(2⁴)/P(x). For your reference, P(x) describes the 'taps' within the LFSR.
We also take a random polynomial of degree 3 or lower (so that it's not affected by our modulus; otherwise we could just as well perform the modulus operation on it directly), e.g. A₀(x) = x⁰. Now every subsequent Aᵢ(x) is calculated by multiplying it by x: Aᵢ(x) = Aᵢ₋₁(x) · x mod P(x).
Since we are in a limited field, the modulus operation may have an effect, but only when the resulting Aᵢ(x) has at least a factor x⁴ (our highest factor in P(x)). Note that, since we are working with numbers in Z₂, performing the modulus operation itself is nothing more than determining whether every aᵢ becomes a 0 or a 1 by adding the two values from P(x) and Aᵢ(x) together (i.e., 0+0=0, 0+1=1, 1+1=0, or 'xoring' the two).
Every polynomial can be written as a set of bits, for example x⁴ + x¹ + x⁰ ~ 10011. The A₀(x) can be seen as the seed. The 'times x' operation can be seen as a shift-left operation. The modulus operation can be seen as a bit-masking operation, with the mask being our P(x).
The algorithm depicted above therefore generates (an infinite stream of) valid four-bit LFSR patterns. For example, for our defined A₀(x) (= x⁰) and P(x) (= x⁴ + x¹ + x⁰), we get the following first yielded results in GF(2⁴) (note that A₀ is not yielded until the end of the first round -- mathematicians generally start counting at '1'):
i    Aᵢ(x)                    'x⁴'   bit pattern
0    0x³ + 0x² + 0x¹ + 1x⁰    0      0001  (not yielded)
1    0x³ + 0x² + 1x¹ + 0x⁰    0      0010
2    0x³ + 1x² + 0x¹ + 0x⁰    0      0100
3    1x³ + 0x² + 0x¹ + 0x⁰    0      1000
4    0x³ + 0x² + 1x¹ + 1x⁰    1      0011  (first time we 'overflow')
5    0x³ + 1x² + 1x¹ + 0x⁰    0      0110
6    1x³ + 1x² + 0x¹ + 0x⁰    0      1100
7    1x³ + 0x² + 1x¹ + 1x⁰    1      1011
8    0x³ + 1x² + 0x¹ + 1x⁰    1      0101
9    1x³ + 0x² + 1x¹ + 0x⁰    0      1010
10   0x³ + 1x² + 1x¹ + 1x⁰    1      0111
11   1x³ + 1x² + 1x¹ + 0x⁰    0      1110
12   1x³ + 1x² + 1x¹ + 1x⁰    1      1111
13   1x³ + 1x² + 0x¹ + 1x⁰    1      1101
14   1x³ + 0x² + 0x¹ + 1x⁰    1      1001
15   0x³ + 0x² + 0x¹ + 1x⁰    1      0001  (same as i=0)
Note that your mask must contain a '1' at the fourth position to make sure that your LFSR generates four-bit results. Also note that a '1' must be present at the zeroth position to make sure that your bitstream would not end up with a 0000 bit pattern, or that the final bit would become unused (if all bits are shifted to the left, you would also end up with a zero at the 0th position after one shift).
Not all P(x)'s are necessarily generators for GF(2ᵏ) (i.e., not all masks of k bits generate all 2ᵏ − 1 non-zero numbers). For example, x⁴ + x³ + x² + x¹ + x⁰ generates 3 groups of 5 distinct polynomials each, or "3 cycles of period 5": 0001,0010,0100,1000,1111; 0011,0110,1100,0111,1110; and 0101,1010,1011,1001,1101. Note that 0000 can never be generated, and can't generate any other number.
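As a quick sanity check, the generator above reproduces exactly those three cycles (the cycle() helper is my own):

def cycle(seed, mask=0b11111):   # mask 0b11111 is x⁴ + x³ + x² + x¹ + x⁰
    states = [seed]
    for _, result in lfsr(seed, mask):
        if result == seed:
            break
        states.append(result)
    return [format(s, '04b') for s in states]

print(cycle(0b0001))  # ['0001', '0010', '0100', '1000', '1111']
print(cycle(0b0011))  # ['0011', '0110', '1100', '0111', '1110']
print(cycle(0b0101))  # ['0101', '1010', '1011', '1001', '1101']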
Usually, the output of an LFSR is the bit that is 'shifted' out, which is a '1' if the modulus operation is performed and a '0' when it isn't. LFSRs with a period of 2ᵏ − 1, also called pseudo-noise or PN-LFSRs, adhere to Golomb's randomness postulates, which say, roughly, that this output bit is random 'enough'.
Sequences of these bits therefore have their use in cryptography, for instance in the A5/1 and A5/2 mobile encryption standards, or the E0 Bluetooth standard. However, they are not as secure as one would like: the Berlekamp-Massey algorithm can be used to reverse-engineer the characteristic polynomial (the P(x)) of the LFSR. Strong encryption standards therefore use non-linear FSRs or similar non-linear functions. A related topic is the S-boxes used in AES.
Note that I have used the int.bit_length() operation. This was not implemented until Python 2.7.
If you'd only like a finite bit pattern, you could check whether the seed equals the result and then break your loop.
You can use my LFSR method in a for loop (e.g. for xor, pattern in lfsr(0b001, 0b10011)) or you can repeatedly call the .next() operation on the result of the method, returning a new (xor, result) pair every time.
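For instance, reproducing the first rows of the table above (seed 0b0001, mask 0b10011); this usage loop is my own illustration:

gen = lfsr(0b0001, 0b10011)
for i in range(1, 16):
    xor, pattern = next(gen)
    print(i, xor, format(pattern, '04b'))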
Actually, algorithms based on LFSRs are very common. CRC is directly based on an LFSR. Of course, in computer science classes people talk about polynomials when they describe how the input value is supposed to be XORed with the accumulated value, while in electronics engineering we talk about taps instead. They are the same thing, just different terminology.
CRC32 is a very common one. It's used to detect errors in Ethernet frames. That means that when I posted this answer my PC used an LFSR based algorithm to generate a hash of the IP packet so that my router can verify that what it's transmitting isn't corrupted.
Zip and Gzip files are another example. Both use CRC for error detection. Zip uses CRC32 and Gzip uses both CRC16 and CRC32.
CRCs are basically hash functions. And they're good enough to make the internet work, which means LFSRs are fairly good hash functions. I'm not sure if you know this, but in general good hash functions are considered good random number generators. The thing with an LFSR is that selecting the correct taps (polynomials) is very important to the quality of the hash/random number.
Your code is essentially toy code since it operates on a string of ones and zeros. In the real world, LFSRs work on the bits in a byte. Each byte you push through the LFSR changes the accumulated value of the register. That value is effectively a checksum of all the bytes you've pushed through the register. Two common ways of using that value as a random number are either to use a counter and push a sequence of numbers through the register, thereby transforming the linear sequence 1,2,3,4 into some hashed sequence like 15306,22,5587,994, or to feed the current value back into the register to generate a new number in a seemingly random sequence.
It should be noted that doing this naively with a bit-fiddling LFSR is quite slow, since you have to process one bit at a time. So people have come up with ways of using pre-calculated tables to do it eight bits at a time, or even 32 bits at a time. This is why you almost never see LFSR code in the wild; in most production code it masquerades as something else.
But sometimes a plain bit-twiddling LFSR can come in handy. I once wrote a Modbus driver for a PIC micro and that protocol used CRC16. A pre-calculated table requires 256 bytes of memory and my CPU only had 68 bytes (I'm not kidding). So I had to use an LFSR.
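For illustration, here is a minimal bit-at-a-time CRC-16 in the Modbus flavour (my own sketch, not the driver code from this answer); the 16-bit register is updated exactly like an LFSR, with the taps encoded in the reflected polynomial value 0xA001:

def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                  # process one bit at a time
            if crc & 1:                     # the bit shifted out decides whether to XOR in the taps
                crc = (crc >> 1) ^ 0xA001   # reflected form of polynomial 0x8005
            else:
                crc >>= 1
    return crc

print(hex(crc16_modbus(b"123456789")))  # standard check input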
There are many applications of LFSRs. One of them is generating noise, for instance the SN76489 and variants (used on the Master System, Game Gear, MegaDrive, NeoGeo Pocket, ...) use a LFSR to generate white/periodic noise. There's a really good description of SN76489's LFSR in this page.
Here is one of my Python libraries, pylfsr, to implement an LFSR. I have tried to make it efficient so that it can handle any length of LFSR when generating the binary sequence.
import numpy as np
from pylfsr import LFSR
# for a 5-bit LFSR with polynomial x^5 + x^4 + x^3 + x^2 + 1
seed = [0,0,0,1,0]
fpoly = [5,4,3,2]
L = LFSR(fpoly=fpoly, initstate=seed)
seq = L.runKCycle(10)
You can display all the info at each step too:
state = [1,1,1]
fpoly = [3,2]
L = LFSR(initstate=state, fpoly=fpoly, counter_start_zero=False)
print('count \t state \t\toutbit \t seq')
print('-'*50)
for _ in range(15):
    print(L.count, L.state, '', L.outbit, L.seq, sep='\t')
    L.next()
print('-'*50)
print('Output: ', L.seq)
Output
count state outbit seq
--------------------------------------------------
1 [1 1 1] 1 [1]
2 [0 1 1] 1 [1 1]
3 [0 0 1] 1 [1 1 1]
4 [1 0 0] 0 [1 1 1 0]
5 [0 1 0] 0 [1 1 1 0 0]
6 [1 0 1] 1 [1 1 1 0 0 1]
7 [1 1 0] 0 [1 1 1 0 0 1 0]
8 [1 1 1] 1 [1 1 1 0 0 1 0 1]
9 [0 1 1] 1 [1 1 1 0 0 1 0 1 1]
10 [0 0 1] 1 [1 1 1 0 0 1 0 1 1 1]
11 [1 0 0] 0 [1 1 1 0 0 1 0 1 1 1 0]
12 [0 1 0] 0 [1 1 1 0 0 1 0 1 1 1 0 0]
13 [1 0 1] 1 [1 1 1 0 0 1 0 1 1 1 0 0 1]
14 [1 1 0] 0 [1 1 1 0 0 1 0 1 1 1 0 0 1 0]
--------------------------------------------------
Output: [1 1 1 0 0 1 0 1 1 1 0 0 1 0 1]
The states and the sequence can also be visualized; check out the documentation here.
To make it really elegant and Pythonic, try to create a generator, yield-ing successive values from the LFSR. Also, comparing to a floating point 0.0 is unnecessary and confusing.
An LFSR is just one of many ways to create pseudo-random numbers in computers. Pseudo-random, because these numbers aren't really random: you can easily repeat them by starting with the seed (initial value) and proceeding with the same mathematical operations.
Below is a variation on your code using integers and binary operators instead of strings. It also uses yield as someone suggested.
def lfsr2(seed, taps):
    sr = seed
    nbits = 8
    while 1:
        xor = 1
        for t in taps:
            if (sr & (1 << (t-1))) != 0:
                xor ^= 1
        sr = (xor << nbits-1) + (sr >> 1)
        yield xor, sr
        if sr == seed:
            break

nbits = 8
for xor, sr in lfsr2(0b11001001, (8,7,6,1)):
    print(xor, bin(2**nbits + sr)[3:])
Here is a piece of code where you can choose your seed, the number of bits and the taps you want:
from functools import reduce

def lfsr(seed=1, bits=8, taps=[8, 6, 5, 4]):
    """
        1 2 3 4 5 6 7 8    (bits == 8)
       ┌─┬─┬─┬─┬─┬─┬─┬─┐
    ┌─→│0│1│0│1│0│0│1│1├─→
    │  └─┴─┴─┴┬┴┬┴─┴┬┴─┘
    └──────XOR┘ │   │
            └──XOR──┘      (taps == 7, 5, 4)
    """
    taps = [bits - tap for tap in taps]
    r = seed & (1 << bits) - 1
    while 1:
        tap_bits = [(r >> tap) & 1 for tap in taps]
        bit = reduce(lambda x, y: x ^ y, tap_bits)
        yield r & 1
        r &= (1 << bits) - 1
        r = (r >> 1) | (bit << (bits - 1))
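A possible usage example (my own; it just prints the first few output bits):

gen = lfsr(seed=0b01010011, bits=8, taps=[8, 6, 5, 4])
print([next(gen) for _ in range(16)])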
If we assume that seed is a list of ints rather than a string (or convert it if it is not) then the following should do what you want with a bit more elegance:
def lfsr(seed, taps):
    while True:
        nxt = sum([seed[x] for x in taps]) % 2
        yield nxt
        seed = ([nxt] + seed)[:max(taps)+1]

Example:

for x in lfsr([1,0,1,1,1,0,1,0,0], [1,5,6]):
    print(x)
# A Fibonacci-style LFSR on plain Python lists: list_init is the register,
# list_coeff marks which cells feed the XOR (implemented as a sum mod 2).
list_init = [1,0,1,1]
list_coeff = [1,1,0,0]
out = []
for i in range(15):
    # next bit = XOR of the tapped register cells
    list_init.append(sum([list_init[j]*list_coeff[j] for j in range(len(list_init))]) % 2)
    # shift: emit the oldest bit and drop it from the register
    out.append(list_init.pop(0))
print(out)
# https://www.rocq.inria.fr/secret/Anne.Canteaut/encyclopedia.pdf
This class provides an easy-to-use LFSR generator object:
import numpy as np

class lfsr:
    def __init__(self, seed=1, nbits=8, taps=(0, 1, 5, 6)):
        # different taps may not work well. I suggest looking for a standard configuration
        self.seed0 = seed
        self.seed = seed
        self.nbits = nbits
        self.bmask = (2**nbits) - 1
        self.taps = taps

    def next_rnd(self):
        b_in = 0
        for t in self.taps:
            o = 2**t
            b_in ^= (o & self.seed) >> t
        self.seed = (self.seed >> 1) | (b_in << (self.nbits-1))
        self.seed = self.seed & self.bmask
        return self.seed

    def print_s(self):
        print(self.seed)

    def get_rnd_array(self, seed=None):
        self.seed = seed if seed is not None else self.seed
        arr = np.zeros((2**self.nbits))
        for i in range(2**self.nbits):
            arr[i] = self.next_rnd()
        return arr

    def get_double_rnd_array_circular(self, seed=None):
        # ref.: Compact and Accurate Stochastic Circuits with Shared Random Number Sources
        k = int(self.nbits/2)
        self.seed = seed if seed is not None else self.seed
        arr0 = np.zeros((2**self.nbits))
        arr1 = np.zeros((2**self.nbits))
        for i in range(2**self.nbits):
            rnd = self.next_rnd()
            arr0[i] = rnd
            rnd_p0 = rnd >> k
            rnd_p1 = (rnd & (2**k-1)) << k
            rnd_p2 = rnd_p1 | rnd_p0
            arr1[i] = rnd_p2
        return arr0, arr1

l = lfsr(1, 4, (0, 1))
print(l.get_rnd_array(11))
print(l.get_double_rnd_array_circular(11))