How to define variable with two zeros - python

I'm making a MAC address generator and currently I have this problem:
mac1="001122334455"
mac2="001122334695"
mac1 = [mac1[x:x+2] for x in xrange(0,len(mac1),2)]
mac2 = [mac2[x:x+2] for x in xrange(0,len(mac2),2)]
k = 0
for item in mac1:
    mac1[k] = "%d" % int(mac1[k], 16)
    mac2[k] = "%d" % int(mac2[k], 16)
    mac1[k] = int(mac1[k])
    mac2[k] = int(mac2[k])
    k = k + 1
while mac1 != mac2:
    #print mac1
    print "%X0:%X:%X:%X:%X:%X" % (mac1[0], mac1[1], mac1[2], mac1[3], mac1[4], mac1[5])
    mac1[5] = int(mac1[5]) + 1
    if int(mac1[5]) > 255:
        #mac1[5] = 00
        mac1[4] = int(mac1[4]) + 1
        if int(mac1[4]) > 255:
            mac1[3] = int(mac1[3]) + 1
            if int(mac1[3]) > 255:
                mac1[2] = int(mac1[2]) + 1
                if int(mac1[2]) > 255:
                    mac1[1] = int(mac1[1]) + 1
I need to restart the fifth byte from the beginning, so I defined mac1[5] = 00, but instead of two zeros I only get one 0. Why?

Much simpler to just treat the entire MAC as one number:
mac1 = 0x1122334455
mac2 = 0x1122334695
for i in xrange(mac1, mac2 + 1):
    s = "%012x" % i
    print ':'.join(s[j:j+2] for j in range(0, 12, 2))
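The first few lines of output:
00:11:22:33:44:55
00:11:22:33:44:56
00:11:22:33:44:57
...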
See Display number with leading zeros

You cannot set an integer to 00; it will always reduce to 0. On top of that, in Python 2.x putting a 0 in front of an integer literal (for example 0123) tells Python you want that number evaluated as octal, which is definitely not what you want. In Python 3.x, a leading zero on an integer literal is not allowed at all!
You need to use strings if you want 00 instead of 0.
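A quick illustration in a Python 2.x session:
>>> 0123           # leading zero means octal in Python 2.x
83
>>> int("00")      # a string of zeros parses fine
0
>>> "%02d" % 0     # and formatting gives the leading zeros back
'00'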
Out of interest, are you trying to generate a range of MACs between mac1 and mac2? If so, I suspect I have a more elegant solution, if you are interested.
EDIT:
This working solution will print the hex values of the MAC addresses between start and end. Since it works internally with integers between 0 and 255, the start and end values are lists of integers, not hex strings.
start = [0, 11, 22, 33, 44, 55]
end = [0, 11, 22, 33, 46, 95]

def generate_range(start, end):
    cur = list(start)  # copy, so the caller's list is not mutated
    while cur < end:
        cur[5] = int(cur[5]) + 1
        # propagate carries from the least significant byte upwards
        for pos in range(len(cur) - 1, -1, -1):
            if cur[pos] > 255:
                cur[pos] = 0
                cur[pos - 1] = int(cur[pos - 1]) + 1
        yield ':'.join("{0:02X}".format(cur[i]) for i in range(0, len(cur)))

for x in generate_range(start, end):
    print(x)


Restore corrupt 128-bit key from SHA-1

Disclaimer: This is a section from a uni assignment
I have been given the following AES-128-CBC key and told that up to 3 bits in the key have been changed/corrupt.
d9124e6bbc124029572d42937573bab4
The original key's SHA-1 hash is provided:
439090331bd3fad8dc398a417264efe28dba1b60
and I have to find the original key by trying all combinations of up to 3 bit flips.
Supposedly this is possible in 349633 guesses, but I don't have a clue where that number came from; I would have assumed it would be closer to 128*127*126, which is over 2M combinations. That's where my first problem lies.
Secondly, I created the Python script below containing a triple nested loop (I know, far from the best code...) to iterate over all 2M possibilities. However, when it completed an hour later, it hadn't found any matches, which I really don't understand.
Hoping someone can at least point me in the right direction, cheers.
#!/usr/bin/python2
import sys
import commands

global binary

def inverseBit(index):
    global binary
    if binary[index] == "0":
        return "1"
    return "0"

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print "Usage: bitflip.py <hex> <sha-1>"
        sys.exit()
    global binary
    binary = ""
    sha = str(sys.argv[2])
    binary = str(bin(int(sys.argv[1], 16)))
    binary = binary[2:]
    print binary
    b2 = binary
    tries = 0
    file = open("shas", "w")
    for x in range(-2, 128):
        for y in range(-1, 128):
            for z in range(0, 128):
                if x >= 0:
                    b2 = b2[:x] + inverseBit(x) + b2[x+1:]
                if y >= 0:
                    b2 = b2[:y] + inverseBit(y) + b2[y+1:]
                b2 = b2[:z] + inverseBit(z) + b2[z+1:]
                #print b2
                hexOut = hex(int(b2, 2))
                command = "echo -n \"" + hexOut + "\" | openssl sha1"
                cmdOut = str(commands.getstatusoutput(command))
                cmdOut = cmdOut[cmdOut.index('=')+2:]
                cmdOut = cmdOut[:cmdOut.index('\'')]
                file.write(str(hexOut) + " | " + str(cmdOut) + "\n")
                if len(cmdOut) != 40:
                    print cmdOut
                if cmdOut == sha:
                    print "Found bit reversals in " + str(tries) + " tries. Corrected key:"
                    print hexOut
                    sys.exit()
                b2 = binary
                tries = tries + 1
                if tries % 10000 == 0:
                    print tries
EDIT:
Changing the for loops to
for x in range(-2, 128):
    for y in range(x+1, 128):
        for z in range(y+1, 128):
drastically cuts down on the number of guesses while (I think?) still covering the whole space. Still getting some duplicates and still no luck finding the match though...
Your code, if not very efficient, looks fine except for one thing:
hexOut = hex(int(b2, 2))
The output of hex,
>>> hex(int('01110110000101', 2))
'0x1d85'
starts with '0x', which shouldn't be part of the key. So, you should be fine by removing these two characters.
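For instance, slicing the prefix off (note that in Python 2, hex() on a long also appends a trailing 'L', so zero-padded formatting is the safer choice for a 128-bit value):
>>> hex(int('01110110000101', 2))[2:]
'1d85'
>>> '%032x' % int('01110110000101', 2)   # no prefix, no 'L', zero-padded
'00000000000000000000000000001d85'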
For the number of possible keys to try, you have:
1 with no bit flipped
128 with 1 bit flipped
128*127/2 = 8128 with 2 bits flipped (128 ways to choose the first one, 127 ways to choose the second, and each pair will appear twice)
128*127*126/6 = 341376 with 3 bits flipped (each triplet appears 6 times). This is the number of combinations of 128 bits taken 3 at a time.
So, the total is 1 + 128 + 8128 + 341376 = 349633 possibilities.
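You can check that total directly:
>>> 1 + 128 + 128*127/2 + 128*127*126/6
349633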
Your code tests each of them many times. You could avoid the useless repetitions by looping like this (for 3 bits):
for x in range(0, 128):
    for y in range(x+1, 128):
        for z in range(y+1, 128):
            .....
You could adapt your trick of starting at -2 with:
for x in range(-2, 128):
    for y in range(x+1, 128):
        for z in range(y+1, 128):
            .... same code you used ...
You could also generate the combinations with itertools.combinations:
from itertools import combinations

for x, y, z in combinations(range(128), 3):  # for 3 bits
    ......
but you'd need a bit more work to manage the cases with 0, 1, 2 and 3 flipped bits in this case.
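To sketch what that extra work might look like, here is a minimal version (untested against your actual assignment data) using hashlib instead of shelling out to openssl. It assumes, like your echo | openssl pipeline intends, that the SHA-1 is taken over the 32-character hex string of the key, zero-padded and without the '0x' prefix; adjust the formatting if the assignment hashes the raw key bytes instead:
from itertools import combinations
import hashlib

def find_key(corrupt_hex, target_sha1, nbits=128, max_flips=3):
    corrupt = int(corrupt_hex, 16)
    for k in range(max_flips + 1):               # 0, 1, 2 or 3 flipped bits
        for bits in combinations(range(nbits), k):
            candidate = corrupt
            for b in bits:
                candidate ^= 1 << b              # flip bit b
            hex_key = "%032x" % candidate        # no '0x', no 'L', zero-padded
            if hashlib.sha1(hex_key).hexdigest() == target_sha1:
                return hex_key
    return None
Computing the hashes in-process also makes the 349633 candidates a matter of seconds rather than an hour.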

Find length of a string that includes its own length?

I want to take a string and append its total length, where that length counts the appended digits themselves, without padding or using structs or anything like that that forces fixed lengths.
So for example I want to be able to take this string as input:
"A string|"
And return this:
"A string|11"
On the basis of the OP tolerating such an approach (and to provide an implementation technique for the eventual Python answer), here's a solution in Java.
final String s = "A String|";
int n = s.length(); // `length()` returns the length of the string.
String t;           // the result
do {
    t = s + n;      // append the stringified n to the original string
    if (n == t.length()) {
        return t;   // string length no longer changing; we're good.
    }
    n = t.length(); // n must hold the total length
} while (true);     // round again
The problem, of course, is that in appending n, the string length changes. But luckily, the length only ever increases or stays the same, so it converges very quickly due to the logarithmic growth of the number of digits in n. In this particular case, the attempted values of n are 9, 10, and 11. And that's a pernicious case.
A simple solution is:
def addlength(string):
    n1 = len(string)
    n2 = len(str(n1)) + n1
    n2 += len(str(n2)) - len(str(n1))  # a carry can arise
    return string + str(n2)
A possible carry will increase the length by at most one unit.
Examples:
In [2]: addlength('a'*8)
Out[2]: 'aaaaaaaa9'
In [3]: addlength('a'*9)
Out[3]: 'aaaaaaaaa11'
In [4]: addlength('a'*99)
Out[4]: 'aaaaa...aaa102'
In [5]: addlength('a'*999)
Out[5]: 'aaaa...aaa1003'
Here is a simple Python port of Bathsheba's answer:
def str_len(s):
    n = len(s)
    t = ''
    while True:
        t = s + str(n)
        if n == len(t):
            return t
        n = len(t)
This is a much more clever and simple way than anything I was thinking of trying!
Suppose you had s = 'abcdefgh|'. On the first pass through, t = 'abcdefgh|9'.
Since n != len(t) (which is now 10), it goes through again: t = 'abcdefgh|' + str(n), and str(n) = '10', so you have 'abcdefgh|10', which is still not quite right! Now n = len(t), which is finally 11, and you get it right. Pretty clever solution!
It is a tricky one, but I think I've figured it out.
Done in a hurry in Python 2.7, please fully test - this should handle strings up to 998 characters:
import sys

orig = sys.argv[1]
origLen = len(orig)
if origLen >= 98:
    extra = str(origLen + 3)
elif origLen >= 8:
    extra = str(origLen + 2)
else:
    extra = str(origLen + 1)
final = orig + extra
print final
Results of very brief testing
C:\Users\PH\Desktop>python test.py "tiny|"
tiny|6
C:\Users\PH\Desktop>python test.py "myString|"
myString|11
C:\Users\PH\Desktop>python test.py "myStringWith98Characters.........................................................................|"
myStringWith98Characters.........................................................................|101
Just find the length of the string. Then iterate over the possible digit counts i for the appended length: for each i, check whether the initial length plus i, which is the length the result would have, can actually be written in exactly i digits.
def get_length(s):
    s = s + "|"
    result = ""
    len_s = len(s)
    i = 1
    while True:
        candidate = len_s + i
        if len(str(candidate)) == i:
            result = s + str(candidate)
            break
        i += 1
    return result
This code gives the result.
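For example:
>>> get_length("A string")
'A string|11'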
I used a few variables, but at the end it shows the output you want:
def len_s(s):
    s = s + '|'
    b = len(s)
    z = s + str(b)
    length = len(z)
    new_s = s + str(length)
    new_len = len(new_s)
    return s + str(new_len)

s = "A string"
print len_s(s)
Here's a direct equation for this (so it's not necessary to construct the string). If s is the string, then the length of the string including the length of the appended length will be:
L1 = len(s) + 1 + int(log10(len(s) + 1 + int(log10(len(s)))))
The idea here is that a direct calculation is only problematic when the appended length will push the length past a power of ten; that is, at 9, 98, 99, 997, 998, 999, 9996, etc. To work this through, 1 + int(log10(len(s))) is the number of digits in the length of s. If we add that to len(s), then 9->10, 98->100, 99->101, etc, but still 8->9, 97->99, etc, so we can push past the power of ten exactly as needed. That is, adding this produces a number with the correct number of digits after the addition. Then do the log again to find the length of that number and that's the answer.
To test this:
from math import log10

def find_length(s):
    L1 = len(s) + 1 + int(log10(len(s) + 1 + int(log10(len(s)))))
    return L1

# test, just looking at lengths around 10**n
for i in range(9):
    for j in range(30):
        L = abs(10**i - j + 10) + 1
        s = "a" * L
        x0 = find_length(s)
        new0 = s + `x0`
        if len(new0) != x0:
            print "error", len(s), x0, log10(len(s)), log10(x0)

What's wrong with my Python program that is supposed to convert from binary to decimal?

We're supposed to write a "simple" program that converts a binary string to decimal. It's also supposed to return 0 when given an empty string. I apologize in advance for my lack of knowledge. I'm completely new to this.
Here's my attempt:
def b(binaryString):
    if binaryString[0] != 0 or binaryString[1] != 1:
        return 0
    else:
        x = int(binaryString[1])
        a = (len(binaryString)) - 1
        return x * 2**a + b(binaryString[1:])
Sample Input: b('1101')
Expected Output: 13
Actual Output: IndexError: string index out of range
Probably not the most elegant solution... however, here are my two cents
def b(binaryString):
    if len(binaryString):
        try:
            return sum([int(num) * 2**idx for idx, num in enumerate(reversed(binaryString))])
        except ValueError:
            return "Your input might be incorrect"
    else:
        return "0"

print(b("111"))           # returns 7
print(b(""))              # returns 0
print(b("11111101111"))   # returns 2031
def b(binaryString):
    if len(binaryString) == 0: return 0
    rest, lsb = binaryString[:-1], binaryString[-1]
    lsb = 1 if lsb == '1' else 0  # Alternatively, lsb = int(lsb)
    return (b(rest) << 1) + lsb   # Alternatively, return b(rest) * 2 + lsb
Where:
print b('1101')      # 13
print b('')          # 0
print b('11111111')  # 255
print b('10')        # 2
print b('01')        # 1
The code splits the input string into two variables rest and lsb.
rest contains the input string, up to but excluding the last bit.
lsb contains the least significant bit (last bit in string). It's converted to an int in the following line.
The return value of the function is b(rest) shifted up 1 bit (= *2) plus the integer value of lsb.
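For example, b('1101') unwinds like this:
b('')     = 0
b('1')    = (0 << 1) + 1 = 1
b('11')   = (1 << 1) + 1 = 3
b('110')  = (3 << 1) + 0 = 6
b('1101') = (6 << 1) + 1 = 13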
Worth noting is that you could "cheat" with something as simple as:
def b(binaryString):
    return int('0' + binaryString, base=2)
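The '0' prefix is what makes the empty-string case work, since int('0' + '', base=2) is simply 0:
>>> b('1101')
13
>>> b('')
0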

Am I missing something or is this Microsoft algorithm for calculating the excel column characters incorrect?

I'm trying to write a function in Python that takes in a column number and outputs the corresponding Excel column code (for example: 5 -> "E", 27 -> "AA"). I tried implementing the algorithm given here: http://support.microsoft.com/kb/833402, which is the following visual basic:
Function ConvertToLetter(iCol As Integer) As String
    Dim iAlpha As Integer
    Dim iRemainder As Integer
    iAlpha = Int(iCol / 27)
    iRemainder = iCol - (iAlpha * 26)
    If iAlpha > 0 Then
        ConvertToLetter = Chr(iAlpha + 64)
    End If
    If iRemainder > 0 Then
        ConvertToLetter = ConvertToLetter & Chr(iRemainder + 64)
    End If
End Function
My python version:
def excelcolumn(colnum):
    alpha = colnum // 27
    remainder = colnum - (alpha * 26)
    out = ""
    if alpha > 0:
        out = chr(alpha + 64)
    if remainder > 0:
        out = out + chr(remainder + 64)
    return out
This works fine until column number 53, which results in "A[", since alpha = 53 // 27 == 1 and thus remainder = 53 - 1*26 == 27, meaning the second character, chr(64+27), will be "[". Am I missing something? My VBA skills are quite lackluster, so that might be the issue.
edit: I am using Python 3.3.1
The Microsoft formula is incorrect. I'll bet they never tested it beyond 53. When I tested it myself in Excel it gave the same incorrect answer that yours did.
Here's how I'd do it:
def excelcolumn(colnum):
    alpha, remainder = colnum // 26, colnum % 26
    out = "" if alpha == 0 else chr(alpha - 1 + ord('A'))
    out += chr(remainder + ord('A'))
    return out
Note that this assumes a 0-based column number while the VBA code assumes 1-based.
If you need to extend beyond 701 columns you need something slightly different as noted in the comments:
def excelcolumn(colnum):
    if colnum < 26:
        return chr(colnum + ord('A'))
    return excelcolumn(colnum // 26 - 1) + chr(colnum % 26 + ord('A'))
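A few spot checks of the recursive version (0-based, so 0 is column A):
>>> excelcolumn(0)
'A'
>>> excelcolumn(26)
'AA'
>>> excelcolumn(701)
'ZZ'
>>> excelcolumn(702)
'AAA'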
Here is one way to do it:
def xl_col_to_name(col_num):
    col_str = ''
    while col_num:
        remainder = col_num % 26
        if remainder == 0:
            remainder = 26
        # Convert the remainder to a character.
        col_letter = chr(ord('A') + remainder - 1)
        # Accumulate the column letters, right to left.
        col_str = col_letter + col_str
        # Get the next order of magnitude.
        col_num = int((col_num - 1) / 26)
    return col_str
Which gives:
>>> xl_col_to_name(5)
'E'
>>> xl_col_to_name(27)
'AA'
>>> xl_col_to_name(256)
'IV'
>>> xl_col_to_name(1000)
'ALL'
This is taken from the utility functions in the XlsxWriter module.
I am going to answer your specific question:
... is this Microsoft algorithm for calculating the excel column characters incorrect?
YES.
Generally speaking, when you want the integer division (typically called DIV) of two numbers together with the remainder (typically called MOD), you should use the same denominator in both operations. Thus, you should use either 26 or 27 in both places.
So, the algorithm is incorrect (and it is easy to see that with iCol=27, where iAlpha=1 and iRemainder=1, while it should be iRemainder=0).
In this particular case, the number should be 26. Since this gives you numbers starting at zero, you should probably add ascii("A") (=65), generically speaking, instead of 64. The double error made it work for some cases.
The (hardly acceptable) confusion may stem from the fact that from A to Z there are 26 columns, from A to ZZ there are 26 + 26² = 702 columns, from A to ZZZ there are 26 + 26² + 26³ = 18278 columns, and so on.
Code that works for any column, and non-recursive:
def excelcolumn(colnum):
    if colnum < 1:
        raise ValueError("Index is too small")
    result = ""
    while True:
        if colnum > 26:
            colnum, r = divmod(colnum - 1, 26)
            result = chr(r + ord('A')) + result
        else:
            return chr(colnum + ord('A') - 1) + result
(taken from here).

No errors, just doesn't print or do anything

I am pretty much a beginner and I'm looking for help. I am supposed to write a simple program which reads numbers from a file (they are ordered in two columns like this:
3 788506
255 879405
3 687899
255 697879 etc)
and always pairwise subtracts the number next to 3 from the number next to 255. The differences should be appended to a list. I also have to check whether each pair is right (e.g. that it's always a 3 and a 255 one after the other and not two 255s). So far I think it's ready, but it doesn't do anything. I spent hours looking for my mistake, but I just cannot see what went wrong. I would appreciate any help.
filepath = "C:/liz/RT1-1.dat"
f = open(filepath, 'rU')
reac3 = []
reac255 = []
right_list = []
wrong_list = []
very_wrong_list = []
li = [i.strip().split() for i in f.readlines()]
for element in li:
    if int(element[0]) == 3: reac3.append(element[-1])
    elif int(element[0]) == 255: reac255.append(element[-1])
k = 0
for i in range(0, len(li)+1, 2):  # 0, 2, 4, 6, 8 etc
    if li[i][0] == 3 and li[i+1][0] == 255:
        difference = int(reac255[k]) - int(reac3[k])
        print int(difference)
        k += 1
        if difference > 300 and difference < 1200: right_list.append(difference)
        else: wrong_list.append(difference)
    else: very_wrong_list.append(li[i])
print right_list
i.strip().split() will return 2 strings; therefore your comparison li[i][0] == 3 and li[i+1][0] == 255 will fail, as li[i][0] and li[i+1][0] are still strings.
Also notice that, since len(li) should be even, range(0, len(li) + 1, 2) will eventually make i = len(li), which is out of the list's boundaries.
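A minimal corrected sketch of the second loop, assuming the two-column file format shown above (the two fixes: convert to int before comparing, and stop the index before it runs past the last pair; reading both values straight from the pair also makes the separate reac3/reac255 lists and the k counter unnecessary):
for i in range(0, len(li) - 1, 2):
    if int(li[i][0]) == 3 and int(li[i+1][0]) == 255:
        difference = int(li[i+1][1]) - int(li[i][1])
        print difference
        if 300 < difference < 1200:
            right_list.append(difference)
        else:
            wrong_list.append(difference)
    else:
        very_wrong_list.append(li[i])
print right_list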
