I am trying to test this implementation of the XTEA algorithm in Python. The only test vectors I have found are the ones below.
How can I test the output of the algorithm so that I can compare it byte-wise?
Which password/key should I choose? Which endianness would be best?
(I am on 64 bit xubuntu/x86/little endian)
XTEA
import struct

def xtea_encrypt(block, key, n=32, endian="<"):
    # 64 bit block of data to encrypt
    v0, v1 = struct.unpack(endian + "2L", block)
    # 128 bit key
    k = struct.unpack(endian + "4L", key)
    sum, delta, mask = 0L, 0x9e3779b9L, 0xffffffffL
    for round in range(n):
        v0 = (v0 + (((v1<<4 ^ v1>>5) + v1) ^ (sum + k[sum & 3]))) & mask
        sum = (sum + delta) & mask
        v1 = (v1 + (((v0<<4 ^ v0>>5) + v0) ^ (sum + k[sum>>11 & 3]))) & mask
    return struct.pack(endian + "2L", v0, v1)
Initial 64 bit test input
# pack the value 0 into an 8-byte (64-bit) big-endian string
byte_string = ''
for c in range(56, -8, -8):
    byte_string += chr(0 >> c & 0xff)
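For what it's worth, the same 8-byte all-zero string can be built directly with struct, which avoids the manual shifting (a minimal equivalent, not from the original question):

import struct

# pack the 64-bit value 0 as 8 big-endian bytes
byte_string = struct.pack(">Q", 0)  # equals "\x00" * 8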
Test vectors (copied from here)
tean values
These are made by starting with a vector of 6 zeroes,
data followed by key, and coding with one cycle then
moving the six cyclically so that n becomes n-1 modulo 6.
We repeat with 2-64 cycles printing at powers of 2 in
hexadecimal. The process is reversed decoding back
to the original zeroes which are printed.
1 0 9e3779b9 0 0 0 0
2 ec01a1de aaa0256d 0 0 0 0
4 bc3a7de2 4e238eb9 0 0 ec01a1de 114f6d74
8 31c5fa6c 241756d6 bc3a7de2 845846cf 2794a127 6b8ea8b8
16 1d8e6992 9a478905 6a1d78c8 8c86d67 2a65bfbe b4bd6e46
32 d26428af a202283 27f917b1 c1da8993 60e2acaa a6eb923d
64 7a01cbc9 b03d6068 62ee209f 69b7afc 376a8936 cdc9e923
1 0 0 0 0 0 0
The C code you linked to seems to assume that a long has 32 bits -- XTEA uses a 64-bit block made of two uint32; the code uses a couple of long and doesn't do anything to handle the overflow which happens when you sum/leftshift (and propagates into later computations).
The python code lets you choose endianness, while the C code treats those numbers as... well, numbers, so if you want to compare them, you need to pick endianness (or if you're lazy, try both and see if one matches :)
Regarding the key, I'm not sure what your problem is, so I'll guess: in case you're not a C programmer, the line static long pz[1024], n, m; is a static declaration, meaning that all those values are implicitly initialized to zero.
Anything else I missed?
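To make the byte-wise check concrete, here is a minimal sketch (assuming the code above is wrapped as the xtea_encrypt(block, key, n, endian) function shown earlier): encrypt an all-zero block with an all-zero key, mirroring how the test vectors start from all zeroes, and print the hex for both endiannesses so you can eyeball it against the C program's output.

import binascii

block = "\x00" * 8   # 64-bit all-zero block
key = "\x00" * 16    # 128-bit all-zero key, as in the test vectors

for endian in ("<", ">"):
    out = xtea_encrypt(block, key, 32, endian)
    print endian, binascii.hexlify(out)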
Related
I have pairs like these: (-102, -56), (123, -56). The first value of each pair represents the lower 8 bits and the second value the upper 8 bits; both are in signed decimal form. I need to convert these pairs into single 16-bit values.
I think I was able to convert (-102,-56) pair by:
l = bin(-102 & 0b1111111111111111)[-8:]
u = bin(-56 & 0b1111111111111111)[-8:]
int(u+l,2)
But when I try to do the same with (123, -56) pair I get the following error:
ValueError: invalid literal for int() with base 2: '11001000b1111011'.
I understand that it's due to the different lengths for different values and I need to fill them up to 8 bits.
Am I approaching this completely wrong? What's the best way to do this so it works both on negative and positive values?
UPDATE:
I was able to solve this by:
low_int = 123
up_int = -56
(low_int & 0xFF) | ((up_int & 0xFF) << 8)
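An equivalent with struct, which makes the reinterpretation explicit (a sketch, not from the original post): pack the two signed bytes, then reread them as one unsigned little-endian 16-bit value.

import struct

low_int, up_int = 123, -56
# two signed bytes (low first), reread as unsigned 16-bit little-endian
value, = struct.unpack("<H", struct.pack("<bb", low_int, up_int))
print(value)  # 51323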
You can try shifting the upper value left by 8 bits; the logic is described here: https://stackoverflow.com/a/1857965/8947333
Just guessing:
l, u = -102 & 255, -56 & 255
# shift the high byte 8 bits to the left, then add the low byte
# (the parentheses matter: + binds tighter than <<)
(u << 8) + l
Bitwise operations are fine, but not strictly required.
In the most common 2's complement representation for 8 bits:
-1 signed == 255 unsigned
-2 signed == 254 unsigned
...
-127 signed = 129 usigned
-128 signed = 128 usigned
simply the two absolute values always give the sum 256.
Use this to convert negative values:
if b < 0:
    b += 256
and then combine the high and low byte:
value = 256 * hi8 + lo8
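Putting both steps together as a helper (a sketch; the function name is illustrative):

def to_uint16(lo8, hi8):
    # map signed bytes (-128..127) onto unsigned (0..255)
    if lo8 < 0:
        lo8 += 256
    if hi8 < 0:
        hi8 += 256
    # combine the high and low byte
    return 256 * hi8 + lo8

print(to_uint16(123, -56))   # 51323
print(to_uint16(-102, -56))  # 51354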
The usual advice is that string comparison must be done in constant time when checking things like passwords or hashes, and thus it is recommended to avoid a == b.
However, I ran the following script, and the results don't support the hypothesis that a == b short-circuits on the first non-identical character.
from time import perf_counter_ns
import random
def timed_cmp(a, b):
    start = perf_counter_ns()
    a == b
    end = perf_counter_ns()
    return end - start

def n_timed_cmp(n, a, b):
    "average time for a==b done n times"
    ts = [timed_cmp(a, b) for _ in range(n)]
    return sum(ts) / len(ts)

def check_cmp_time():
    random.seed(123)
    # generate a random string of n characters
    n = 2 ** 8
    s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
    # generate a list of strings, which all differs from the original string
    # by one character, at a different position
    # only do that for the first 50 char, it's enough to get data
    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
    timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
    sorted_timed = sorted(timed, key=lambda t: t[1])
    # print the 10 fastest
    for x in sorted_timed[:10]:
        i, t = x
        print("{}\t{:3f}".format(i, t))
    print("---")
    i, t = timed[0]
    print("{}\t{:3f}".format(i, t))
    i, t = timed[1]
    print("{}\t{:3f}".format(i, t))

if __name__ == "__main__":
    check_cmp_time()
Here is the result of a run, re-running the script gives slightly different results, but nothing satisfactory.
# ran with cpython 3.8.3
6 78.051700
1 78.203200
15 78.222700
14 78.384800
11 78.396300
12 78.441800
9 78.476900
13 78.519000
8 78.586200
3 78.631500
---
0 80.691100
1 78.203200
I would've expected the fastest comparison to be the one where the first differing character is at the beginning of the string, but that's not what I get.
Any idea what's going on?
There's a difference, you just don't see it on such small strings. Here's a small patch to apply to your code: it uses longer strings, and it does 10 checks with the A placed at evenly spaced positions in the original string, from the beginning to the end, like this:
A_______________________________________________________________
______A_________________________________________________________
____________A___________________________________________________
__________________A_____________________________________________
________________________A_______________________________________
______________________________A_________________________________
____________________________________A___________________________
__________________________________________A_____________________
________________________________________________A_______________
______________________________________________________A_________
____________________________________________________________A___
@@ -15,13 +15,13 @@ def n_timed_cmp(n, a, b):
 def check_cmp_time():
     random.seed(123)
     # generate a random string of n characters
-    n = 2 ** 8
+    n = 2 ** 16
     s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
     # generate a list of strings, which all differs from the original string
     # by one character, at a different position
     # only do that for the first 50 char, it's enough to get data
-    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
+    diffs = [s[:i] + "A" + s[i+1:] for i in range(0, n, n // 10)]
     timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
     sorted_timed = sorted(timed, key=lambda t: t[1])
and you'll get:
0 122.621000
1 213.465700
2 380.214100
3 460.422000
5 694.278700
4 722.010000
7 894.630300
6 1020.722100
9 1149.473000
8 1341.754500
---
0 122.621000
1 213.465700
Note that with your example, with only 2**8 characters, it's already noticeable; apply this patch:
@@ -21,7 +21,7 @@ def check_cmp_time():
     # generate a list of strings, which all differs from the original string
     # by one character, at a different position
     # only do that for the first 50 char, it's enough to get data
-    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
+    diffs = [s[:i] + "A" + s[i+1:] for i in [0, n - 1]]
     timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
     sorted_timed = sorted(timed, key=lambda t: t[1])
to only keep the two extreme cases (first letter change vs last letter change) and you'll get:
$ python3 cmp.py
0 124.131800
1 135.566000
Numbers may vary, but most of the time test 0 is a tad faster than test 1.
It's possible to isolate more precisely which character is modified, as long as the memcmp compares character by character rather than using word-sized integer comparisons; that typically happens on the last characters if the strings get misaligned, or on really short strings, like the 8-char string I demo here:
from time import perf_counter_ns
from statistics import median
import random
def check_cmp_time():
    random.seed(123)
    # generate a random string of n characters
    n = 8
    s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
    # generate a list of strings, which all differs from the original string
    # by one character, at a different position
    # only do that for the first 50 char, it's enough to get data
    diffs = [s[:i] + "A" + s[i + 1 :] for i in range(n)]
    values = {x: [] for x in range(n)}
    for _ in range(10_000_000):
        for i, diff in enumerate(diffs):
            start = perf_counter_ns()
            s == diff
            values[i].append(perf_counter_ns() - start)
    timed = [[k, median(v)] for k, v in values.items()]
    sorted_timed = sorted(timed, key=lambda t: t[1])
    # print the 10 fastest
    for x in sorted_timed[:10]:
        i, t = x
        print("{}\t{:3f}".format(i, t))
    print("---")
    i, t = timed[0]
    print("{}\t{:3f}".format(i, t))
    i, t = timed[1]
    print("{}\t{:3f}".format(i, t))

if __name__ == "__main__":
    check_cmp_time()
Which gives me:
1 221.000000
2 222.000000
3 223.000000
4 223.000000
5 223.000000
6 223.000000
7 223.000000
0 241.000000
The differences are so small that Python and perf_counter_ns may no longer be the right tools here.
See, to know why it doesn't short circuit, you'll have to do some digging. The simple answer is, of course, that it doesn't short circuit because the standard doesn't specify so. But you might think, "Why wouldn't the implementations choose to short circuit? Surely it must be faster!". Not quite.
Let's take a look at cpython, for obvious reasons. Look at the code for unicode_compare_eq function defined in unicodeobject.c
static int
unicode_compare_eq(PyObject *str1, PyObject *str2)
{
    int kind;
    void *data1, *data2;
    Py_ssize_t len;
    int cmp;

    len = PyUnicode_GET_LENGTH(str1);
    if (PyUnicode_GET_LENGTH(str2) != len)
        return 0;

    kind = PyUnicode_KIND(str1);
    if (PyUnicode_KIND(str2) != kind)
        return 0;

    data1 = PyUnicode_DATA(str1);
    data2 = PyUnicode_DATA(str2);

    cmp = memcmp(data1, data2, len * kind);
    return (cmp == 0);
}
(Note: This function is actually called after deducing that str1 and str2 are not the same object - if they are - well that's just a simple True immediately)
Focus on this line specifically-
cmp = memcmp(data1, data2, len * kind);
Ahh, we're back at another crossroads. Does memcmp short circuit? The C standard does not specify such a requirement, as seen in the opengroup docs and also in Section 7.24.4.1 of the C Standard Draft:
7.24.4.1 The memcmp function
Synopsis
#include <string.h>
int memcmp(const void *s1, const void *s2, size_t n);
Description
The memcmp function compares the first n characters of the object pointed to by s1 to
the first n characters of the object pointed to by s2.
Returns
The memcmp function returns an integer greater than, equal to, or less than zero,
accordingly as the object pointed to by s1 is greater than, equal to, or less than the object pointed to by s2.
Some C implementations (including glibc) choose to not short circuit. But why? Are we missing something? Why would you not short circuit?
Because the comparison they use might not be as naive as a byte by byte check. The standard does not require the objects to be compared byte by byte. Therein lies the chance of optimization.
What glibc does is compare elements of type unsigned long int instead of singular bytes represented by unsigned char. Check out the implementation
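As a rough mental model of that trick (a Python 3 sketch of the word-at-a-time idea, not glibc's actual code):

def wordwise_equal(a, b, word=8):
    # compare `word` bytes at a time by turning each chunk into one
    # integer, the way a word-wise memcmp compares unsigned longs
    # instead of individual bytes
    if len(a) != len(b):
        return False
    for i in range(0, len(a), word):
        if int.from_bytes(a[i:i+word], "little") != int.from_bytes(b[i:i+word], "little"):
            return False
    return True

print(wordwise_equal(b"hello world!", b"hello world?"))  # False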
There's a lot more going on under the hood, a discussion far outside the scope of this question; after all, this isn't even tagged as a C question ;). Though I found that this answer may be worth a look. But just know, the optimization is there, just in a much different form than the approach that may come to mind at first glance.
Edit: Fixed wrong function link
Edit: As #Konrad Rudolph has stated, glibc memcmp does apparently short circuit. I've been misinformed.
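As a closing note on the original password/hash concern: when you actually need a comparison that doesn't leak timing, the standard library already ships one, hmac.compare_digest:

import hmac

expected = b"5f4dcc3b5aa765d61d8327deb882cf99"
provided = b"5f4dcc3b5aa765d61d8327deb882cf99"

# runs in time independent of where the inputs differ
print(hmac.compare_digest(provided, expected))  # True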
I have this in hex: 08
Which is this in binary: 0000 1000 (bit positions: 7,6,5,4,3,2,1,0)
Now I would like to make a bitmask in Python so that I get bit position 3, i.e. the "1" in 0000 "1"000.
What shall I do to get only this bit?
Thanks
Shift right by the bit index to have that bit in the 0th position, then AND with 1 to isolate it.
val = 0b01001000 # note the extra `1` to prove this works
pos = 3
bit = (val >> pos) & 1
print(bit)
outputs 1
you could just do this:
def get_bit(n, pos):
    return (n >> pos) & 1

res = get_bit(n=8, pos=3)
# 1
shift the number n right by pos bits (>> pos) and then mask away the rest (& 1).
the doc on Bitwise Operations on Integer Types may help.
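If you literally want a mask, as the question title suggests, rather than the shifted-down bit, a small variant:

pos = 3
mask = 1 << pos       # 0b1000
val = 0x08

bit = 1 if val & mask else 0
print(bit)  # outputs 1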
This is the problem:
How many integers 0 ≤ n < 10^18 have the property that the sum of the digits of n equals the sum of digits of 137n?
This solution is grossly inefficient. What am I missing?
#!/usr/bin/env python
#coding: utf-8

import time
from timestrings import *

start = time.clock()

maxpower = 18
count = 0

for i in range(0, 10 ** maxpower - 1):
    if i % 9 == 0:
        result1 = list(str(i))
        result2 = list(str(137 * i))
        sum1 = 0
        for j in result1:
            sum1 += int(j)
        sum2 = 0
        for j in result2:
            sum2 += int(j)
        if sum1 == sum2:
            print (i, sum1)
            count += 1

finish = time.clock()

print ("Project Euler, Project 290")
print ()
print ("Answer:", count)
print ("Time:", stringifytime(finish - start))
First of all, you are to count, not to show, the hits.
That is very important. All you have to do is to devise an efficient way to count them. Like Jon Bentley wrote in Programming Pearls: "Any method that considers all permutations of letters for a word is doomed to failure". In fact, I tried it in Python: as soon as "i" hit 10^9, the system froze, with 1.5 GB of memory consumed, let alone 10^18. And this also tells us, to cite Bentley again, "Defining the problem was about ninety percent of this battle."
To solve this problem, I can't see a way around dynamic programming (dp). In fact, most of these ridiculously huge Euler problems require some sort of dp. The theory of dp itself is rather academic and dry, but implementing the idea of dp to solve real problems is not; the practice is fun and colorful.
One solution to the problem is: we go from 0-9, then 10-99, then 100-999 and so on, extract the signatures of the numbers, group numbers with the same signature and deal with each group as one piece, thus saving space and time.
Observation:
3 * 137 = 411 and 13 * 137 = 1781. Let's break the first result, "411", down into two parts: the first two digits "41" and the last digit "1". The "1" stays, but the "41" part is going to be "carried" into further calculations. Let's call "41" the carry, the first element of the signature. The "1" will stay as the rightmost digit as we go on calculating 13 * 137, 23 * 137, 33 * 137 or 43 * 137. All these *3 numbers have "3" as their rightmost digit, and the last digit of 137*n is always 1. That is, the difference between this "3" and "1" is +2; call this +2 the "diff", the second element of the signature.
OK, if we are gonna find a two-digit number with 3 as its last digit, we have to find a digit "m" that satisfies
diff_of_digitsum (m, 137*m+carry) = -2 (1)
to neutralize the +2 diff accumulated earlier. If m can do that, then you know m * 10 + 3 (on paper you write "m3") is a hit.
For example, in our case we tried digit 1. diff_of_digitsum (digit, 137*digit+carry) = diff_of_digitsum (1, 137*1+41) = -15. Which is not -2, so 13 is not a hit.
Let's see 99. 9 * 137 = 1233. The "diff" is 9 - 3 = +6. "Carry" is 123. In the second iteration when we try to add a digit 9 to 9 and make it 99, we have diff_of_digitsum (digit, 137*digit+carry) = diff_of_digitsum (9, 137*9+123) = diff_of_digitsum (9, 1356) = -6 and it neutralizes our surplus 6. So 99 is a hit!
In code, we just need 18 iterations. In the first round, we deal with the single-digit numbers, in the second round the 2-digit numbers, then 3-digit, until we get to 18-digit numbers. Before the iterations, make a table with a structure like this:
table[(diff, carry)] = amount_of_numbers_with_the_same_diff_and_carry
Then the iterations begin. You need to keep updating the table as you go: add new entries when you encounter a new signature, and always update amount_of_numbers_with_the_same_diff_and_carry. First round, the single digits, populate the table:
0: 0 * 137 = 0, diff: 0; carry: 0. table[(0, 0)] = 1
1: 1 * 137 = 137. diff: 1 - 7 = -6; carry: 13. table[(-6, 13)] = 1
2: 2 * 137 = 274. diff: 2 - 4 = -2; carry: 27. table[(-2, 27)] = 1
And so on.
Second iteration, the tens digit: we go over the digits 0-9 as the "m" and plug each one into (1) to see if it produces a result that neutralizes the "diff". If yes, this m turns all those amount_of_numbers_with_the_same_diff_and_carry numbers into hits; hence counting, not showing. Then we calculate the new diff and carry with this digit added (as in the example: 9 has diff 6 and carry 123, but 99 has diff 9 - 6 (the 6 being the last digit of 1356) = 3 and carry 135) and update the table with the new info.
One last comment: be careful with the digit 0. It will appear a lot of times in the iteration; don't over-count it, because 0009 = 009 = 09 = 9. If you use C++, make sure the sum is stored in an unsigned long long or similar, because it gets big. Good luck.
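For concreteness, here is a short Python sketch of the signature dp described above (illustrative, not dgg32's original code; the function name is my own): the state is the (diff, carry) pair, numbers are built least-significant digit first, and at the end the leftover carry is written out in full, so a signature counts as a hit when its diff equals the digit sum of its carry.

from collections import defaultdict

def count_equal_digitsums(num_digits=18, mult=137):
    # table[(diff, carry)] = amount_of_numbers_with_the_same_diff_and_carry
    table = defaultdict(int)
    table[(0, 0)] = 1
    for _ in range(num_digits):
        new_table = defaultdict(int)
        for (diff, carry), count in table.items():
            for d in range(10):
                t = mult * d + carry
                # d adds to digitsum(n); t % 10 is the next digit of 137*n
                new_table[(diff + d - t % 10, t // 10)] += count
        table = new_table

    def digitsum(x):
        return sum(map(int, str(x)))

    # the remaining carry is emitted in full at the end, so a signature
    # is a hit when its accumulated diff equals digitsum(carry)
    return sum(count for (diff, carry), count in table.items()
               if diff == digitsum(carry))

print(count_equal_digitsums())

Note that every n < 10^18 corresponds to exactly one 18-digit string with leading zeros, so each number is counted exactly once and the 0009 = 009 = 09 = 9 over-counting caveat doesn't arise in this formulation.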
You are trying to solve a Project Euler problem by brute force. That may work for the first few problems, but for most problems you need to think of a more sophisticated approach.
Since it is IMHO not OK to give advice specific to this problem, take a look at the general advice in this answer.
This brute-force Python solution for 7 digits ran for 19 seconds for me:
print sum(sum(map(int, str(n))) == sum(map(int, str(137 * n)))
          for n in xrange(0, 10 ** 7, 9))
On the same machine, single core, with the same Python interpreter and the same code, it would take about 3170 years to compute for 18 digits (as the problem asks).
See dgg32's answer for an inspiration of a faster counting.
I have two matrices. Both are filled with zeros and ones. One is big (3000 x 2000 elements), and the other is smaller (20 x 20 elements). I am doing something like:
newMatrix = (size of bigMatrix), filled with zeros
l = (a constant)

for y in xrange(0, len(bigMatrix[0])):
    for x in xrange(0, len(bigMatrix)):
        for b in xrange(0, len(smallMatrix[0])):
            for a in xrange(0, len(smallMatrix)):
                if (bigMatrix[x, y] == smallMatrix[x + a - l, y + b - l]):
                    newMatrix[x, y] = 1
This is painfully slow. Am I doing anything wrong? Is there a smart way to make this work faster?
edit: Basically I am, for each (x,y) in the big matrix, checking all the pixels of both big matrix and the small matrix around (x,y) to see if they are 1. If they are 1, then I set that value on newMatrix. I am doing a sort of collision detection.
I can think of a couple of optimisations there.
As you are using 4 nested Python "for" statements, you are about as slow as you can be.
I can't figure out exactly what you are looking for, but for one thing: if your big matrix's density of "1"s is low, you can certainly use Python's "any" function on bigMatrix's slices to quickly check whether there are any set elements there; you could get a several-fold speed increase:
step = len(smallMatrix[0])
for y in xrange(0, len(bigMatrix[0]), step):
    for x in xrange(0, len(bigMatrix), step):
        if not any(bigMatrix[x: x+step, y: y + step]):
            continue
        (...)
At this point, if you still need to iterate over each element, you can use another pair of indices to walk each position inside the step, but I think you get the idea.
Apart from using vectorized operations like this "any" usage, you could certainly add some control-flow code to break out of the (b, a) loop when the first matching pixel is found, like inserting a "break" statement inside your last "if" and another if..break pair for the "b" loop; see the sketch below.
I really can't figure out exactly what your intent is, so I can't give you more specific code.
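A minimal sketch of those breaks, keeping the question's (unclear) indexing as-is:

# early exit: stop scanning the small window as soon as one
# matching pixel is found for this (x, y)
found = False
for b in xrange(0, len(smallMatrix[0])):
    for a in xrange(0, len(smallMatrix)):
        if bigMatrix[x, y] == smallMatrix[x + a - l, y + b - l]:
            newMatrix[x, y] = 1
            found = True
            break
    if found:
        break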
Your example code makes no sense, but the description of your problem sounds like you are trying to do a 2D convolution of a small bit array over a big bit array. There's a convolve2d function in the scipy.signal package that does exactly this: just call convolve2d(bigMatrix, smallMatrix) to get the result. Unfortunately the scipy implementation doesn't have a special case for boolean arrays, so the full convolution is rather slow. Here's a function that takes advantage of the fact that the arrays contain only ones and zeroes:
import numpy as np

def sparse_convolve_of_bools(a, b):
    if a.size < b.size:
        a, b = b, a
    offsets = zip(*np.nonzero(b))
    n = len(offsets)
    dtype = np.byte if n < 128 else np.short if n < 32768 else np.int
    result = np.zeros(np.array(a.shape) + b.shape - (1,1), dtype=dtype)
    for o in offsets:
        result[o[0]:o[0] + a.shape[0], o[1]:o[1] + a.shape[1]] += a
    return result
On my machine it runs in less than 9 seconds for a 3000x2000 by 20x20 convolution. The running time depends on the number of ones in the smaller array, being 20ms per each nonzero element.
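A usage sketch (the final thresholding is my assumption about how the question's collision test maps onto the convolution result):

# random 0/1 test arrays of the question's sizes
big = (np.random.rand(3000, 2000) > 0.9).astype(np.byte)
small = (np.random.rand(20, 20) > 0.5).astype(np.byte)

counts = sparse_convolve_of_bools(big, small)
# counts[i, j] = number of overlapping (1, 1) pairs for that placement;
# a nonzero entry means at least one "collision" there
hits = counts > 0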
If your bits are really packed 8 per byte / 32 per int, and you can reduce your smallMatrix to 20x16, then try the following, here for a single row.
(Is newMatrix[x, y] = 1 when any bit of the 20x16 window around (x, y) is 1? What are you really looking for?)
python -m timeit -s '
""" slide 16-bit mask across 32-bit pairs bits[j], bits[j+1] """
import numpy as np

bits = np.zeros( 2000 // 16, np.uint16 )  # 2000 bits
bits[::8] = 1
mask = 32+16
nhit = 16 * [0]

def hit16( bits, mask, nhit ):
    """
    slide 16-bit mask across 32-bit pairs bits[j], bits[j+1]
    bits: long np.array( uint16 )
    mask: 16 bits, int
    out: nhit[j] += 1 where pair & mask != 0
    """
    left = bits[0]
    for b in bits[1:]:
        pair = (left << 16) | b
        if pair:  # np idiom for non-0 words ?
            m = mask
            for j in range(16):
                if pair & m:
                    nhit[j] += 1
                    # hitposition = jb*16 + j
                m <<= 1
        left = b
    # if any(nhit): print "hit16:", nhit
' \
'
hit16( bits, mask, nhit )
'
# 15 msec per loop, bits[::4] = 1
# 11 msec per loop, bits[::8] = 1
# mac g4 ppc