I'm trying to calculate an LRC (Longitudinal Redundancy Check) value with Python.
My Python code is pulled from other posts on StackOverflow. It looks like this:
lrc = 0
for b in message:
    lrc ^= b
print(lrc)
If I plug in the value '\x02\x47\x30\x30\x03', I get an LRC value of 70, i.e. 0x46 ('F').
However, I am expecting a value of 68, i.e. 0x44 ('D'), instead.
I have calculated the correct LRC value via C# code:
byte LRC = 0;
for (int i = 1; i < bytes.Length; i++)
{
    LRC ^= bytes[i];
}
return LRC;
If I plug in the same byte array values, I get the expected result of 0x44.
Functionally, the two pieces of code look very similar, so I'm wondering what the difference between them is. Is it my input value? Should I format my string differently?
Arrays are zero-indexed in C#, so by starting the iteration at int i = 1; you are skipping the first byte. The Python result is the correct one.
Fixed reference code:
byte LRC = 0;
for (int i = 0; i < bytes.Length; i++)
{
    LRC ^= bytes[i];
}
return LRC;
To avoid this kind of mistake, you should consider using the foreach syntactic sugar (although I'm not familiar with C# practices).
Edit:
To skip first byte in Python simply use slice syntax:
lrc = 0
for b in message[1:]:
    lrc ^= b
print(lrc)
So I figured out the answer to my question, thanks to Nsh for his insight. I found a way to make the algorithm work: I just had to skip the first byte in the for loop. There's probably a better way to do this, but it was quick and it's readable.
def calcLRC(input):
    input = input.decode('hex')
    lrc = 0
    i = 0
    message = bytearray(input)
    for b in message:
        if i == 0:
            pass
        else:
            lrc ^= b
        i += 1
    return lrc
It now returns the expected 0x44 in my use case.
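For what it's worth, here is a more compact Python 3 sketch of the same computation (my own rewrite, not from the original post); it assumes the input is already a bytes object and skips the leading byte with a slice instead of a loop counter:

from functools import reduce

def calc_lrc(frame):
    # XOR together every byte after the first (e.g. skipping the STX byte)
    return reduce(lambda acc, b: acc ^ b, frame[1:], 0)

print(hex(calc_lrc(bytes.fromhex('0247303003'))))  # prints 0x44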
I was participating in a competitive programming contest and faced a question where, out of four test cases, my answer was correct on 3 but exceeded the time limit on the 4th.
I tried to get better results by converting my code from Python to C++ (I know the time complexity stays the same, but it was worth a shot :)).
Following is the question:
A string is said to be using strong language if it contains at least K consecutive characters '*'.
You are given a string S with length N. Determine whether it uses strong language or not.
Input:
The first line of the input contains a single integer T denoting the number of test cases. The description of T test cases follows.
The first line of each test case contains two space-separated integers N and K.
The second line contains a single string S with length N.
Output:
Print a single line containing the string "YES" if the string contains strong language or "NO" if it does not.
My python approach:
for _ in range(int(input())):
    k = int(input().split()[1])
    s = input()
    s2 = "".join(["*"] * k)
    if len(s.split(s2)) > 1:
        print("YES")
    else:
        print("NO")
My converted C++ code (I converted it myself):
#include <iostream>
#include <string>
using namespace std;

int main() {
    int t;
    std::cin >> t;
    for (int i = 0; i < t; i++) {
        int n, k;
        std::cin >> n >> k;
        string str;
        cin >> str;
        string str2(k, '*');
        size_t found = str.find(str2);
        if (found != string::npos) {
            std::cout << "YES" << std::endl;
        } else {
            std::cout << "NO" << std::endl;
        }
    }
    return 0;
}
Please guide me on how I can reduce the time complexity.
Other approaches I have seen suggested: using the find() function instead of split, or using a plain for loop.
Edit:
Sample Input :
2
5 1
abd
5 2
*i**j
Output :
NO
YES
The bounds you posted suggest that linear time is OK in Python. You can simply keep a running count of how many asterisks you have seen in a row.
T = int(input())
for _ in range(T):
    n, k = map(int, input().split())
    s = input()
    count, ans = 0, False
    for c in s:
        if c == "*":
            count += 1
        else:
            count = 0
        ans = ans or count >= k
    if ans:
        print("YES")
    else:
        print("NO")
I can also tell you why you are TLE'ing. Consider the case where n = 1e6, k = 5e5, and s is a string whose first k-1 characters are asterisks. The find method you are using checks every position for a match against the str2 you created, and each check can compare up to k characters before failing. That is O(n^2) time in this case, giving you a TLE.
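If you want something more compact than the running counter, here is a minimal alternative sketch (my own, built on the same linear-time idea) using itertools.groupby to look for a long enough run of asterisks:

import itertools

def has_strong_language(s, k):
    # group consecutive equal characters; look for a '*' run of length >= k
    return any(ch == "*" and sum(1 for _ in grp) >= k
               for ch, grp in itertools.groupby(s))

print("YES" if has_strong_language("*i**j", 2) else "NO")  # YES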
I have been looking for a simple Python program that can generate a CRC-32 sum. It is for an STM32, and I can't find a good example that is adjustable.
To get the right settings for my calculation I used the following site:
http://www.sunshine2k.de/coding/javascript/crc/crc_js.html
The settings would be the following:
Polynomial: 0x4C11DB7,
Initial Value: 0xFFFFFFFF,
no XOR value (or 0x00), and neither the input nor the result is reflected.
Does someone know where I could get a simple adjustable algorithm, or where I can learn how to write one?
Edit:
I use this function to create the table
def create_table():
    a = []
    for i in range(256):
        k = i
        for j in range(8):
            if k & 1:
                k ^= 0x4C11DB7
            k >>= 1
        a.append(k)
    return a
and the following for generating the CRC sum:
def crc32(bytestream):
    crc_table = create_table()
    crc32 = 0xffffffff
    for byte in range(int(len(bytestream))):
        lookup_index = (crc32 ^ byte) & 0xff
        crc32 = (crc32 >> 8) ^ crc_table[lookup_index]
    return crc32
and call the function with this:
print(hex(crc32(b"1205")))
The result is 0x9f8e7b8c, but the website gives me 0xA7D10A0A.
Can someone help me?
First off, what you have is for a reflected CRC, not a non-reflected CRC. There is also an error in your table construction. This:
if k & 1:
    k ^= 0x4C11DB7
k >>= 1
is wrong. The exclusive-or must be done after the shift. So it would need to be (for the reflected case):
k = (k >> 1) ^ 0xedb88320 if k & 1 else k >> 1
Note that the polynomial also needs to be reflected in this case.
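For illustration (my own sketch, not part of the original answer), "reflecting" just means reversing the bit order; the reflected polynomial 0xedb88320 used above is exactly the bit-reversed 0x04c11db7:

def reflect32(x):
    # reverse the bit order of a 32-bit value
    r = 0
    for _ in range(32):
        r = (r << 1) | (x & 1)
        x >>= 1
    return r

print(hex(reflect32(0x04c11db7)))  # 0xedb88320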
Another error in your code is using range to make the integers 0, 1, ..., and using those instead of the actual data bytes to compute the CRC on! What you want for your for loop is simply:
for byte in bytestream:
The whole point of using a table is to make the CRC calculation faster. You don't want to regenerate the table every time you do a CRC. You want to generate the table once when your program starts, and then use it multiple times. Or you can generate the table separately from your program, and then put the table itself in your program. That's what's usually done.
Anyway, to do the non-reflected case, you need to flip things around. So to make the table:
def create_table():
    a = []
    for i in range(256):
        k = i << 24
        for _ in range(8):
            k = (k << 1) ^ 0x4c11db7 if k & 0x80000000 else k << 1
        a.append(k & 0xffffffff)
    return a
To use the table:
def crc32(bytestream):
crc_table = create_table()
crc = 0xffffffff
for byte in bytestream:
lookup_index = ((crc >> 24) ^ byte) & 0xff
crc = ((crc & 0xffffff) << 8) ^ crc_table[lookup_index]
return crc
Now it correctly implements your specification, which happens to be the MPEG-2 32-bit CRC specification (from Greg Cook's CRC catalogue):
width=32 poly=0x04c11db7 init=0xffffffff refin=false refout=false xorout=0x00000000 check=0x0376e6e7 residue=0x00000000 name="CRC-32/MPEG-2"
For the code above, if I do:
print(hex(crc32(b'123456789')))
I get 0x376e6e7, which matches the check value in the catalog.
Again, you need to take the create_table() out of the crc32() routine and do it somewhere else, once.
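That restructuring might look like this (a sketch with names of my own choosing):

CRC32_TABLE = create_table()  # built once, at import time

def crc32_mpeg2(bytestream, table=CRC32_TABLE):
    crc = 0xffffffff
    for byte in bytestream:
        crc = ((crc & 0xffffff) << 8) ^ table[((crc >> 24) ^ byte) & 0xff]
    return crc

print(hex(crc32_mpeg2(b'123456789')))  # 0x376e6e7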
I tried rewriting the small C program below in Python, but I am getting different outputs.
C version:
#include <stdio.h>

int main()
{
    unsigned char data = 0x00;
    unsigned char i;
    unsigned char bit = 0x01;
    unsigned char parity = 1;
    unsigned char value = 0x1c;
    for (i = 0; i < 8; i++)
    {
        data = data | bit;
        bit = bit << 1;
        parity = parity ^ (data & 0x01);
    }
    printf("data: %d bit: %d parity: %d\n", data, bit, parity);
    return 0;
}
Python version:
data = 0
bit = 1
parity = 1
value = int('1c', 16)
for i in range(8):
    data = data | bit
    bit = bit << 1
    parity = parity ^ (data & 1)
print('data: {0} bit: {1} parity: {2}'.format(data, bit, parity))
And the outputs are:
C version
> $ ./test
data: 255 bit: 0 parity: 1
Python version
> $ python3 test.py
data: 255 bit: 256 parity: 1
What am I missing on Python bitwise operations?
As you can see, the only difference in the output is the value of the variable bit.
In your C program, the variable bit is declared as unsigned char. That means it takes on only the values 0 through 255. The last operation on bit in your code is
bit = bit << 1
Before the last time that line is executed, bit is 128. After that line, it "tries" to become 256 but that does not fit into an unsigned char. So overflow happens, not flagged by your program, and bit becomes 0.
In the Python program, the variable bit is simply an integer, int, which has no maximum size. So there is no overflow, and bit does become 256.
There are several ways to overcome that difference in Python. You could force bit to stay in the desired range by instead using
bit = (bit << 1) % 256
or perhaps
bit = (bit << 1) & 255
You could instead make bit the equivalent of an unsigned char variable. As a comment says, you could use the ctypes module, I believe. You could also use numpy (I'm more familiar with that).
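Putting the & 255 mask into the original loop gives a minimal sketch that reproduces the C output exactly:

data = 0
bit = 1
parity = 1
for i in range(8):
    data = data | bit
    bit = (bit << 1) & 255  # emulate 8-bit unsigned char wraparound
    parity = parity ^ (data & 1)
print('data: {0} bit: {1} parity: {2}'.format(data, bit, parity))
# prints: data: 255 bit: 0 parity: 1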
I'm trying to run some C code in Python using inline from scipy.weave.
Let's say we have two double arrays and one double value; I wish to add each element of the first array to the corresponding element of the second array, plus the value.
The C code:
double* first;
double* second;
double val;
int length;
int i;
for (i = 0; i < length; i++) {
    second[i] = second[i] + first[i] + val;
}
Then I wish to use the "second" array in my Python code again.
Given the following Python code:
import numpy
from scipy import weave

first = numpy.zeros(10)   # first double array
second = numpy.ones(10)   # second double array
val = 1.0
code = """
the c code
"""
second = weave.inline(code, [first, second, val, 10])
Now I am not sure if this is the correct way of passing the arrays in and getting them back out, or how to access them within the C code.
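For what it's worth, a sketch of how I believe weave.inline is meant to be called: it takes the names of local variables as strings, and NumPy arrays are passed by reference, so writes to second inside the C code change the Python array in place and nothing needs to be returned. Note that scipy.weave is Python-2-only and has long since been removed from SciPy, so treat this purely as illustration:

import numpy
from scipy import weave

first = numpy.zeros(10)
second = numpy.ones(10)
val = 1.0
length = first.shape[0]

code = """
for (int i = 0; i < length; i++) {
    second[i] = second[i] + first[i] + val;
}
"""

# pass the *names* of the variables; 'second' is modified in place
weave.inline(code, ['first', 'second', 'val', 'length'])
print(second)  # ten 2.0 values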
The following code is an algorithm to determine the number of integer triangles, with their longest side being smaller than or equal to MAX, that have an integer median. The Python version works but is too slow for bigger N, while the C++ version is a lot faster but doesn't give the right result.
When MAX is 10, C++ and Python both return 3.
When MAX is 100, Python returns 835 and C++ returns 836.
When MAX is 200, Python returns 4088 and C++ returns 4102.
When MAX is 500, Python returns 32251 and C++ returns 32296.
When MAX is 1000, Python returns 149869 and C++ returns 150002.
Here's the C++ version:
#include <cstdio>
#include <math.h>

const int MAX = 1000;

int main()
{
    long long int x = 0;
    for (int b = MAX; b > 4; b--)
    {
        printf("%d\n", b);
        for (int a = b; a > 4; a -= 2) {
            for (int c = floor(b/2); c < floor(MAX/2); c += 1)
            {
                if (a+b > 2*c) {
                    int d = 2*(pow(a,2) + pow(b,2) - 2*pow(c,2));
                    if (sqrt(d)/2 == floor(sqrt(d)/2))
                        x += 1;
                }
            }
        }
    }
    printf("Done: ");
    printf("%lld\n", x);
}
Here's the original Python version:
import math

def sumofSquares(n):
    f = 0
    for b in range(n, 4, -1):
        print(b)
        for a in range(b, 4, -2):
            for C in range(math.ceil(b/2), n//2 + 1):
                if a + b > 2*C:
                    D = 2*(a**2 + b**2 - 2*C**2)
                    if (math.sqrt(D)/2).is_integer():
                        f += 1
    return f

a = int(input())
print(sumofSquares(a))
print('Done')
I'm not too familiar with C++, so I have no idea what could be happening to cause this (maybe an overflow error?).
Of course, any optimizations for the algorithm are more than welcome!
The issue is that the ranges for your c (C in Python) variables do not match. To make them equivalent to your existing C++ range, you can change your Python loop to:
for C in range(int(math.floor(b/2)), int(math.floor(n/2))):
    ...
To make them equivalent to your existing python range, you can change your C++ loop to:
for (int c = ceil(b/2.0); c < MAX/2 + 1; c++) {
    ...
}
Depending on which loop is originally correct, this will make the results match.
It seems some trouble could also be here:
(sqrt(d)/2 == floor(sqrt(d)/2))
Computing the square root in floating point and comparing for exact equality can misclassify values once d is large enough for rounding error to matter.
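If that is a concern, the condition can be tested in exact integer arithmetic instead. A minimal sketch, assuming Python 3.8+ for math.isqrt: sqrt(D)/2 is an integer exactly when D == (2*m)**2 for some integer m.

import math

def sqrt_half_is_integer(D):
    # sqrt(D)/2 is an integer exactly when D == (2*m)**2
    if D < 0 or D % 4 != 0:
        return False
    m = math.isqrt(D // 4)  # exact integer square root (Python 3.8+)
    return 4 * m * m == D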