Does Python SHA-1 take an integer? - python

How do I pass an integer to hashlib.sha1()?
Please see the code below, in which I take an IP address as a string and convert it to an integer; hashlib.sha1() then doesn't accept the integer...
import hashlib
import socket
import struct

class blommy(object):
    def __init__(self):
        self.bitarray = [0] * 2048

    def hashes(self, ip):
        # convert decimal dotted quad string to long integer
        intip = struct.unpack('>L', socket.inet_aton(ip))[0]
        index = [0, 1]
        hbyte = hashlib.sha1(intip)  # how to take sha1 of integer??
        index[0] = ord(hbyte[0]) | ord(hbyte[1]) << 8
        index[1] = ord(hbyte[2]) | ord(hbyte[3]) << 8
I need to convert this C code to Python. Please advise; some of the code is written above. If I take the IP as an int I cannot compute the SHA-1, and even if I convert the IP using socket, sha1 doesn't accept it. Any suggestions? See the comments below.
// fixed parameters
k = 2
m = 256*8

// the filter
byte[m/8] bloom ##

function insertIP(byte[] ip) {
    byte[20] hash = sha1(ip)
    int index1 = hash[0] | hash[1] << 8  # how to in python?
    int index2 = hash[2] | hash[3] << 8
    // truncate index to m (11 bits required)
    index1 %= m ## ?
    index2 %= m ## ?
    // set bits at index1 and index2
    bloom[index1 / 8] |= 0x01 << index1 % 8 ## ??
    bloom[index2 / 8] |= 0x01 << index2 % 8 ## ??
}

// insert IP 192.168.1.1 into the filter:
insertIP(byte[4] {192,168,1,1})

The answer to your question is no: you can calculate the hash of strings (bytes), but not of integers. Try something like:

hashlib.sha1(str(1234)).digest()           # Python 2
hashlib.sha1(str(1234).encode()).digest()  # Python 3, which requires bytes

to get the hash of your integer as a string.
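A minimal Python 3 sketch of the C insertIP above (the name insert_ip and the module-level bloom array are my own choices; in Python 3, sha1() wants bytes, and indexing digest() already yields ints, so no ord() is needed):

import hashlib
import socket

M = 256 * 8                # filter size in bits, as in the C code
bloom = bytearray(M // 8)  # the filter

def insert_ip(ip_bytes):
    # hash the 4 packed address bytes directly; an integer would
    # first need int.to_bytes(4, 'big') to become hashable bytes
    digest = hashlib.sha1(ip_bytes).digest()
    index1 = (digest[0] | digest[1] << 8) % M
    index2 = (digest[2] | digest[3] << 8) % M
    # set bits at index1 and index2
    bloom[index1 // 8] |= 1 << (index1 % 8)
    bloom[index2 // 8] |= 1 << (index2 % 8)

# insert IP 192.168.1.1 into the filter:
insert_ip(socket.inet_aton('192.168.1.1'))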

Related

Converting struct format string to range of allowable int values

The Python struct library has a bunch of format strings, each corresponding to a C type ("h": int16, "H": uint16).
Is there a simple way to go from a format string (e.g. "h", "H", etc.) to the range of possible values (e.g. -32768 to 32767, 0 to 65535, etc.)?
I see the struct library provides calcsize, but what I really want is something like calcrange.
Is there a built-in solution, or an elegant solution I am neglecting? I am also open to third party libraries.
I have made a DIY calcrange below, but it only covers a limited number of possible format strings and makes some non-generalizable assumptions.
from struct import calcsize
from typing import Tuple

def calcrange(fmt: str) -> Tuple[int, int]:
    """Calculate the min and max possible value of a given struct format string."""
    size: int = calcsize(fmt)
    unsigned_max = int("0x" + "FF" * size, 16)
    if fmt.islower():
        # Signed case
        min_ = -1 * int("0x80" + "00" * (size - 1), 16)
        return min_, unsigned_max + min_
    # Unsigned case
    return 0, unsigned_max
The math can be simplified: if b is the bit width, then unsigned values run from 0 to 2^b - 1 and signed values from -2^(b-1) to 2^(b-1) - 1. It only works for the integer types.
Here's the simplified version:
import struct

def calcrange(intcode):
    b = struct.calcsize(intcode) * 8
    if intcode.islower():
        return -2**(b-1), 2**(b-1)-1
    else:
        return 0, 2**b-1

for code in 'bBhHiIlLqQnN':
    s, e = calcrange(code)
    print(f'{code} {s:26,} to {e:26,}')
Output:
b -128 to 127
B 0 to 255
h -32,768 to 32,767
H 0 to 65,535
i -2,147,483,648 to 2,147,483,647
I 0 to 4,294,967,295
l -2,147,483,648 to 2,147,483,647
L 0 to 4,294,967,295
q -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
Q 0 to 18,446,744,073,709,551,615
n -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
N 0 to 18,446,744,073,709,551,615

Add mask to a byte-array knowing bits length and starting position

I have to apply a bit mask to a CAN-bus payload message (8 bytes) to filter out a single signal (there are multiple signals in a message) in Python 3, and my inputs are:
The length of the signal I want to filter (think of the mask as a run of '1's).
The starting bit position of the signal.
The problem is that the signal can start in the middle of a byte and occupy more than 1 byte.
For example I have to filter a signal with starting bit position = 50 and length = 10
The mask will be byte 6 = (00111111) and byte 7 = (11000000). All other bytes set to 0.
I've tried building an array of bytes with 1s and ORing (|) it with an empty 8-byte array to get the mask, and also creating the 8-byte array directly, but I can't work out how to shift to the correct starting position.
I tried the bitstring module and bytearray but couldn't find a good solution.
Could anyone help?
Thank you very much.
Edit: adding my code, which does not work when the signal starts in the middle of a byte:

my_mask_byte = [0, 0, 0, 0, 0, 0, 0, 0]
message_bit_pos = 50
message_signal_length = 10
byte_pos = message_bit_pos // 8
bit_pos = message_bit_pos % 8
for i in range(0, message_signal_length):
    if i < 8:
        my_mask_byte[byte_pos + i // 8] |= 1 << (i + bit_pos)
    else:
        my_mask_byte[byte_pos + i // 8] |= 1 << (i - 8)
for byte in my_mask_byte:
    print(bin(byte))
The expected mask is byte 6 = (00111111) and byte 7 = (11110000); you missed 2 bits, since the length is 10.
You can achieve this easily with numpy:
import numpy as np

message_bit_pos = 50
message_signal_length = 10

mask = np.uint64(0)
while message_signal_length > 0:
    mask |= np.uint64(1 << (64 - message_bit_pos - message_signal_length))
    message_signal_length -= 1
print(f'mask: 0b{mask:064b}')

n = np.uint64(0b0000000000000000011000000000000000000000000000000011111111000000)
print(f'n: 0b{n:064b}')
n &= mask
print(f'n&m: 0b{n:064b}')
output:
mask: 0b0000000000000000000000000000000000000000000000000011111111110000
n: 0b0000000000000000011000000000000000000000000000000011111111000000
n&m: 0b0000000000000000000000000000000000000000000000000011111111000000
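If you'd rather stay in the standard library, here is a minimal sketch that builds the 8-byte mask directly (build_mask is a hypothetical helper name; it assumes MSB-first bit numbering within each byte, which is what the expected output above uses):

def build_mask(start_bit, length, n_bytes=8):
    # set `length` consecutive bits starting at `start_bit`,
    # where bit 0 is the most significant bit of byte 0
    mask = bytearray(n_bytes)
    for bit in range(start_bit, start_bit + length):
        mask[bit // 8] |= 1 << (7 - bit % 8)
    return mask

for byte in build_mask(50, 10):
    print(f'{byte:08b}')  # bytes 6 and 7 print 00111111 and 11110000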

Getting wrong values when I stitch 2 shorts back into an unsigned long

I am doing BLE communications with an Arduino Board and an FPGA.
I have a requirement which restrains me from changing the packet structure (the packet structure is basically short data types). Thus, to send a timestamp (from millis()) over, I have to split an unsigned long into 2 shorts on the Arduino side and stitch it back up on the FPGA side (Python).
This is the implementation which I have:
// Arduino code in c++
unsigned long t = millis();
// bitmask to get bits 1-16
short LSB = (short) (t & 0x0000FFFF);
// bitshift to get bits 17-32
short MSB = (short) (t >> 16);
// I then send the packet with MSB and LSB values
# FPGA python code to stitch it back up (I receive the packet and extract the MSB and LSB)
MSB = data[3]
LSB = data[4]
data = MSB << 16 | LSB
Now the issue is that my output for data on the FPGA side is sometimes negative, which tells me that I must have missed something somewhere, as timestamps are not negative. Does anyone know why?
When I transfer other data in the packet (i.e. other short values and not the timestamp), I am able to receive them as expected, so the problem most probably lies in the conversion that I did and not the sending/receiving of data.
short defaults to signed, and for a negative number >> keeps the sign by shifting in one-bits from the left. See e.g. Microsoft.
From my earlier comment:
In Python, avoid attempting that yourself (by the way, short from a C perspective has no fixed size; you always have to look into the compiler manual or limits.h) and use the struct module instead.
You probably need/want to first convert the long to network byte order using htonl.
As guidot reminded us, "short" is signed, and as the data are transferred to Python the code has an issue:
for t = 0x00018000, the most significant short is MSB = 1 and the least significant short is LSB = -32768 (0x8000 in C++ and -0x8000 in Python), so the Python expression
time = MSB << 16 | LSB
returns time = -32768 (see the start of the Python code below).
So we have an incorrect sign and we are losing MSB (any value, not only the 1 in our example).
MSB is lost because LSB, being negative, is sign-extended with 1 bits above bit 15; those 1 bits already cover every position that MSB << 16 could set, so the "|" changes nothing and the expression simply returns LSB.
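A two-line illustration of that sign-extension behavior with Python's arbitrary-precision integers:

MSB, LSB = 1, -32768
print(hex(LSB & 0xFFFFFFFF))  # 0xffff8000: bits above 15 are already 1
print(MSB << 16 | LSB)        # -32768: OR-ing MSB << 16 adds nothing new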
A straightforward fix (1.1 Fix) is to declare MSB and LSB as unsigned short. This alone could be enough, without any changes to the Python code.
To avoid bit operations altogether we could use a union, as in 1.2 Fix.
Without access to the C++ code we can fix it in Python instead, either by converting the signed LSB and MSB (2.1 Fix) or by using a ctypes.Union (similar to the C++ union, 2.2 Fix).
C++
#include <iostream>
using namespace std;

int main () {
    unsigned long t = 0x00018000;
    short LSB = (short)(t & 0x0000FFFF);
    short MSB = (short)(t >> 16);
    cout << hex << "t = " << t << endl;
    cout << dec << "LSB = " << LSB << " MSB = " << MSB << endl;

    // 1.1 Fix: use unsigned short instead of short
    unsigned short fixedLSB = (unsigned short)(t & 0x0000FFFF);
    unsigned short fixedMSB = (unsigned short)(t >> 16);
    cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;

    // 1.2 Fix: use union
    union {
        unsigned long t2;
        unsigned short unsignedShortArray[2];
    };
    t2 = 0x00018000;
    fixedLSB = unsignedShortArray[0];
    fixedMSB = unsignedShortArray[1];
    cout << "fixedLSB = " << fixedLSB << " fixedMSB = " << fixedMSB << endl;
}
Output
t = 18000
LSB = -32768 MSB = 1
fixedLSB = 32768 fixedMSB = 1
fixedLSB = 32768 fixedMSB = 1
Python
DATA = [0, 0, 0, 1, -32768]
MSB = DATA[3]
LSB = DATA[4]
data = MSB << 16 | LSB
print(f"MSB = {MSB} ({hex(MSB)})")
print(f"LSB = {LSB} ({hex(LSB)})")
print(f"data = {data} ({hex(data)})")
time = MSB << 16 | LSB
print(f"time = {time} ({hex(time)})")

# 2.1 Fix
def twosComplement(short):
    if short >= 0:
        return short
    return 0x10000 + short

fixedTime = twosComplement(MSB) << 16 | twosComplement(LSB)

# 2.2 Fix
import ctypes

class UnsignedIntUnion(ctypes.Union):
    _fields_ = [('unsignedInt', ctypes.c_uint),
                ('ushortArray', ctypes.c_ushort * 2),
                ('shortArray', ctypes.c_short * 2)]

unsignedIntUnion = UnsignedIntUnion(shortArray=(LSB, MSB))
print("unsignedIntUnion")
print("unsignedInt = ", hex(unsignedIntUnion.unsignedInt))
print("ushortArray[1] = ", hex(unsignedIntUnion.ushortArray[1]))
print("ushortArray[0] = ", hex(unsignedIntUnion.ushortArray[0]))
print("shortArray[1] = ", hex(unsignedIntUnion.shortArray[1]))
print("shortArray[0] = ", hex(unsignedIntUnion.shortArray[0]))
unsignedIntUnion.unsignedInt = twosComplement(unsignedIntUnion.shortArray[1]) << 16 | twosComplement(unsignedIntUnion.shortArray[0])

def toUInt(msShort: int, lsShort: int):
    return UnsignedIntUnion(ushortArray=(lsShort, msShort)).unsignedInt

fixedTime = toUInt(MSB, LSB)
print("fixedTime = ", hex(fixedTime))
print()
Output
MSB = 1 (0x1)
LSB = -32768 (-0x8000)
data = -32768 (-0x8000)
time = -32768 (-0x8000)
unsignedIntUnion
unsignedInt = 0x18000
ushortArray[1] = 0x1
ushortArray[0] = 0x8000
shortArray[1] = 0x1
shortArray[0] = -0x8000
fixedTime = 0x18000
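As an aside, the struct module suggested earlier can do the whole round trip without manual sign handling. A minimal sketch, assuming the two shorts arrive as Python ints in MSB, LSB order as in the example:

import struct

MSB, LSB = 1, -32768                     # the example values
raw = struct.pack('>hh', MSB, LSB)       # pack two signed big-endian shorts
(timestamp,) = struct.unpack('>I', raw)  # reinterpret as one unsigned 32-bit int
print(hex(timestamp))                    # 0x18000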

Python's equivalent of Perl's vec() function

I am new to Python. In Perl, to set specific bits of a scalar variable (integer), I can use vec() as below.
#!/usr/bin/perl -w
$vec = '';
vec($vec, 3, 4) = 1; # bits 0 to 3
vec($vec, 7, 4) = 10; # bits 4 to 7
vec($vec, 11, 4) = 3; # bits 8 to 11
vec($vec, 15, 4) = 15; # bits 12 to 15
print("vec() Has a created a string of nybbles,
in hex: ", unpack("h*", $vec), "\n");
Output:
vec() Has a created a string of nybbles,
in hex: 0001000a0003000f
I was wondering how to achieve the same in Python, without having to write bit manipulation code and using struct.pack manually?
I'm not sure how the vec function works in Perl (I haven't worked with it). However, according to the output you've shown, the following Python code works. I don't see the significance of the second argument. Call the vec function like this: vec(value, size). Every time you do so, the output string is appended to the global final_vec variable.
final_vec = ''

def vec(value, size):
    global final_vec
    prefix = ''
    str_hex = str(hex(value)).replace('0x', '')
    str_hex_size = len(str_hex)
    for i in range(0, size - str_hex_size):
        prefix = prefix + '0'
    str_hex = prefix + str_hex
    final_vec = final_vec + str_hex
    return 0

vec(1, 4)
vec(10, 4)
vec(3, 4)
vec(15, 4)
print(final_vec)
If you really want to create a hex string from nibbles, you could solve it this way:

nibbles = [1, 10, 3, 15]
hex_str = '0x' + "".join(["%04x" % x for x in nibbles])  # avoid shadowing the built-in hex()
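If you are content to lay out the 16-bit fields yourself, struct can also reproduce the exact string printed by the Perl snippet above; note this is a sketch matching this example's layout, not general vec() semantics:

import struct

# four big-endian unsigned 16-bit fields, one per value
packed = struct.pack('>4H', 1, 10, 3, 15)
print(packed.hex())  # 0001000a0003000f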

Hex string to signed int in Python

How do I convert a hex string to a signed int in Python 3?
The best I can come up with is
h = '9DA92DAB'
b = bytes(h, 'utf-8')
ba = binascii.a2b_hex(b)
print(int.from_bytes(ba, byteorder='big', signed=True))
Is there a simpler way? Unsigned is so much easier: int(h, 16)
BTW, the origin of the question is itunes persistent id - music library xml version and iTunes hex version
In n-bit two's complement, the bits have these values:
bit 0 = 2^0
bit 1 = 2^1
...
bit n-2 = 2^(n-2)
bit n-1 = -2^(n-1)
But bit n-1 has value 2^(n-1) when unsigned, so the unsigned reading is 2^n too high. Subtract 2^n if bit n-1 is set:
def twos_complement(hexstr, bits):
    value = int(hexstr, 16)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value
print(twos_complement('FFFE', 16))
print(twos_complement('7FFF', 16))
print(twos_complement('7F', 8))
print(twos_complement('FF', 8))
Output:
-2
32767
127
-1
import struct
For Python 3 (with help from the comments):
h = '9DA92DAB'
struct.unpack('>i', bytes.fromhex(h))
For Python 2:
h = '9DA92DAB'
struct.unpack('>i', h.decode('hex'))
or, if it is little-endian:
h = '9DA92DAB'
struct.unpack('<i', h.decode('hex'))
Here's a general function you can use for hex strings of any size:

import math

# hex string to signed integer
# (expects a '0x' prefix, hence the len(val) - 2)
def htosi(val):
    uintval = int(val, 16)
    bits = 4 * (len(val) - 2)
    if uintval >= math.pow(2, bits - 1):
        uintval = int(0 - (math.pow(2, bits) - uintval))
    return uintval
And to use it:
h = str(hex(-5))
h2 = str(hex(-13589))
x = htosi(h)
x2 = htosi(h2)
This works for 16-bit signed ints; you can extend it for 32-bit ints. It uses the basic definition of two's-complement signed numbers. Also note that XOR with all-ones (here 0xffff) is the same as a binary NOT.
# convert to unsigned
x = int('ffbf', 16)  # example (-65)
# check sign bit
if (x & 0x8000) == 0x8000:
    # if set, invert and add one to get the negative value, then add the negative sign
    x = -((x ^ 0xffff) + 1)
It's a very late answer, but here's a function to do the above. It extends to whatever length you provide. Credit for portions of this goes to another SO answer (I lost the link, so please provide it if you find it).
def hex_to_signed(source):
    """Convert a hex string to a signed integer.

    This assumes that source is the proper length, and the sign bit
    is the first bit in the first byte of the correct length.

    hex_to_signed("F") should return -1.
    hex_to_signed("0F") should return 15.
    """
    if not isinstance(source, str):
        raise ValueError("string type required")
    if 0 == len(source):
        raise ValueError("string is empty")
    sign_bit_mask = 1 << (len(source)*4 - 1)
    other_bits_mask = sign_bit_mask - 1
    value = int(source, 16)
    return -(value & sign_bit_mask) | (value & other_bits_mask)
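Applied to the value from the original question, this agrees with the struct-based answer above:

import struct

h = '9DA92DAB'
print(hex_to_signed(h))                          # -1649857109
print(struct.unpack('>i', bytes.fromhex(h))[0])  # -1649857109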
