AES encryption between iOS and Python
I have functions to encrypt/decrypt using AES (128 and 256) on both iOS (CCCrypt) and Python (pycryptodome). All test cases work on each platform, but when I take an AES key and encrypted string from iOS to Python, the decryption fails. I have looked extensively and tried various use cases to no avail.
I've created a simple test case here with an iOS encryption and a Python decryption, in the hopes that someone can tell me what I am doing differently on the two platforms.
iOS code
Test Case
NSString *test_aes = @"XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP";
NSString *test_string = @"This is a test string";
NSData *clearPayload = [test_string dataUsingEncoding:NSUTF8StringEncoding];
NSData *encPayload = nil;
char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
bzero( keyPtr, sizeof( keyPtr ) ); // fill with zeroes (for padding)
// fetch key data
[test_aes getCString:keyPtr maxLength:sizeof( keyPtr ) encoding:NSUTF8StringEncoding];
NSUInteger dataLength = clearPayload.length;
size_t bufferSize = dataLength + kCCKeySizeAES256;
void *buffer = malloc( bufferSize );
size_t numBytesEncrypted = 0;
CCCryptorStatus cryptStatus = CCCrypt( kCCEncrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
keyPtr, kCCKeySizeAES256,
NULL /* initialization vector (optional) */,
[clearPayload bytes], dataLength, /* input */
buffer, bufferSize, /* output */
&numBytesEncrypted );
NSString *encString = @"Error";
if( cryptStatus == kCCSuccess )
{
//the returned NSData takes ownership of the buffer and will free it on deallocation
encPayload = [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
encString = [encPayload base64EncodedStringWithOptions:NSDataBase64EncodingEndLineWithLineFeed];
}
//free( buffer ); //free the buffer
NSLog(@"Src = %@ AES = %@ String = %@", test_string, test_aes, encString);
encPayload = [[NSData alloc] initWithBase64EncodedString:encString options:NSDataBase64DecodingIgnoreUnknownCharacters];
clearPayload = nil;
char keyPtr2[kCCKeySizeAES256+1]; // room for terminator (unused)
bzero( keyPtr2, sizeof( keyPtr2 ) ); // fill with zeroes (for padding)
// fetch key data
[test_aes getCString:keyPtr2 maxLength:sizeof( keyPtr2 ) encoding:NSUTF8StringEncoding];
NSUInteger dataLength2 = [encPayload length];
//See the doc: For block ciphers, the output size will always be less than or
//equal to the input size plus the size of one block.
//That's why we need to add at least the size of one block here
//(kCCKeySizeAES256 = 32 bytes is used, which more than covers it)
size_t bufferSize2 = dataLength2 + kCCKeySizeAES256;
void *buffer2 = malloc( bufferSize2 );
size_t numBytesDecrypted = 0;
CCCryptorStatus cryptStatus2 = CCCrypt( kCCDecrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
keyPtr2, kCCKeySizeAES256,
NULL /* initialization vector (optional) */,
[encPayload bytes], dataLength2, /* input */
buffer2, bufferSize2, /* output */
&numBytesDecrypted );
NSString *clearString = @"Error";
if( cryptStatus2 == kCCSuccess )
{
//the returned NSData takes ownership of the buffer and will free it on deallocation
clearPayload = [NSData dataWithBytesNoCopy:buffer2 length:numBytesDecrypted];
clearString = [[NSString alloc] initWithData:clearPayload encoding:NSUTF8StringEncoding];
}
NSLog(@"Res = %@", clearString);
The encryption and decryption in this code work fine, and the output is:
Src = This is a test string
AES = XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP
String = hUbjWyXX4mB01gI0RJhYQRD0iAjQnkGTpsnKcmDpvaQ=
Res = This is a test string
When I take the encoded string and AES key to Python to test with this code:
from base64 import b64decode
from Crypto.Cipher import AES

key = "XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP"
data = "hUbjWyXX4mB01gI0RJhYQRD0iAjQnkGTpsnKcmDpvaQ="
usekey = key
useData = data
if isinstance(key, str):
    usekey = key.encode('utf-8')
cipher = AES.new(usekey, AES.MODE_GCM, nonce=self.nonce)  # self.nonce is b'\x00' * 16 here
print("nonce", cipher.nonce)
if isinstance(data, str):
    useData = data.encode('utf-8')
useData = b64decode(useData)
puseData = useData  # unpad(useData, 32)
print("decrypt:In bytes=", puseData)
result = cipher.decrypt(puseData)
print("decrypt:Out bytes=", result)
The decryption fails, with this output:
nonce b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
decrypt:In bytes= b'\x85F\xe3[%\xd7\xe2`t\xd6\x024D\x98XA\x10\xf4\x88\x08\xd0\x9eA\x93\xa6\xc9\xcar`\xe9\xbd\xa4'
decrypt:Out bytes= b'\x08\xc58\x962q\x94\xff#\xfa\xab\xe2\xc8{b\xed\x0b\xedw\x8f\xe3\xec\x0b\x8e\xfb\xcc\x12\x7f\x9e\xb4\x8f\xd6'
Both of the above routines work with locally encrypted data without issue. I have hacked the examples here (including not freeing the malloc'ed buffers :) for debugging purposes, so I apologize for the somewhat dirty code.
Note: I have tried changing the Python mode to AES.MODE_CBC (and added padding code) when I saw notes that iOS may use this rather than GCM; that failed as well. For now I have kept the nonce/IV as an array of zero bytes, as I am told iOS will have used this as the default since CCCrypt is not provided one; when this example works I will transition to a specified IV.
I'd appreciate any direction.
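Jumping ahead to what the final answer below works out: CCCrypt with kCCAlgorithmAES and no kCCOptionECBMode runs in CBC mode, and passing NULL for the IV means 16 zero bytes. A minimal pycryptodome sketch of the matching decryption for the test values above, assuming those documented CCCrypt defaults:
# Minimal sketch: CBC with an all-zero IV, matching CCCrypt's defaults.
from base64 import b64decode
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

key = b"XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP"
ct = b64decode("hUbjWyXX4mB01gI0RJhYQRD0iAjQnkGTpsnKcmDpvaQ=")
cipher = AES.new(key, AES.MODE_CBC, iv=b"\x00" * 16)
print(unpad(cipher.decrypt(ct), 16))  # expected: b'This is a test string'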
EDIT:
I went ahead and specified a null IV on the iOS side with
char iv[16]; // also tried 17
bzero( iv, sizeof( iv ) );
No change in behaviour at all...
EDIT:
I set the IV to all char '1' on both systems and got the same result.
iOS added code:
NSString *hardCodeIV = @"1111111111111111";
char iv[17];
bzero( iv, sizeof( iv ) );
[hardCodeIV getCString:iv maxLength:sizeof(iv) encoding:NSUTF8StringEncoding];
Which produced
Src = This is a test string
AES = XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP
String = sFoZ24VRN1hyMzegXT+GFzAn/YGPvaKO8p1eD+xhGaU=
Res = This is a test string
So on iOS it encrypts and decrypts properly with both the zero-byte and the '1'-character IV.
And the Python code works as well when data is encrypted and decrypted locally with either IV. But when the output from the iOS encryption is decrypted on Python, it fails as shown here.
Moving the key and encrypted message to Python for decryption as:
key = "XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP"
data = "sFoZ24VRN1hyMzegXT+GFzAn/YGPvaKO8p1eD+xhGaU="
usekey = key
useData = data
if isinstance(key, str):
    usekey = key.encode('utf-8')
cipher = AES.new(usekey, AES.MODE_GCM, nonce=self.nonce)
print("nonce", cipher.nonce)
if isinstance(data, str):
    useData = data.encode('utf-8')
useData = b64decode(useData)
puseData = useData  # unpad(useData, 32)
print("decrypt:In bytes=", puseData)
result = cipher.decrypt(puseData)
print("decrypt:Out bytes=", result)
Resulted in:
nonce b'1111111111111111'
decrypt:In bytes= b"\xb0Z\x19\xdb\x85Q7Xr37\xa0]?\x86\x170'\xfd\x81\x8f\xbd\xa2\x8e\xf2\x9d^\x0f\xeca\x19\xa5"
decrypt:Out bytes= b'\xc3\x1e"w\x86:~\x86\xd3\xc9H3\xd3\xd3y)|,|\xe02(\xc6\x17\xa3\x1e\xe2\x0f\x1a#\xbbW'
So, still no joy...
It looks very much like the algorithm choice is the problem. CCCrypt actually only exposes two block modes, CBC (the default) and ECB (via kCCOptionECBMode); GCM is not available through CCCrypt at all, despite notes suggesting otherwise. Most of my testing had been done with GCM on the Python side. I attempted CBC in one test in case iOS was actually using it and not telling me, but as shown above, that also had no success.
I'm continuing to test approaches, but could really use some advice from someone who has made this work; I have not been able to find working examples. (As a side note, the RSA pieces work fine; this is how I am moving the AES key around, and that part of the solution is flawless at the moment. This is the last bit I need to get operational.)
EDIT: Final answer, with both ECB and CBC working between iOS and Python.
With credit to others who built the original NSData+AESCrypt code at:
// AES Encrypt/Decrypt
// Created by Jim Dovey and 'Jean'
// See http://iphonedevelopment.blogspot.com/2009/02/strong-encryption-for-cocoa-cocoa-touch.html
//
// BASE64 Encoding/Decoding
// Copyright (c) 2001 Kyle Hammond. All rights reserved.
// Original development by Dave Winer.
//
// Put together by Michael Sedlaczek, Gone Coding on 2011-02-22
//
On iOS, the encryption logic is modified from the original NSData+AESCrypt as:
@implementation NSData (AESCrypt)
- (NSData *)AES256EncryptWithKey:(NSString *)key
{
return [self AES256EncryptWithKey:key ECB:false];
}
- (NSData *)AES256EncryptWithKey:(NSString *)key ECB:(Boolean) ecb
{
// 'key' should be 32 bytes for AES256, will be null-padded otherwise
char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused)
bzero( keyPtr, sizeof( keyPtr ) ); // fill with zeroes (for padding)
// fetch key data
[key getCString:keyPtr maxLength:sizeof( keyPtr ) encoding:NSUTF8StringEncoding];
// create results buffer with extra space for padding
NSUInteger dataLength = [self length];
size_t bufferSize = dataLength + kCCKeySizeAES256;
void *buffer = malloc( bufferSize );
size_t numBytesEncrypted = 0;
NSString *hardCodeIV = @"1111111111111111";
char iv[17];
bzero( iv, sizeof( iv ) );
[hardCodeIV getCString:iv maxLength:sizeof(iv) encoding:NSUTF8StringEncoding];
//CBC
CCCryptorStatus cryptStatus = kCCSuccess;
if (ecb == false)
{
cryptStatus = CCCrypt( kCCEncrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
keyPtr, kCCKeySizeAES256,
iv ,
[self bytes], dataLength,
buffer, bufferSize,
&numBytesEncrypted );
} else
{
// ECB (note: kCCAlgorithmAES128 is the same constant as kCCAlgorithmAES;
// the 32-byte key size below is what selects AES-256)
cryptStatus = CCCrypt( kCCEncrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding | kCCOptionECBMode,
keyPtr, kCCKeySizeAES256,
NULL,
[self bytes], dataLength,
buffer, bufferSize,
&numBytesEncrypted );
}
if( cryptStatus == kCCSuccess )
{
//the returned NSData takes ownership of the buffer and will free it on deallocation
return [NSData dataWithBytesNoCopy:buffer length:numBytesEncrypted];
}
free( buffer ); //free the buffer
return nil;
}
- (NSData *)AES256DecryptWithKey:(NSString *)key
{
return [self AES256DecryptWithKey:key ECB:false];
}
- (NSData *)AES256DecryptWithKey:(NSString *)key ECB:(Boolean) ecb
{
// 'key' should be 32 bytes for AES256, will be null-padded otherwise
char keyPtr[kCCKeySizeAES256+1]; // room for terminator (unused)
bzero( keyPtr, sizeof( keyPtr ) ); // fill with zeroes (for padding)
// fetch key data
[key getCString:keyPtr maxLength:sizeof( keyPtr ) encoding:NSUTF8StringEncoding];
NSUInteger dataLength = [self length];
size_t bufferSize = dataLength + kCCKeySizeAES256;
void *buffer = malloc( bufferSize );
size_t numBytesDecrypted = 0;
NSString *hardCodeIV = @"1111111111111111";
char iv[17];
bzero( iv, sizeof( iv ) );
[hardCodeIV getCString:iv maxLength:sizeof(iv) encoding:NSUTF8StringEncoding];
CCCryptorStatus cryptStatus = kCCSuccess;
if (ecb == false)
{
cryptStatus = CCCrypt( kCCDecrypt, kCCAlgorithmAES, kCCOptionPKCS7Padding,
keyPtr, kCCKeySizeAES256,
iv ,
[self bytes], dataLength,
buffer, bufferSize,
&numBytesDecrypted );
} else {
cryptStatus = CCCrypt( kCCDecrypt, kCCAlgorithmAES128, kCCOptionPKCS7Padding | kCCOptionECBMode,
keyPtr, kCCKeySizeAES256,
NULL,
[self bytes], dataLength,
buffer, bufferSize,
&numBytesDecrypted );
}
if( cryptStatus == kCCSuccess )
{
//the returned NSData takes ownership of the buffer and will free it on deallocation
return [NSData dataWithBytesNoCopy:buffer length:numBytesDecrypted];
}
free( buffer ); //free the buffer
return nil;
}
The resulting NSData is then base64 encoded using some helper methods (used unmodified from the original class) as:
static char encodingTable[64] =
{
'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P',
'Q','R','S','T','U','V','W','X','Y','Z','a','b','c','d','e','f',
'g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v',
'w','x','y','z','0','1','2','3','4','5','6','7','8','9','+','/'
};
+ (NSData *)dataWithBase64EncodedString:(NSString *)string
{
return [[NSData allocWithZone:nil] initWithBase64EncodedString:string];
}
- (id)initWithBase64EncodedString:(NSString *)string
{
NSMutableData *mutableData = nil;
if( string )
{
unsigned long ixtext = 0;
unsigned long lentext = 0;
unsigned char ch = 0;
unsigned char inbuf[4], outbuf[3];
short i = 0, ixinbuf = 0;
BOOL flignore = NO;
BOOL flendtext = NO;
NSData *base64Data = nil;
const unsigned char *base64Bytes = nil;
// Convert the string to ASCII data.
base64Data = [string dataUsingEncoding:NSASCIIStringEncoding];
base64Bytes = [base64Data bytes];
mutableData = [NSMutableData dataWithCapacity:base64Data.length];
lentext = base64Data.length;
while( YES )
{
if( ixtext >= lentext ) break;
ch = base64Bytes[ixtext++];
flignore = NO;
if( ( ch >= 'A' ) && ( ch <= 'Z' ) ) ch = ch - 'A';
else if( ( ch >= 'a' ) && ( ch <= 'z' ) ) ch = ch - 'a' + 26;
else if( ( ch >= '0' ) && ( ch <= '9' ) ) ch = ch - '0' + 52;
else if( ch == '+' ) ch = 62;
else if( ch == '=' ) flendtext = YES;
else if( ch == '/' ) ch = 63;
else flignore = YES;
if( ! flignore )
{
short ctcharsinbuf = 3;
BOOL flbreak = NO;
if( flendtext )
{
if( ! ixinbuf ) break;
if( ( ixinbuf == 1 ) || ( ixinbuf == 2 ) ) ctcharsinbuf = 1;
else ctcharsinbuf = 2;
ixinbuf = 3;
flbreak = YES;
}
inbuf [ixinbuf++] = ch;
if( ixinbuf == 4 )
{
ixinbuf = 0;
outbuf [0] = ( inbuf[0] << 2 ) | ( ( inbuf[1] & 0x30) >> 4 );
outbuf [1] = ( ( inbuf[1] & 0x0F ) << 4 ) | ( ( inbuf[2] & 0x3C ) >> 2 );
outbuf [2] = ( ( inbuf[2] & 0x03 ) << 6 ) | ( inbuf[3] & 0x3F );
for( i = 0; i < ctcharsinbuf; i++ )
[mutableData appendBytes:&outbuf[i] length:1];
}
if( flbreak ) break;
}
}
}
self = [self initWithData:mutableData];
return self;
}
#pragma mark -
- (NSString *)base64Encoding
{
return [self base64EncodingWithLineLength:0];
}
- (NSString *)base64EncodingWithLineLength:(NSUInteger)lineLength
{
const unsigned char *bytes = [self bytes];
NSMutableString *result = [NSMutableString stringWithCapacity:self.length];
unsigned long ixtext = 0;
unsigned long lentext = self.length;
long ctremaining = 0;
unsigned char inbuf[3], outbuf[4];
unsigned short i = 0;
unsigned short charsonline = 0, ctcopy = 0;
unsigned long ix = 0;
while( YES )
{
ctremaining = lentext - ixtext;
if( ctremaining <= 0 ) break;
for( i = 0; i < 3; i++ )
{
ix = ixtext + i;
if( ix < lentext ) inbuf[i] = bytes[ix];
else inbuf [i] = 0;
}
outbuf [0] = (inbuf [0] & 0xFC) >> 2;
outbuf [1] = ((inbuf [0] & 0x03) << 4) | ((inbuf [1] & 0xF0) >> 4);
outbuf [2] = ((inbuf [1] & 0x0F) << 2) | ((inbuf [2] & 0xC0) >> 6);
outbuf [3] = inbuf [2] & 0x3F;
ctcopy = 4;
switch( ctremaining )
{
case 1:
ctcopy = 2;
break;
case 2:
ctcopy = 3;
break;
}
for( i = 0; i < ctcopy; i++ )
[result appendFormat:@"%c", encodingTable[outbuf[i]]];
for( i = ctcopy; i < 4; i++ )
[result appendString:@"="];
ixtext += 3;
charsonline += 4;
if( lineLength > 0 )
{
if( charsonline >= lineLength )
{
charsonline = 0;
[result appendString:@"\n"];
}
}
}
return [NSString stringWithString:result];
}
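These helpers implement standard base64 (RFC 4648), so on the Python side the built-in base64 module is the exact counterpart and no custom decoder is needed; a quick sanity check:
# Quick check that Python's built-in codec matches the helpers above.
from base64 import b64decode, b64encode
assert b64encode(b"This is a test string") == b"VGhpcyBpcyBhIHRlc3Qgc3RyaW5n"
assert b64decode("VGhpcyBpcyBhIHRlc3Qgc3RyaW5n") == b"This is a test string"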
The resulting base64 encoded string is then sent to the cloud, where Python's pycryptodome can decrypt it as:
from base64 import b64decode, b64encode
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

def aesCBCEncrypt(self, key, stri):
    if isinstance(key, str):
        key = key.encode('utf-8')
    cipher = AES.new(key, AES.MODE_CBC, self.nonce)  # self.nonce holds the 16-byte IV
    data = stri.encode('utf-8') if isinstance(stri, str) else stri
    try:
        data = pad(data, 16)
        ciphertext = cipher.encrypt(data)
        ciphertext = b64encode(ciphertext)
        ret = ciphertext.decode('utf-8')
    except ValueError:
        print("Some Error")
        ret = ""
    return ret

def aesCBCDecrypt(self, key, data):
    if isinstance(key, str):
        key = key.encode('utf-8')
    cipher = AES.new(key, AES.MODE_CBC, self.nonce)  # self.nonce holds the 16-byte IV
    if isinstance(data, str):
        data = data.encode('utf-8')
    data = b64decode(data)
    try:
        result = cipher.decrypt(data)
        result = unpad(result, 16)
        ret = result.decode('utf-8')
    except ValueError:
        print("Some Error")
        ret = ""
    return ret

def aesECBEncrypt(self, key, stri):
    if isinstance(key, str):
        key = key.encode('utf-8')
    cipher = AES.new(key, AES.MODE_ECB)  # ECB takes no IV/nonce
    data = stri.encode('utf-8') if isinstance(stri, str) else stri
    try:
        data = pad(data, 16)
        ciphertext = cipher.encrypt(data)
        ciphertext = b64encode(ciphertext)
        ret = ciphertext.decode('utf-8')
    except ValueError:
        print("Some Error")
        ret = ""
    return ret

def aesECBDecrypt(self, key, data):
    if isinstance(key, str):
        key = key.encode('utf-8')
    cipher = AES.new(key, AES.MODE_ECB)  # ECB takes no IV/nonce
    if isinstance(data, str):
        data = data.encode('utf-8')
    data = b64decode(data)
    try:
        result = cipher.decrypt(data)
        result = unpad(result, 16)
        ret = result.decode('utf-8')
    except ValueError:
        print("Some Error")
        ret = ""
    return ret
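A hypothetical round-trip check (the AESHelper wrapper class here is my own assumption for illustration; the key, IV, and ciphertext are the values from the '1'-IV iOS test above):
# Hypothetical harness: the methods above are assumed to live on a class
# whose self.nonce holds the 16-byte IV; it must match the iOS side exactly.
class AESHelper:
    aesCBCEncrypt = aesCBCEncrypt
    aesCBCDecrypt = aesCBCDecrypt
    def __init__(self, nonce):
        self.nonce = nonce

helper = AESHelper(b"1111111111111111")
print(helper.aesCBCDecrypt("XSmTe1Eyw8JsZkreIFUpNi7BhKEReHTP",
                           "sFoZ24VRN1hyMzegXT+GFzAn/YGPvaKO8p1eD+xhGaU="))
# expected output: This is a test string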
It is extremely important that the IV (aka nonce) is the same on iOS and Python when using CBC; it will not work otherwise. In this case it is set to a string of 16 '1' characters (the trailing null in the C buffer is not part of the 16-byte IV). The IV is not really a secret key in itself, but it is likely worth changing it and securing it (possibly sending it along under asymmetric encryption, RSA in my case). The AES key, however, is the critical secret and should certainly be sent encrypted between the devices.
Finally, I'd recommend using CBC even though the IV needs to be managed, as it is more secure than ECB. And when I have time I will look into integrating the Swift-only Apple CryptoKit library to support other modes as well...
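A common alternative worth noting (not what the code above does, just a sketch): generate a fresh random IV per message and prepend it to the ciphertext, so only the AES key needs secure transport:
# Sketch of the random-IV-per-message pattern (assumption: both sides agree
# that the first 16 bytes of the decoded blob are the IV).
from base64 import b64decode, b64encode
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def cbc_encrypt_random_iv(key, plaintext):
    iv = get_random_bytes(16)                      # fresh IV every message
    cipher = AES.new(key, AES.MODE_CBC, iv)
    return b64encode(iv + cipher.encrypt(pad(plaintext, 16))).decode('utf-8')

def cbc_decrypt_random_iv(key, b64_blob):
    raw = b64decode(b64_blob)
    cipher = AES.new(key, AES.MODE_CBC, raw[:16])  # IV is the first block
    return unpad(cipher.decrypt(raw[16:]), 16)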
This is what we ended up with, OpenColorIO/src/core/FileFormatIridasLook.cpp (Amardeep's answer with the unsigned uint32_t fix would likely work also) // convert hex ascii to int // return true on success, false on failure bool hexasciitoint(char& ival, char character) { if(character>=48 && character<=57) // [0-9] { ival = static_cast<char>(character-48); return true; } else if(character>=65 && character<=70) // [A-F] { ival = static_cast<char>(10+character-65); return true; } else if(character>=97 && character<=102) // [a-f] { ival = static_cast<char>(10+character-97); return true; } ival = 0; return false; } // convert array of 8 hex ascii to f32 // The input hexascii is required to be a little-endian representation // as used in the iridas file format // "AD10753F" -> 0.9572857022285461f on ALL architectures bool hexasciitofloat(float& fval, const char * ascii) { // Convert all ASCII numbers to their numerical representations char asciinums[8]; for(unsigned int i=0; i<8; ++i) { if(!hexasciitoint(asciinums[i], ascii[i])) { return false; } } unsigned char * fvalbytes = reinterpret_cast<unsigned char *>(&fval); #if OCIO_LITTLE_ENDIAN // Since incoming values are little endian, and we're on little endian // preserve the byte order fvalbytes[0] = (unsigned char) (asciinums[1] | (asciinums[0] << 4)); fvalbytes[1] = (unsigned char) (asciinums[3] | (asciinums[2] << 4)); fvalbytes[2] = (unsigned char) (asciinums[5] | (asciinums[4] << 4)); fvalbytes[3] = (unsigned char) (asciinums[7] | (asciinums[6] << 4)); #else // Since incoming values are little endian, and we're on big endian // flip the byte order fvalbytes[3] = (unsigned char) (asciinums[1] | (asciinums[0] << 4)); fvalbytes[2] = (unsigned char) (asciinums[3] | (asciinums[2] << 4)); fvalbytes[1] = (unsigned char) (asciinums[5] | (asciinums[4] << 4)); fvalbytes[0] = (unsigned char) (asciinums[7] | (asciinums[6] << 4)); #endif return true; }