Sharing an OpenCV image between C++ and Python using Redis

I want to share an image between C++ and Python using Redis. I have already succeeded in sharing numbers. In C++ I use hiredis as the client; in Python I just do import redis. The code I have now is as below.
C++ code to set a value:
// c is a redisContext* from redisConnect() and reply a redisReply* (setup not shown)
VideoCapture video(0);
Mat img;
int key = 0;
while (key != 27)
{
    video >> img;
    imshow("img", img);
    key = waitKey(1) & 0xFF;
    reply = (redisReply*)redisCommand(c, "SET %s %d", "hiredisWord", key);
    //printf("SET (binary API): %s\n", reply->str);
    freeReplyObject(reply);
}
Python code to get the value:
import redis

R = redis.Redis(host='127.0.0.1', port=6379)
while 1:
    print(R.get('hiredisWord'))
Now I want to share an OpenCV image instead of numbers. What should I do? Please help, thank you!

Instead of hiredis, here is an example using cpp_redis (available at https://github.com/cpp-redis/cpp_redis).
The following C++ program reads an image from a file on disk using OpenCV and stores it in Redis under the key image:
#include <opencv4/opencv2/opencv.hpp>
#include <cpp_redis/cpp_redis>

int main(int argc, char** argv)
{
    cv::Mat image = cv::imread("input.jpg");

    // encode the image as JPEG into an in-memory buffer
    std::vector<uchar> buf;
    cv::imencode(".jpg", image, buf);

    cpp_redis::client client;
    client.connect();
    client.set("image", {buf.begin(), buf.end()});
    client.sync_commit();
}
Then in Python you grab the image back out of the database:
import cv2
import numpy as np
import redis

store = redis.Redis()

# fetch the JPEG bytes and decode them back into a BGR image
image = store.get('image')
array = np.frombuffer(image, np.uint8)
decoded = cv2.imdecode(array, flags=1)

cv2.imshow('hello', decoded)
cv2.waitKey()
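Since the question involves a capture loop rather than a single file, here is a minimal sketch (my addition, not part of the original answer) of a receiving side that keeps polling the same key and displays whatever frame is currently stored, assuming the C++ side keeps SETting freshly encoded JPEG bytes under image as above:
import cv2
import numpy as np
import redis

store = redis.Redis(host='127.0.0.1', port=6379)

while True:
    data = store.get('image')          # latest JPEG bytes written by the C++ side
    if data is not None:
        frame = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow('stream', frame)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc quits, mirroring the C++ loop
        break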

Related

Is there a way to use the RtlSetProcessIsCritical function from the Windows API in Python?

I want to use the RtlSetProcessIsCritical function, which marks the current process as critical, but I couldn't find a way to use it from Python.
Here is an example of it in C++:
#include <windows.h>

typedef VOID (_stdcall* RtlSetProcessIsCritical) (
    IN BOOLEAN NewValue,
    OUT PBOOLEAN OldValue,
    IN BOOLEAN IsWinlogon);

BOOL ProcessIsCritical()
{
    HANDLE hDLL;
    RtlSetProcessIsCritical fSetCritical;
    hDLL = LoadLibraryA("ntdll.dll");
    if (hDLL != NULL)
    {
        fSetCritical = (RtlSetProcessIsCritical)GetProcAddress((HINSTANCE)hDLL, "RtlSetProcessIsCritical");
        if (!fSetCritical) return 0;
        fSetCritical(1, 0, 0);
        return 1;
    }
    else
        return 0;
}
Is there any way to make it work in Python?
Here is a link to an article that describes the RtlSetProcessIsCritical function pretty well: https://www.codeproject.com/Articles/43405/Protecting-Your-Process-with-RtlSetProcessIsCriti
Try loading ntdll.dll in Python with ctypes:
from ctypes import WinDLL

ntdll = WinDLL("ntdll.dll")
ntdll.RtlSetProcessIsCritical(1, 0, 0)
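One caution worth adding (my addition, not part of the original answer): once the flag is set, terminating the process normally can bring the system down, so a sketch that clears the flag again before a clean exit could look like this. RtlSetProcessIsCritical is an undocumented NTDLL call, so treat the exact behaviour as an assumption:
import atexit
from ctypes import WinDLL

ntdll = WinDLL("ntdll.dll")

def clear_critical_flag():
    # clear the flag so a normal interpreter exit does not trigger a bugcheck
    ntdll.RtlSetProcessIsCritical(0, 0, 0)

ntdll.RtlSetProcessIsCritical(1, 0, 0)  # mark this process as critical
atexit.register(clear_critical_flag)    # undo it again on clean shutdown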

Stream continuous data from a python script to NodeJS

I have a multi-threaded python script that converts an image to a NumPy array (occupancy grid). The python script then continuously changes the values of the array to simulate a dot moving in a map. This array is converted back to an image using Pillow and then encoded to base64. The NodeJS part has an express server running that also has a SocketIO connection with a mobile app.
What I am trying to do is:
Send the encoded image to the mobile app from the server; the option that came to mind was to run the Python script from NodeJS, and as the Python script transmits an encoded image the server redirects it to the mobile app.
What I'm looking for is:
stream the python script output to nodeJS without stopping (until the simulation stops)
or use a better-purposed solution to get the data from the python to the mobile phone
Thanks in advance!
Python script to run:
import base64
import threading

import cv2
from PIL import Image

temp = 0
pixel = 0

def update_image():
    global im_bw
    im2 = Image.fromarray(im_bw)
    image64 = base64.b64encode(im2.tobytes())
    # print the base64-encoded image so the JavaScript side can read it from stdout
    print(image64)
    threading.Timer(1, update_image).start()

print("initialization")
im_gray = cv2.imread(r'gridMapBlue.JPG', cv2.IMREAD_GRAYSCALE)
(thresh, im_bw) = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
thresh = 127
im_bw = cv2.threshold(im_gray, thresh, 255, cv2.THRESH_BINARY)[1]

# noinspection PyBroadException
try:
    update_image()
except Exception as e:
    print(e)

# simulate the dot moving by modifying the array values
while True:
    if pixel == 200:
        break
    temp = im_bw[150, pixel]
    im_bw[150, pixel] = 127
    pixel += 1
JavaScript code:
const express = require("express");
const app = express();
const io = require("./api/services/SocketIoService");
const spawn = require('child_process').spawn;

const socketIO = new io(app);
socketIO.server.listen(6969, () => console.log("server started"));

const py = spawn('python', ["E:\\QU\\Senior\\robot_positioning\\robot_position_map.py"]);
py.stdout.on('data', (data) => console.log(data));
py.stderr.on("error", err => console.log(err));
Edit: added the code for creating the occupancy grid, the image used, and the JavaScript code that I tried (but it didn't work).
As the OP wished: not 100% of the solution they want, but a working, adapted concept.
The main file is index.js:
const { spawn } = require("child_process");
const WebSocket = require("ws");

// create websocket server
// open "client.html" in a browser and see how the image flips between the two
const wss = new WebSocket.Server({
    port: 8080
});

// feedback
wss.on("connection", function connection(ws) {
    console.log("Client connected to websocket");
});

// spawn python child process
const py = spawn("python", ["image.py"]);
console.log("Python image manipulating process has pid:", py.pid);

// listen for the new image
py.stdout.on("data", (data) => {
    // broadcast the new binary image to all clients
    wss.clients.forEach((client) => {
        if (client.readyState === WebSocket.OPEN) {
            client.send(data);
        }
    });
});

py.stderr.on("data", (data) => {
    console.error(data.toString());
});
This communicates between the image-manipulating image.py and the clients (over WebSockets; it should be easy to change that to Socket.IO if needed):
import time
import sys

toggle = True
while True:
    # alternate between the two demo images
    if toggle:
        f = open("image1.jpg", "rb")
    else:
        f = open("image2.jpg", "rb")
    toggle = not toggle
    # push the raw JPEG bytes to the parent process over stdout
    sys.stdout.buffer.write(f.read())
    sys.stdout.flush()
    f.close()
    time.sleep(2)
That file creates/manipulates an image and pushes the image binary over stdout to the parent index.js. For demo purposes we switch between two images that get sent to the client: "image1.jpg" and "image2.jpg" (use any JPG images you want; if you need other mime types/extensions, set them in the HTML client too).
As the client I used a simple HTML document, client.html:
When you open the file it just shows a white/blank page, because no image has been loaded initially.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <canvas id="image">unsupported browser</canvas>
    <script>
        var ws = new WebSocket('ws://localhost:8080');
        ws.onopen = () => {
            console.log("Connected")
        };
        ws.onmessage = (evt) => {
            // re-create(?) blob data from websocket
            var blob = new Blob([evt.data], {
                type: 'image/jpg' // -> MIME TYPES OF IMAGES GOES HERE
            });
            // create blob url
            var url = URL.createObjectURL(blob);
            var image = document.getElementById("image");
            var img = new Image(0, 0);
            img.onload = () => {
                // get canvas context
                let ctx = image.getContext("2d");
                ctx.clearRect(0, 0, image.width, image.height);
                ctx.drawImage(img, 0, 0);
            };
            img.src = url;
        };
    </script>
</body>
</html>
The client renders the received image buffer on the page and refreshes/redraws it whenever the Python script "says so".
Install the only needed dependency: npm install ws
To start everything, type node index.js and open the client in a web browser of your choice (tested with Firefox on Ubuntu 18.04 LTS).
You should see the image change every two seconds. With that working you can start to write the Python script that "animates" your moving pixel :)
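As a bridge back to the original occupancy-grid script, here is a minimal sketch (my addition, not part of the original answer) of an image.py that encodes the asker's im_bw array as JPEG with OpenCV and streams it over stdout in the same way; gridMapBlue.JPG and the pixel-moving loop are taken from the question:
import sys
import time

import cv2

# build the occupancy grid exactly as in the question
im_gray = cv2.imread('gridMapBlue.JPG', cv2.IMREAD_GRAYSCALE)
im_bw = cv2.threshold(im_gray, 127, 255, cv2.THRESH_BINARY)[1]

pixel = 0
while pixel < 200:
    # move the "dot" one pixel to the right
    im_bw[150, pixel] = 127
    pixel += 1

    # encode the current grid as JPEG and push the bytes to the Node parent
    ok, buf = cv2.imencode('.jpg', im_bw)
    if ok:
        sys.stdout.buffer.write(buf.tobytes())
        sys.stdout.flush()
    time.sleep(1)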
The script was edited inline here, so it may not run on the first attempt. I tried it by copy-and-paste and fixed the issues I found (typos, wrong names, etc.).
If you need any help or changes, let me know.

Convert byte array to PNG file in Python

I send a compressed PNG file from Android Studio to a Python server and try to convert the compressed byte array back to a PNG image using PIL, but I get a black image.
How do I convert the compressed PNG back to an image using Python?
This is my Python code:
while True:
    data = sock.recv(100000)
    if str(data) == "b''":
        continue
    print(data)
    image1 = io.BytesIO(data)
    image = Image.open(image1).convert("RGBA")
    image.show()
And this is my Java code that compresses the image:
class sendImg implements Runnable {
    public void run() {
        try {
            socket = new Socket(SERVER_IP, SERVER_PORT);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            bit.compress(Bitmap.CompressFormat.PNG, 80, bout);
            long len = bout.size();
            out.write(bout.toByteArray());
            socket.close();
        } catch (final Exception e) {
            MainActivity.this.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    textView.setText(e.toString());
                }
            });
        }
    }
}
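A likely culprit here, for context, is that sock.recv(100000) can return only part of the PNG. Below is a minimal sketch (my assumption, not from the original post) that reads until the Android side closes the socket, as the Java snippet above does after writing, before handing the bytes to PIL; the port number is hypothetical:
import io
import socket

from PIL import Image

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 5000))   # hypothetical port, adjust to your setup
server.listen(1)

while True:
    conn, addr = server.accept()
    chunks = []
    while True:
        chunk = conn.recv(4096)
        if not chunk:            # sender closed the socket: the PNG is complete
            break
        chunks.append(chunk)
    conn.close()

    data = b''.join(chunks)
    image = Image.open(io.BytesIO(data)).convert("RGBA")
    image.show()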

Can you convert a C++ RapidJSON object to a Python JSON object?

I am building a C++ project that connects to a customer-owned Python codebase and sends a JSON object to the Python code. Because the Python codebase is customer-owned, I cannot modify it to receive a JSON string. I am using the Python C API for the interface.
I've built a small Python file to convert the JSON string to a JSON object using Python's json library. I'd prefer to write the JSON object on the C++ side using RapidJSON. Can I build a Python JSON object from C++ RapidJSON and have it read by Python's json library without converting the object to a string? If it's possible, which function would be the right one to convert the JSON Document to a Python object?
C++:
PyObject* packJsonDict;
PyObject* customerDict;

int PYCPP::getJsonToInterface()
{
    std::string json = "{\"pet\": \"dog\", \"count\": 5}";
    PyObject* JsonObject = m_pImpl->pack_JsonString(json);

    std::string function_string = "read_json";
    PyObject* args = PyTuple_Pack(1, JsonObject);
    // trying packing just the json string
    PyObject* runnableFunction = PyDict_GetItemString(customerDict, function_string.c_str());
    PyObject* result = PyObject_CallObject(runnableFunction, args);
    return 0;
}

PyObject* PYCPP::pack_JsonString(std::string json_string)
{
    std::string function_string = "convert_jsonString_to_jsonObject";
    PyObject* args = PyTuple_Pack(1, PyString_FromString(json_string.c_str()));
    // trying packing just the json string
    PyObject* runnableFunction = PyDict_GetItemString(packJsonDict, function_string.c_str());
    PyObject* result = PyObject_CallObject(runnableFunction, args);
    if (result == nullptr)
    {
        std::cout << "result was null!\n";
    }
    return result;
}
Python:
import json

def convert_jsonString_to_jsonObject(jsonString):
    print "converting:", jsonString
    return json.loads(jsonString)
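One small clarifying note on the Python side (my addition): json.loads does not produce a special "JSON object" type; it returns ordinary Python objects (dict, list, str, int, ...), so whatever the helper above hands back to the C++ caller is already a plain Python dict:
import json

obj = json.loads('{"pet": "dog", "count": 5}')
print(type(obj))               # a regular Python dictionary
print(obj["pet"], obj["count"])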

NodeJS implementation for Python's pbkdf2_sha256.verify

I have to translate this Python code to NodeJS:
from passlib.hash import pbkdf2_sha256
pbkdf2_sha256.verify('12345678', '$pbkdf2-sha256$2000$8R7jHOOcs7YWImRM6V1LqQ$CIdNv8YlLlCZfeFJihZs7eQxBsauvVfV05v07Ca2Yzg')
>> True
The code above is the entire code, i.e. there are no other parameters/settings (just run pip install passlib before you run it to install the passlib package).
I am looking for the correct implementation of validatePassword function in Node that will pass this positive implementation test:
validatePassword('12345678', '$pbkdf2-sha256$2000$8R7jHOOcs7YWImRM6V1LqQ$CIdNv8YlLlCZfeFJihZs7eQxBsauvVfV05v07Ca2Yzg')
>> true
Here is the documentation of the passlib.hash.pbkdf2_sha256 with its default parameters' values.
I tried to follow the answers from here with the data from the Python code above, but those solutions didn't pass the test.
I would appreciate some help with this implementation (preferably using built-in NodeJS crypto package).
Thank you in advance.
This would work:
const crypto = require('crypto')

function validatePassword(secret, format) {
    let parts = format.split('$')
    return parts[4] == crypto.pbkdf2Sync(
        secret,
        Buffer.from(parts[3].replace(/\./g, '+') + '='.repeat(parts[3].length % 3), 'base64'),
        +parts[2], 32, parts[1].split('-')[1]
    ).toString('base64').replace(/=/g, '').replace(/\+/g, '.')
}
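For reference, the same check can be reproduced in plain Python with only the standard library. This is a sketch (my addition) of what the verification amounts to, assuming the hash layout the answer above also relies on: $pbkdf2-sha256$rounds$salt$checksum, with salt and checksum stored in passlib's adapted base64, i.e. '+' replaced by '.' and the '=' padding stripped:
import base64
import hashlib

def passlib_pbkdf2_sha256_verify(password, encoded):
    # encoded looks like: $pbkdf2-sha256$rounds$salt$checksum
    _, scheme, rounds, salt, checksum = encoded.split('$')
    assert scheme == 'pbkdf2-sha256'

    def ab64_decode(data):
        # undo passlib's adapted base64: '.' -> '+', restore padding
        data = data.replace('.', '+')
        return base64.b64decode(data + '=' * (-len(data) % 4))

    derived = hashlib.pbkdf2_hmac('sha256', password.encode(),
                                  ab64_decode(salt), int(rounds), dklen=32)
    return derived == ab64_decode(checksum)

print(passlib_pbkdf2_sha256_verify('12345678',
    '$pbkdf2-sha256$2000$8R7jHOOcs7YWImRM6V1LqQ$CIdNv8YlLlCZfeFJihZs7eQxBsauvVfV05v07Ca2Yzg'))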
I was not able to get this working with the other answers here, but they did lead me in the right direction.
Here's where I landed:
// eslint-2017
import crypto from 'crypto';

const encode = (password, { algorithm, salt, iterations }) => {
    const hash = crypto.pbkdf2Sync(password, salt, iterations, 32, 'sha256');
    return `${algorithm}$${iterations}$${salt}$${hash.toString('base64')}`;
};

const decode = (encoded) => {
    const [algorithm, iterations, salt, hash] = encoded.split('$');
    return {
        algorithm,
        hash,
        iterations: parseInt(iterations, 10),
        salt,
    };
};

const verify = (password, encoded) => {
    const decoded = decode(encoded);
    const encodedPassword = encode(password, decoded);
    return encoded === encodedPassword;
};

// <algorithm>$<iterations>$<salt>$<hash>
const encoded = 'pbkdf2_sha256$120000$bOqAASYKo3vj$BEBZfntlMJJDpgkAb81LGgdzuO35iqpig0CfJPU4TbU=';
const password = '12345678';

console.info(verify(password, encoded));
I know this is an old post, but it's one of the top results on Google, so figured I'd help someone out that comes across this in 2020.
You can use the native Node.js crypto.pbkdf2 API:
const crypto = require('crypto');

crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha256', (err, derivedKey) => {
    if (err) throw err;
    console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
It has the following API:
password <string>
salt <string>
iterations <number>
keylen <number>
digest <string>
callback <Function>
err <Error>
derivedKey <Buffer>
So you will need to play with the input variables to get the same result as in Python.
An alternative approach
I played with the input variables without much success, and the simplest idea I came up with was to write a Python script that validates the passwords and invoke it with child_process.spawn in Node.js.
This worked for me, based on node-django-hasher (I didn't use the package itself because it depends on node-gyp):
function validatePassword(plain, hashed) {
    const parts = hashed.split('$');
    const salt = parts[2];
    const iterations = parseInt(parts[1]);
    const keylen = 32;
    const digest = parts[0].split('_')[1];
    const value = parts[3];
    const derivedKey = crypto.pbkdf2Sync(plain, salt, iterations, keylen, digest);
    return value === Buffer.from(derivedKey, 'binary').toString('base64');
}
Finally solved it. Because passlib does some transformations on the base64-encoded strings, none of the mentioned solutions worked for me. I ended up writing my own Node module, which is tested against passlib 1.7.4 hashes. Thanks @kayluhb for pushing me in the right direction!
Feel free to use it: node-passlib
