Convert byte array to PNG file in Python

I send a compressed PNG file from Android Studio to a Python server, and I try to convert the compressed byte array back to a PNG image file using PIL, but I get a black image. How do I convert the compressed PNG file back to an image using Python?
This is my Python code:
while True:
    data = sock.recv(100000)
    if str(data) == "b''":
        continue
    print(data)
    image1 = io.BytesIO(data)
    image = Image.open(image1).convert("RGBA")
    image.show()
And this is my Java code that compresses the image:
class sendImg implements Runnable {
    public void run() {
        try {
            socket = new Socket(SERVER_IP, SERVER_PORT);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            ByteArrayOutputStream bout = new ByteArrayOutputStream();
            bit.compress(Bitmap.CompressFormat.PNG, 80, bout);
            long len = bout.size();
            out.write(bout.toByteArray());
            socket.close();
        } catch (final Exception e) {
            MainActivity.this.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    textView.setText(e.toString());
                }
            });
        }
    }
}
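A likely cause of the black image (my diagnosis, not confirmed in the thread) is that a single sock.recv(100000) call can return only part of the PNG stream, so PIL ends up decoding a truncated file. A minimal sketch of a receiver that accumulates the whole image, assuming the sender closes the socket after writing, as the Java code above does:

import io
from PIL import Image

# hypothetical receiver sketch: sock is the connected socket from the question
chunks = []
while True:
    chunk = sock.recv(4096)
    if not chunk:  # empty result means the sender closed the connection
        break
    chunks.append(chunk)

image = Image.open(io.BytesIO(b"".join(chunks))).convert("RGBA")
image.show()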

Related

Zlib is unable to extract a compressed String in Java; compression is done in Python

I'm trying to decompress a string in Java; the string was compressed in Python and base64-encoded. I spent a day trying to resolve the issue. The file decodes easily online and also in Python. I found similar posts where people have trouble compressing and decompressing between Java and Python.
zip.txt
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CompressionUtils {
    private static final int BUFFER_SIZE = 1024;

    public static byte[] decompress(final byte[] data) {
        final Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream outputStream =
                new ByteArrayOutputStream(data.length);
        byte[] buffer = new byte[data.length];
        try {
            while (!inflater.finished()) {
                final int count = inflater.inflate(buffer);
                outputStream.write(buffer, 0, count);
            }
            outputStream.close();
        } catch (DataFormatException | IOException e) {
            e.printStackTrace();
            log.error("ZlibCompression decompress exception: {}", e.getMessage());
        }
        inflater.end();
        return outputStream.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        decompress(Base64.getDecoder().decode(Files.readAllBytes(Path.of("zip.txt"))));
    }
}
Error:
java.util.zip.DataFormatException: incorrect header check
    at java.base/java.util.zip.Inflater.inflateBytesBytes(Native Method)
    at java.base/java.util.zip.Inflater.inflate(Inflater.java:378)
    at java.base/java.util.zip.Inflater.inflate(Inflater.java:464)
I also tried this after @Mark Adler's suggestion:
public static void main(String[] args) throws IOException {
    byte[] decoded = Base64.getDecoder().decode(Files.readAllBytes(Path.of("zip.txt")));
    ByteArrayInputStream in = new ByteArrayInputStream(decoded);
    GZIPInputStream gzStream = new GZIPInputStream(in);
    decompress(gzStream.readAllBytes());
    gzStream.close();
}
java.util.zip.DataFormatException: incorrect header check
    at java.base/java.util.zip.Inflater.inflateBytesBytes(Native Method)
    at java.base/java.util.zip.Inflater.inflate(Inflater.java:378)
    at java.base/java.util.zip.Inflater.inflate(Inflater.java:464)
    at efrisapi/com.efrisapi.util.CompressionUtils.decompress(CompressionUtils.java:51)
    at efrisapi/com.efrisapi.util.CompressionUtils.main(CompressionUtils.java:67)
That is a gzip stream, not a zlib stream. Use GZIPInputStream.
I was looking in the wrong direction; the string was gzipped. Below is the code that resolves it. Thank you @Mark Adler for identifying the issue.
public static void deCompressGZipFile(String gZippedFile, String newFile) throws IOException {
    byte[] decoded = Base64.getDecoder().decode(Files.readAllBytes(Path.of(gZippedFile)));
    ByteArrayInputStream in = new ByteArrayInputStream(decoded);
    GZIPInputStream gZIPInputStream = new GZIPInputStream(in);
    FileOutputStream fos = new FileOutputStream(newFile);
    byte[] buffer = new byte[1024];
    int len;
    while ((len = gZIPInputStream.read(buffer)) > 0) {
        fos.write(buffer, 0, len);
    }
    // Keep it in finally
    fos.close();
    gZIPInputStream.close();
}
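For context, the "incorrect header check" occurs because gzip data begins with the magic bytes 0x1f 0x8b, while Java's Inflater by default expects a zlib header (typically beginning with 0x78). A minimal Python sketch of how a payload like zip.txt is presumably produced (an assumption; the thread does not show the sender):

import base64
import gzip

# hypothetical sender sketch: gzip-compress, then base64-encode
payload = base64.b64encode(gzip.compress(b"some text to share"))
with open("zip.txt", "wb") as f:
    f.write(payload)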

Stream continuous data from a python script to NodeJS

I have a multi-threaded Python script that converts an image to a NumPy array (occupancy grid). The Python script then continuously changes the values of the array to simulate a dot moving in a map. This array is converted back to an image using Pillow and then encoded to base64. The NodeJS part has an Express server running that also has a SocketIO connection with a mobile app.
What I am trying to do is:
send the encoded image to the mobile app from the server, so the option that came to mind was to run the Python script from NodeJS, and as the Python script transmits an encoded image the server will redirect it to the mobile app.
What I'm looking for is:
stream the Python script output to NodeJS without stopping (until the simulation stops)
or use a better-purposed solution to get the data from Python to the mobile phone
Thanks in advance!
Python script to run:
import matplotlib.pyplot as plt
import cv2
import threading
import base64
from PIL import Image

temp = 0
global im_bw
pixel = 0

def update_image():
    global im_bw
    im2 = Image.fromarray(im_bw)
    image64 = base64.b64encode(im2.tobytes())
    # print the base64 encoded image to javascript
    print(image64)
    threading.Timer(1, update_image).start()

print("initialization")
im_gray = cv2.imread(r'gridMapBlue.JPG', cv2.IMREAD_GRAYSCALE)
(thresh, im_bw) = cv2.threshold(im_gray, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
thresh = 127
im_bw = cv2.threshold(im_gray, thresh, 255, cv2.THRESH_BINARY)[1]

# noinspection PyBroadException
try:
    update_image()
except Exception as e:
    print(e.with_traceback())
    pass

# simulate the dot moving by modifying the array values
while True:
    if pixel == 200:
        break
    temp = im_bw[150, pixel]
    im_bw[150, pixel] = 127
    pixel += 1
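One detail worth noting in this script (my observation, not from the thread): im2.tobytes() returns raw pixel data with no image header, so a receiver cannot decode it as an image file. A sketch of encoding the array as an actual PNG in memory before base64-encoding it:

import base64
import io
from PIL import Image

# hypothetical fix sketch: serialize the array as a real PNG file in memory
buf = io.BytesIO()
Image.fromarray(im_bw).save(buf, format="PNG")
image64 = base64.b64encode(buf.getvalue())
print(image64)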
JavaScript code:
const express = require("express");
const app = express();
const io = require("./api/services/SocketIoService")
const spawn = require('child_process').spawn;
const socketIO = new io(app);
socketIO.server.listen(6969, () => console.log("server started"))
py = spawn('python',["E:\\QU\\Senior\\robot_positioning\\robot_position_map.py"])
py.stdout.on('data', (data)=>console.log(data));
py.stderr.on("error", err =>console.log(err))
Image used: gridMapBlue.JPG
Edit: added the code for creating the occupancy grid and the JavaScript code that I tried (but it didn't work).
As the OP wished: not 100% the solution they want, but a working adapted concept.
The main file is index.js:
const { spawn } = require("child_process");
const WebSocket = require("ws");

// create websocket server
// open "client.html" in a browser and see how the image flips between the two
const wss = new WebSocket.Server({
    port: 8080
});

// feedback
wss.on("connection", function connection(ws) {
    console.log("Client connected to websocket");
});

// spawn python child process
const py = spawn("python", ["image.py"]);
console.log("Python image manipulating process has pid:", py.pid)

// listen for the new image
py.stdout.on("data", (data) => {
    // broadcast the new binary image to all clients
    wss.clients.forEach((client) => {
        if (client.readyState === WebSocket.OPEN) {
            client.send(data);
        }
    });
});

py.stderr.on("data", (data) => {
    console.error(data.toString());
});
This communicates between the image-manipulating image.py and the clients (over WebSockets; it should be easy to change to Socket.IO if needed):
import time
import sys

toggle = True
while True:
    # alternate between the two demo images
    if toggle:
        f = open("image1.jpg", "rb")  # binary mode, so the raw JPEG bytes pass through
    else:
        f = open("image2.jpg", "rb")
    toggle = not toggle
    sys.stdout.buffer.write(f.read())  # write bytes, not text, to stdout
    sys.stdout.buffer.flush()
    f.close()
    time.sleep(2)
That file creates/manipulates an image and pushes the image binary over stdout to the parent index.js. For demo purposes we switch between two images that get sent to the client: "image1.jpg" and "image2.jpg" (use any JPEG images you want; if you need other MIME types/extensions, set them in the HTML client too).
As client I used a simple HTML document, client.html.
When you open the file it just shows a white/blank page, because no image has been loaded initially.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <canvas id="image">unsupported browser</canvas>
    <script>
        var ws = new WebSocket('ws://localhost:8080');

        ws.onopen = () => {
            console.log("Connected")
        };

        ws.onmessage = (evt) => {
            // re-create blob data from the websocket message
            var blob = new Blob([evt.data], {
                type: 'image/jpeg' // -> MIME type of the image goes here
            });
            // create blob url
            var url = URL.createObjectURL(blob);
            var image = document.getElementById("image");
            var img = new Image(0, 0);
            img.onload = () => {
                // get canvas context
                let ctx = image.getContext("2d");
                ctx.clearRect(0, 0, image.width, image.height);
                ctx.drawImage(img, 0, 0);
            };
            img.src = url;
        };
    </script>
</body>
</html>
The client renders the received image buffer on the page and refreshes/redraws it whenever the Python script "says so".
Install the only needed dependency: npm install ws
To start everything, run node index.js and open the client in a web browser of your choice (tested with Firefox on Ubuntu 18.04 LTS).
You should see the image change every two seconds. With that working, you can start writing the Python script that "animates" your moving pixel :)
The script was edited inline here, so it may not run on the first try. I tested it by copy and paste and fixed the issues I found (typos, wrong names, etc.).
If you need any help or changes, let me know.
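To connect this back to the original occupancy-grid simulation, image.py could encode each modified frame in memory instead of reading files from disk. A hypothetical sketch combining this answer's stdout streaming with the OP's grid animation (cv2.imencode compresses an array to JPEG bytes without touching the filesystem):

import sys
import time
import cv2

# hypothetical: load the grid as in the OP's script, then stream each frame
im_bw = cv2.imread("gridMapBlue.JPG", cv2.IMREAD_GRAYSCALE)
for pixel in range(200):
    im_bw[150, pixel] = 127                # move the dot, as in the OP's loop
    ok, buf = cv2.imencode(".jpg", im_bw)  # JPEG-encode the array in memory
    if ok:
        sys.stdout.buffer.write(buf.tobytes())
        sys.stdout.buffer.flush()
    time.sleep(1)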

Sharing an OpenCV image between C++ and Python using Redis

I want to share an image between C++ and Python using Redis. So far I have succeeded in sharing numbers. In C++ I use hiredis as the client; in Python I just do import redis. The code I have now is below:
C++ code to set the value:
VideoCapture video(0);
Mat img;
int key = 0;
while (key != 27)
{
    video >> img;
    imshow("img", img);
    key = waitKey(1) & 0xFF;
    reply = (redisReply*)redisCommand(c, "SET %s %d", "hiredisWord", key);
    //printf("SET (binary API): %s\n", reply->str);
    freeReplyObject(reply);
    img.da
}
Python code to get the value:
import redis

R = redis.Redis(host='127.0.0.1', port='6379')
while 1:
    print(R.get('hiredisWord'))
Now I want to share an OpenCV image instead of numbers. What should I do? Please help, thank you!
Instead of hiredis, here's an example using cpp_redis (available at https://github.com/cpp-redis/cpp_redis).
The following C++ program reads an image from a file on disk using OpenCV, and stores the image in Redis under the key image:
#include <opencv4/opencv2/opencv.hpp>
#include <cpp_redis/cpp_redis>

int main(int argc, char** argv)
{
    cv::Mat image = cv::imread("input.jpg");
    std::vector<uchar> buf;
    cv::imencode(".jpg", image, buf);

    cpp_redis::client client;
    client.connect();
    client.set("image", {buf.begin(), buf.end()});
    client.sync_commit();
}
Then in Python you grab the image from the database:
import cv2
import numpy as np
import redis
store = redis.Redis()
image = store.get('image')
array = np.frombuffer(image, np.uint8)
decoded = cv2.imdecode(array, flags=1)
cv2.imshow('hello', decoded)
cv2.waitKey()

How to stream a constantly changing image

The Setup:
I have an NGINX web server. On a web page I am running the following JavaScript to refresh the file tmp_image.jpg every half second:
var img = new Image();

img.onload = function() {
    var canvas = document.getElementById("x");
    var context = canvas.getContext("2d");
    context.drawImage(img, 0, 0);
    setTimeout(timedRefresh, 500);
};

function timedRefresh() {
    // just change src attribute, will always trigger the onload callback
    try {
        img.src = 'tmp_image.jpg?d=' + Date.now();
    } catch (e) {
        console.log(e);
    }
}

setTimeout(timedRefresh, 100);
With the HTML:
<canvas id="x" width="700px" height="500px"></canvas>
In another process, using a Raspberry Pi camera, I am running a script that writes an image to the file tmp_image.jpg with the picamera library:
for frame in cam.camera.capture_continuous(raw, format="rgb", use_video_port=True):
    img = frame.array
    cv2.imwrite("tmp_image.jpg", img)
The problem:
I am receiving the error:
GET http://192.168.1.234/images/tmp_img.jpg?d=1513855080653 net::ERR_CONTENT_LENGTH_MISMATCH
which then causes the refresh to crash (even though I have wrapped it in a try/catch).
I believe this is because NGINX is trying to read the file while it is being written to. So what should I do? And is this awfully inefficient?
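One common remedy for this class of problem (a sketch, not from the thread): write each frame to a temporary file, then atomically rename it into place, so NGINX only ever serves a complete file. On POSIX systems a rename within the same filesystem is atomic:

import os
import cv2

for frame in cam.camera.capture_continuous(raw, format="rgb", use_video_port=True):
    img = frame.array
    # write to a temporary name first; os.replace() swaps it in atomically,
    # so readers never observe a half-written tmp_image.jpg
    cv2.imwrite("tmp_image.jpg.part", img)
    os.replace("tmp_image.jpg.part", "tmp_image.jpg")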

Create image from string sent from iOS device

I have an iOS mobile application that sends an encoded image to a Python3 server.
static func prepareImageAndUpload(imageView: UIImageView) -> String?
{
    if let image: UIImage? = imageView.image {
        // You create a NSData from your image
        let imageData = UIImageJPEGRepresentation(imageView.image!, 0.5)
        // You create a base64 string
        let base64String = imageData!.base64EncodedStringWithOptions([])
        // And you percent-encode it to avoid any problems with special chars
        let encodeImg = base64String.stringByAddingPercentEncodingWithAllowedCharacters(.URLHostAllowedCharacterSet()) as String!
        return encodeImg
    }
    return nil
}
And I am trying to receive that image using the following code:
imageName = "imageToSave.jpg"
fh = open(imageName, "wb")
imgDataBytes = bytes(imgData, encoding="ascii")
imgDataBytesDecoded = base64.b64decode(imgDataBytes)
fh.write(imgDataBytesDecoded)
fh.close()
I create the image file successfully and nothing breaks. I can see that the file size is correct, but the image is not valid: it can't be opened and shows as broken.
I am not sure where the error can be, since the logic is as follows:
Encode image with base64 on iOS device
Send it
Decode image with base64 on Python3 server
Save image from decoded bytes
I have tried two new variants:
Remove stringByAddingPercentEncodingWithAllowedCharacters; the result was the same
Add urldecode on the Python 3 server; the result was the same
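For reference, a server-side decode chain that mirrors each iOS encoding step would look like this (a sketch; imgData is the hypothetical raw string received by the server, and unquote undoes stringByAddingPercentEncodingWithAllowedCharacters). Note that if the transport decodes '+' as a space, as HTML form parsers do, the base64 payload can be corrupted even though the file size still looks plausible:

import base64
from urllib.parse import unquote

# hypothetical: imgData is the percent-encoded base64 string from the client
imgDataUnquoted = unquote(imgData)              # undo the percent-encoding first
imageBytes = base64.b64decode(imgDataUnquoted)  # then undo the base64
with open("imageToSave.jpg", "wb") as fh:
    fh.write(imageBytes)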
