While converting PNG to SVG, libssl not found. How to fix that? - python

Hello, I am trying to convert PNG images to SVG. On my Windows computer I could convert PNGs with this code:
import aspose.words as aw
doc = aw.Document()
builder = aw.DocumentBuilder(doc)
shape = builder.insert_image("negative.png")
shape.image_data.save("Output.svg")
But now I'm on Pop!_OS and it gives this error:
No usable version of libssl was found
Aborted (core dumped)
I tried updating openssl and installing libssl-dev.
Any ideas on how to fix that?

Actually, the code you are using does not convert PNG to SVG. The ImageData.save method saves the image in its original format, so the Output.svg file will simply be a PNG file with an .svg extension.
If you need to wrap your PNG in SVG, you can use ShapeRenderer:
doc = aw.Document()
builder = aw.DocumentBuilder(doc)
shape = builder.insert_image("C:\\Temp\\in.png")
shape.get_shape_renderer().save("C:\\Temp\\out.svg", aw.saving.ImageSaveOptions(aw.SaveFormat.SVG))
Also, please see Linux system requirements of Aspose.Words for Python.
You should install libssl on your system. For example, here is an Ubuntu Docker configuration:
FROM ubuntu:22.04
RUN apt update \
&& apt install -y python3.10 python3-pip libgdiplus wget \
&& wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.1l-1ubuntu1_amd64.deb \
&& dpkg -i ./libssl1.1_1.1.1l-1ubuntu1_amd64.deb \
&& rm ./libssl1.1_1.1.1l-1ubuntu1_amd64.deb \
&& python3.10 -m pip install unittest-xml-reporting==3.2.0
ENTRYPOINT ["/usr/bin/python3.10"]
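Once libssl is installed, you can sanity-check from Python that a libssl shared library is discoverable on the loader path. Note this is only a rough check of discoverability; the Aspose runtime performs its own lookup at startup:

```python
import ctypes.util

# find_library returns something like 'libssl.so.1.1' when a libssl
# shared library is discoverable on this system, or None when it is not
libssl = ctypes.util.find_library("ssl")
print("libssl found:" if libssl else "libssl NOT found:", libssl)
```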


Running LibreOffice converter on Docker

The problem is related to using the LibreOffice headless converter to automatically convert uploaded files. I am getting this error:
LibreOffice 7 fatal error - Application cannot be started
Ubuntu ver: 21.04
What I have tried:
Getting the file from Azure Blob storage,
put it into BASE_DIR/Input_file,
convert it to PDF using a Linux command that I run via subprocess,
put it into BASE_DIR/Output_file folder.
Below is my code:
I am installing the LibreOffice to docker this way
RUN apt-get update \
&& ACCEPT_EULA=Y apt-get -y install libreoffice
The main logic:
blob_client = container_client.get_blob_client(f"Folder_with_reports/")
with open(os.path.join(BASE_DIR, f"input_files/(unknown)"), "wb") as source_file:
    source_file.write(data)

source_file = os.path.join(BASE_DIR, f"input_files/(unknown)")  # original docs here
output_folder = os.path.join(BASE_DIR, "output_files")  # pdf files will be here

# assign the command for converting files through LibreOffice
command = rf"lowriter --headless --convert-to pdf {source_file} --outdir {output_folder}"
# running the command
subprocess.run(command, shell=True)

# reading the file and uploading it back to Azure Storage
with open(os.path.join(BASE_DIR, f"output_files/MyFile.pdf"), "rb") as outp_file:
    outp_data = outp_file.read()
blob_name_ = f"test"
container_client.upload_blob(name=blob_name_, data=outp_data, blob_type="BlockBlob")
Should I install lowriter instead of LibreOffice? Is it okay to use BASE_DIR for this kind of operation? I would appreciate any suggestions.
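As a side note on the subprocess call above: building the command with an f-string and shell=True breaks as soon as a path contains a space. A minimal sketch of the list form, which avoids shell quoting entirely (the file names here are made up):

```python
import subprocess

def build_convert_command(source_file, output_folder):
    # Each list element is passed to the program as one argument,
    # so paths containing spaces need no quoting at all
    return [
        "libreoffice", "--headless",
        "--convert-to", "pdf",
        source_file,
        "--outdir", output_folder,
    ]

cmd = build_convert_command("/tmp/My Report.docx", "/tmp/out dir")
# subprocess.run(cmd) would then receive each path as a single argument
```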
Partial solution:
Here I have simplified the case and created an additional Docker image with this Dockerfile.
I apply both methods: unoconv and direct conversion.
Dockerfile:
FROM ubuntu:21.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get -y upgrade && \
apt-get -y install python3.10 && \
apt update && apt install python3-pip -y
# Method1 - installing LibreOffice and java
RUN apt-get --no-install-recommends install libreoffice -y
RUN apt-get install -y libreoffice-java-common
# Method2 - additionally installing unoconv
RUN apt-get install -y unoconv
ARG CACHEBUST=1
ADD BASE.py /code/BASE.py
# copying input doc/docx files to the docker's linux
COPY /input_files /code/input_files
CMD ["/code/BASE.py"]
ENTRYPOINT ["python3"]
BASE.py
import os
import subprocess
BASE_DIR = "/code"
# subprocess.run("ls code/input_files", shell=True)
for filename in os.listdir('/code/input_files'):
    source_file = f"/code/input_files/(unknown)"  # original document
    output_filename = os.path.splitext(filename)[0] + ".pdf"
    output_file = f"/code/output_files/{output_filename}"
    output_folder = "/code/output_files"  # pdf files will be here
    # METHOD 1 - LibreOffice directly
    # assign the command for converting files through LibreOffice
    convert_to_pdf = rf"libreoffice --headless --convert-to pdf {source_file} --outdir {output_folder}"
    subprocess.run(convert_to_pdf, shell=True)
    subprocess.run(r'ls /code/output_files/', shell=True)
    ## METHOD 2 - Using unoconv - also working
    # convert_to_pdf = f"unoconv -f pdf {source_file}"
    # subprocess.run(convert_to_pdf, shell=True)
    # print(f'file (unknown) converted')
The above-mentioned methods work if the files were already in the Linux filesystem at build time. But I still haven't found a way to write files into the container's filesystem after building the Docker image.
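The path handling in BASE.py can at least be exercised without LibreOffice installed; here is a dry-run sketch (the directory names follow the Dockerfile above) that only derives the input and output paths:

```python
import os

def plan_conversion(filename, input_dir="/code/input_files", output_dir="/code/output_files"):
    # Mirror BASE.py's naming: keep the stem, swap the extension for .pdf
    source_file = os.path.join(input_dir, filename)
    output_name = os.path.splitext(filename)[0] + ".pdf"
    return source_file, os.path.join(output_dir, output_name)

src, dst = plan_conversion("report.docx")
```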

why do I get "AttributeError: module 'caffe' has no attribute 'Classifier'" when running Caffe?

My question is very close to this one.
I installed Caffe on Google Colab, and I am trying to run this open-source CNN model. The pre-trained models and the test can be downloaded here.
I followed these steps:
I repeated all the instructions in this notebook to install Caffe on my google Colab.
I then downloaded the pre-trained models including the test.py from this link (also mentioned above), using the following lines of code:
!wget http://opensurfaces.cs.cornell.edu/static/minc/minc-model.tar.gz
I run test.py, which includes this line of code:
net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))
but I get the following error:
AttributeError: module 'caffe' has no attribute 'Classifier'
Here are the code and the Caffe folder on Colab.
I see "classifier.py" in the caffe folder on Google Drive; are they the same thing? And if yes, how can I point the above-mentioned line of code at it?
Thanks in advance.
This is the code I run before test.py to install Caffe on Colab:
!ls
!git clone https://github.com/BVLC/caffe.git
!git reset --hard 9b891540183ddc834a02b2bd81b31afae71b2153 #reset to the newest revision that worked OK on 27.03.2021
# !sudo apt-cache search libhdf5-
# !sudo apt-cache search gflags
# !sudo apt --fix-broken install
!sudo apt-get install libgflags2.2
!sudo apt-get install libgflags-dev
!sudo apt-get install libgoogle-glog-dev
# !sudo apt-get install libhdf5-10 - replaced with 100
!sudo apt-get install libhdf5-100
!sudo apt-get install libhdf5-serial-dev
!sudo apt-get install libhdf5-dev
# !sudo apt-get install libhdf5-cpp-11 - replaced with 100
!sudo apt-get install libhdf5-cpp-100
!sudo apt-get install libprotobuf-dev protobuf-compiler
!find /usr -iname "*hdf5.so"
# got: /usr/lib/x86_64-linux-gnu/hdf5/serial
!find /usr -iname "*hdf5_hl.so"
!ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial.so /usr/lib/x86_64-linux-gnu/libhdf5.so
!ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial_hl.so /usr/lib/x86_64-linux-gnu/libhdf5_hl.so
#!find /usr -iname "*hdf5.h*" # got:
# /usr/include/hdf5/serial/hdf5.h
# /usr/include/opencv2/flann/hdf5.h
# Let's try the first one.
%env CPATH="/usr/include/hdf5/serial/"
#fatal error: hdf5.h: No such file or directory
!sudo apt-get install libleveldb-dev
!sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
!sudo apt-get install libsnappy-dev
!echo $CPATH
%cd caffe
!ls
!make clean
!cp Makefile.config.example Makefile.config
!sed -i 's/-gencode arch=compute_20/#-gencode arch=compute_20/' Makefile.config #old cuda versions won't compile
!sed -i 's/\/usr\/local\/include/\/usr\/local\/include \/usr\/include\/hdf5\/serial\//' Makefile.config #one of the 4 things needed to fix hdf5 issues
!sed -i 's/# OPENCV_VERSION := 3/OPENCV_VERSION := 3/' Makefile.config #We actually use opencv 4.1.2, but it's similar enough to opencv 3.
!sed -i 's/code=compute_61/code=compute_61 - gencode=arch=compute_70,code=sm_70 - gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75/' Makefile.config #support for new GPUs
!make all -j 4 # -j would use all available cores, but RAM-related errors occur
import cv2
print(cv2.__version__)
import caffe
# !./data/mnist/get_mnist.sh #Yann Lecun's hosting sometimes fails with 503 error
# So we use alternative source of mnist dataset
!wget www.di.ens.fr/~lelarge/MNIST.tar.gz
!tar -zxvf MNIST.tar.gz
!cp -rv MNIST/raw/* data/mnist/
!./examples/mnist/create_mnist.sh
This is the test.py that I run (at first I got the error "Place images to be classified in images/brick/*.jpg, images/carpet/*.jpg, ..."; I created a folder named 'images' and put example.jpg inside, which solved it):
#!/usr/bin/env python2
from __future__ import division
from __future__ import with_statement
from __future__ import print_function
import caffe
import numpy
import glob
import os.path
import sys
if __name__=='__main__':
    if not os.path.exists('images'):
        print('Place images to be classified in images/brick/*.jpg, images/carpet/*.jpg, ...')
        sys.exit(1)
    categories=[x.strip() for x in open('categories.txt').readlines()]
    arch='googlenet' # googlenet, vgg16 or alexnet
    net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))
    result={}
    for i,x in enumerate(categories):
        result[x]=[]
        for j,y in enumerate(sorted(glob.glob('images/{}/*'.format(x)))):
            z=net.predict([caffe.io.load_image(y)*255.0])[0]
            k=z.argmax()
            print(arch,y,categories[k],z[k],k==i)
            result[x].append(k==i)
    for i,x in enumerate(categories):
        print(arch,x,sum(result[x])/len(result[x]))
    print(arch,sum(sum(x) for x in result.values())/sum(len(x) for x in result.values()))
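The accuracy bookkeeping at the end of test.py can be illustrated in isolation with a toy result dict (the category names and booleans below are made up):

```python
# result maps each category to a list of booleans:
# was each image in that category classified correctly?
result = {
    "brick":  [True, True, False],
    "carpet": [True, False],
}
# Per-category accuracy, as in the second-to-last loop of test.py
per_category = {x: sum(v) / len(v) for x, v in result.items()}
# Overall accuracy, as in the final print: correct predictions over all images
overall = sum(sum(v) for v in result.values()) / sum(len(v) for v in result.values())
```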
and I receive this error:
AttributeError                            Traceback (most recent call last)
in ()
18
19 arch='googlenet' # googlenet, vgg16 or alexnet
---> 20 net=caffe.Classifier('deploy-{}.prototxt'.format(arch),'minc-{}.caffemodel'.format(arch),channel_swap=(2,1,0),mean=numpy.array([104,117,124]))
21
22 result={}
AttributeError: module 'caffe' has no attribute 'Classifier'

Object detection with the yolov4 model on a Raspberry Pi 3 B+ using TensorFlow Lite

I am trying to run the yolov4 model on a Raspberry Pi 3 B+ using TensorFlow Lite.
I took the code and tried to follow the instructions from the following link; it ran successfully on my PC but not on the Raspberry Pi:
https://github.com/haroonshakeel/tensorflow-yolov4-tflite
I used these commands in Raspbian:
cd Projects/tflite/
python -m pip install virtualenv
python -m venv tflite-env
source tflite-env/bin/activate
sudo apt -y install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev
sudo apt -y install qt4-dev-tools libatlas-base-dev libhdf5-103
python -m pip install opencv-contrib-python==4.1.0.25
uname -a
uname -m
python --version
python -m pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
And for the run:
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4 --framework tflite
And it gave me:
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info
I ran the following command for the weights:
python convert_tflite.py --weights ./checkpoints/yolov4-416 --output ./checkpoints/yolov4-416-fp16.tflite --quantize_mode float16
And had this error:
OSError: Unable to create file (unable to open file: name = ' ./checkpoints/yolov4-416', erno = 21, error message = 'Is a directory', flags = 13, o_flags = 242)
Finally, when I tried to run the detection with this command:
python detect_video.py --weights ./checkpoints/yolov4-416.tflite --size 416 --model yolov4 --video ./data/videoNIR.AVI
I had this error:
TypeError(): load() missing 1 required positional argument: 'export dir'
Any ideas on how to solve these errors?
Thanks.
The commands for building the tflite model should not be executed on the Raspberry Pi. Do everything on your PC, and afterwards execute only the detection command on the Raspberry Pi. Note also that if you have not connected a screen to your Raspberry Pi, the code will not work, because it still looks for a GUI in which to display the results, which it will not find!
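Regarding the missing-screen point: on an X11 setup you can check from Python whether a display is available before calling any GUI code. A hedged sketch (an unset DISPLAY variable usually means no GUI; cv2.imshow is the kind of call that fails headless):

```python
import os

def have_display():
    # On a headless Raspberry Pi no X server is running, so DISPLAY is unset;
    # GUI calls such as cv2.imshow() would then fail
    return bool(os.environ.get("DISPLAY"))

print("display available:", have_display())
```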

How to install pjsua2 packages for python?

I am trying to create softphone using Python.
I found this link describing pjsua2, but there are no clear steps defining how to install the pjsua2 package for Python.
Can anyone please give me clear steps for installing pjsua2 so that it can be used in Python?
These steps should work:
Step 1: Create a directory: /PJSUA2/pjproject/src
Step 2: Install the required modules:
sudo apt-get install libasound2-dev libssl-dev libv4l-dev libsdl2-dev libsdl2-gfx-dev libsdl2-image-dev libsdl2-mixer-dev libsdl2-net-dev libsdl2-ttf-dev libx264-dev libavformat-dev libavcodec-dev libavdevice-dev libavfilter-dev libavresample-dev libavutil-dev libavcodec-extra libopus-dev libopencore-amrwb-dev libopencore-amrnb-dev libvo-amrwbenc-dev subversion
Step 3: Download the source code: svn co http://svn.pjsip.org/repos/pjproject/trunk pjproject
Step 4: Compile the main library and install it. If you are trying it on an RPI, refer to this link. Basically you need to set proper CFLAGS and ensure third_party/build/os-auto.mak.in is properly configured for your platform.
$ cd pjproject
$ ./configure --enable-shared
$ make dep
$ make
$ sudo make install
Step 5: Compile and install the Python module. Again, ensure you have a proper user.mak if you are compiling it for an RPI:
$ cd pjsip-apps/src/swig/
$ make
$ make install
Step 6: Check the installed module:
$ python
>>> import pjsua2
These steps are exactly what is mentioned here, except for the RPI twist.
Update #1:
And don't forget to set ep_cfg.uaConfig.threadCnt = 0, or else you will get a segmentation fault. So the sample code on the PJSUA2 page needs this change:
import pjsua2 as pj

def pjsua2_test():
    # Create and initialize the library
    ep_cfg = pj.EpConfig()
    ep_cfg.uaConfig.threadCnt = 0  # Python does not like PJSUA2's threads; nonzero values can result in a segmentation fault
    ep = pj.Endpoint()
    ep.libCreate()
    ep.libInit(ep_cfg)
The steps look like they are listed here: https://trac.pjsip.org/repos/wiki/Python_SIP/Build_Install
I ran through them and they seemed to work without any problem on Mac OS X. What was the exact issue you were running into?

how to build opencv for python3 when both python2 and python3 are installed

I was trying to build OpenCV for python3. However, cmake always sets the Python build option to python2.7.11, even after I manually specified the include and lib options for python3:
-- Python 2:
-- Interpreter: /home/ryu/anaconda2/bin/python2.7 (ver 2.7.11)
-- Python 3:
-- Interpreter: /usr/bin/python3 (ver 3.4.3)
-- Libraries: /usr/lib/x86_64-linux-gnu/libpython3.4m (ver 3.4.3)
-- numpy: /home/ryu/.local/lib/python3.4/site-packages/numpy/core/include (ver 1.11.0)
-- packages path: lib/python3.4/dist-packages
--
-- **Python (for build): /home/ryu/anaconda2/bin/python2.7**
Did I miss some cmake option?
OS: Ubuntu 14.04
Thanks
You can override the python executable to build against by passing the PYTHON_DEFAULT_EXECUTABLE argument with the path of the python executable during the cmake invocation:
cmake {...} -DPYTHON_DEFAULT_EXECUTABLE=$(which python3) ..
I was struggling with this one for some hours and the answers mentioned above didn't solve the problem straightaway.
Adding to Ivan's answer, I had to include these flags in cmake to make this work:
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=ON \
-D HAVE_opencv_python3=ON \
-D PYTHON_DEFAULT_EXECUTABLE=<path_to_python3>
I leave that here, so it is maybe useful for someone else in the future.
It took me some hours. I built a Dockerfile with OpenCV for python3.
The key line is:
pip install numpy
Full Dockerfile:
FROM python:3.8
RUN apt-get update && apt-get -y install \
cmake \
qtbase5-dev \
libdc1394-22-dev \
libavcodec-dev \
libavformat-dev \
libswscale-dev
RUN cd /lib \
&& git clone --branch 4.1.1 --depth 1 https://github.com/opencv/opencv.git \
&& git clone --branch 4.1.1 --depth 1 https://github.com/opencv/opencv_contrib.git
RUN pip install numpy \
&& mkdir /lib/opencv/build \
&& cd /lib/opencv/build \
&& cmake -DCMAKE_BUILD_TYPE=RELEASE -DCMAKE_INSTALL_PREFIX=/usr/local -DWITH_TBB=ON -DWITH_V4L=ON -DWITH_QT=ON -DWITH_OPENGL=ON -DWITH_FFMPEG=ON -DOPENCV_ENABLE_NONFREE=ON -DOPENCV_EXTRA_MODULES_PATH=/lib/opencv_contrib/modules .. \
&& make -j8 \
&& make install
CMD ["bash"]
The main point is to force the compiler to build the cv2 module for python3.
To make that happen, python3 should be included in the 'To be built' line in the CMakeCache.txt file in opencv's build folder.
ref https://breakthrough.github.io/Installing-OpenCV/
If there are any errors, ensure that you downloaded all the required packages; the output should help track down what is missing. To ensure the Python module will be built, you should see python2 in the list of configured modules after running cmake (in my case, python3).
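If you want to check this programmatically rather than by eye, a small sketch (the summary text below is a hand-written approximation of cmake's configure output, not captured from a real run):

```python
def python3_will_be_built(cmake_summary):
    """Return True if the 'To be built:' module list mentions python3."""
    for line in cmake_summary.splitlines():
        if "To be built:" in line:
            # Take everything after the label and look for the exact module name
            return "python3" in line.split("To be built:")[1].split()
    return False

sample = (
    "--   OpenCV modules:\n"
    "--     To be built: core imgproc python3 videoio\n"
)
```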
I've been trying to install OpenCV on a Pi 3 and this solution didn't work for me, as python (for build) was always set to Python 2.7, but I found that changing the order of an elseif statement at the bottom of 'OpenCVDetectPython.cmake' fixed the problem. For me, this file is located at '~/opencv-3.3.1/cmake'.
The original code segment:
if(PYTHON_DEFAULT_EXECUTABLE)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
elseif(PYTHON2INTERP_FOUND) # Use Python 2 as default Python interpreter
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON2_EXECUTABLE}")
elseif(PYTHON3INTERP_FOUND) # Use Python 3 as fallback Python interpreter (if there is no Python 2)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON3_EXECUTABLE}")
endif()
My re-ordered code segment:
if(PYTHON_DEFAULT_EXECUTABLE)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
elseif(PYTHON3INTERP_FOUND) # Use Python 3 as fallback Python interpreter (if there is no Python 2)
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON3_EXECUTABLE}")
elseif(PYTHON2INTERP_FOUND) # Use Python 2 as default Python interpreter
    set(PYTHON_DEFAULT_AVAILABLE "TRUE")
    set(PYTHON_DEFAULT_EXECUTABLE "${PYTHON2_EXECUTABLE}")
endif()
I don't know the reasoning behind it, but cmake is set to default to python2 if python2 exists; swapping the order of these elseif statements switches it to default to python3 if that exists.
** Disclaimer **
I was using the script found at https://gist.github.com/willprice/c216fcbeba8d14ad1138 to download, install and build everything
(the script was modified to not create a virtual environment, as I didn't want one, and to use -j1 instead of -j4, as the build failed around 85% when running with multiple cores).
I don't think the relevant file exists until you have attempted a build.
Changing the options in cmake did nothing for me no matter what options I modified. The simplest (hacky) solution for me was to
sudo mv /usr/bin/python2.7 /usr/bin/pythonNO-temp
Then you build and install opencv
then
sudo mv /usr/bin/pythonNO-temp /usr/bin/python2.7
