How to import a Python/OpenCV/OCR script into Android Studio?

I'm trying to build a small Android app that can recognize 7-segment digits (or other types of numbers) and display them on the screen.
I was able to write a small script in Python that recognizes arbitrary letters/numbers using OpenCV and Tesseract. For 7-segment digits it doesn't seem to be as easy, so I'm trying a machine-learning approach based on this nice tutorial:
Simple Digit Recognition OCR in OpenCV-Python
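For reference, a minimal sketch of that tutorial's kNN approach (the training-data file names are hypothetical and assume a training set collected as described in the tutorial):

    import cv2
    import numpy as np

    # One flattened 10x10 digit image per row in samples.data, with the
    # matching labels in responses.data (hypothetical file names).
    samples = np.loadtxt('samples.data', np.float32)
    responses = np.loadtxt('responses.data', np.float32).reshape(-1, 1)

    model = cv2.ml.KNearest_create()
    model.train(samples, cv2.ml.ROW_SAMPLE, responses)

    def classify_digit(gray_roi):
        """Resize a grayscale digit ROI to 10x10 and classify it with kNN."""
        roi = cv2.resize(gray_roi, (10, 10)).reshape(1, 100).astype(np.float32)
        _, result, _, _ = model.findNearest(roi, k=1)
        return int(result[0][0])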
On the other hand, I still have to learn how to use Android Studio (3.4.2), but I managed to import the OpenCV library (3.4.0) following this guide:
https://android.jlelse.eu/a-beginners-guide-to-setting-up-opencv-android-library-on-android-studio-19794e220f3c
I have a general question: what strategy should I follow to finish this project? Just the broad strokes, or more detail if you want. Since I almost managed to make it work nicely in Python, how can I bring my Python code into Android Studio? Sorry if this is a naive question; I just need to know whether I'm approaching the problem from the right angle. Is there an easy way to convert it to Java?
Thanks for your advice.

Related

Python 3.9, Tesseract 5, PyQt5 from desktop to Android device. Possible?

I have made a small non-commercial Python program that performs some simple OCR tasks. It works well enough on both Windows and Linux, but the usage doesn't feel right, and I figured out that it would be much better running on an Android device.
The problem is I have almost zero knowledge of Java (I'm just learning), so I would like to "reuse" my Python code.
So to be precise, my question is:
Is it possible to port my Python 3.9 program, which uses [OpenCV 2, Tesseract 5, PyQt5, pandas, NumPy and matplotlib], to an Android device?
If so, I would really appreciate some pointers in the right direction.
I have done my fair share of googling and found out about Kivy, but also about pyqtdeploy; now I'm not sure if I should just trash the PyQt5 code and replace it with Kivy.
The more puzzling matter for me is Tesseract. It seems logical that it should work on an Android device, but on the desktop I had to install the Tesseract executable, and I have no idea how to do that on Android. I also have a .traineddata file that I really need; would it work on Android too? (See the sketch after this question.)
Well, I think that's more than enough questions for a first-timer.
Thanks in advance
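On the Tesseract/.traineddata point specifically: on the desktop, the engine only needs to be pointed at the directory holding the .traineddata file. A minimal sketch with pytesseract (the 'mylang' language name and file paths are hypothetical):

    import pytesseract
    from PIL import Image

    # Read text using a custom mylang.traineddata stored in ./tessdata
    # (hypothetical language name and paths).
    text = pytesseract.image_to_string(
        Image.open('scan.png'),
        lang='mylang',
        config='--tessdata-dir ./tessdata',
    )
    print(text)

Android wrappers such as tess-two take a tessdata path in much the same way, so the trained-data file itself should carry over.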

How to compile a Python/Qt5 app and create a single-app custom Raspberry Pi ISO

I have written a small program in Python and Qt5. I want to run it on a Raspberry Pi as a single application. I have checked out Buildroot and Yocto but can't seem to find a single tutorial that explains the steps from compiling the Python code to making it part of Buildroot/Yocto. Can anyone please guide me or point me to a tutorial?
Regards,
In Buildroot, there are two ways that you can add your own Python code to the build.
Using a root filesystem overlay to add your Python files in the appropriate place in the root filesystem. This has a lot of limitations though: the code doesn't get byte-compiled, you have to make sure yourself that it gets installed somewhere on the PYTHONPATH, and generally it's a bit more difficult to maintain. However, it's a very simple approach for a first try.
Creating a custom Python package. That's also not really complicated to do, but you really do have to read the documentation. A minimal sketch of the application side is shown below.
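For that second option, Buildroot's python-package infrastructure typically builds an ordinary setuptools project, so on the application side you would start from a setup.py like this sketch (package and module names are hypothetical):

    # Minimal setup.py so Buildroot's python-package infrastructure can
    # build, byte-compile and install the app (names are placeholders).
    from setuptools import setup, find_packages

    setup(
        name='myqtapp',
        version='0.1',
        packages=find_packages(),
        entry_points={
            # installs a `myqtapp` launcher calling myqtapp/main.py:main()
            'console_scripts': ['myqtapp = myqtapp.main:main'],
        },
    )

The matching Buildroot package would then declare something like PYTHON_MYQTAPP_SETUP_TYPE = setuptools in its .mk file, as described in the Buildroot manual's chapter on adding new packages.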

Dynamic Gesture Recognition and Kinect with Python?

I am working with the "Kinect for Xbox One" on my HP laptop with Windows 10 and a 64-bit operating system. I have worked with Python before and want to stick with it, using only a Jupyter notebook or the Python command line.
The topic of my project is dynamic sign language recognition, and until now I have worked only on static images. I found many tutorials for working with the Kinect camera, but every tutorial uses C++, C# or a Sketch in Processing 3. I have downloaded Processing 3 and tried some programs in Sketch by following this link: https://www.youtube.com/watch?v=XKatPT3HlqA
But even after two days I am not able to run a simple program in it; the output is only a black picture, although the Kinect is detected.
I have also tried PyKinect and the Python examples from this link: https://github.com/Kinect/PyKinect2
That worked well and I was able to track the skeleton of the body. I want to learn PyKinect through many more such examples, but I cannot find any source to learn all this from. My aim is to use all three cues: RGB, depth, and skeleton for my work.
Even for dynamic gesture recognition, the existing projects are in C++ and languages other than Python.
If you have any suggestions regarding the Kinect with Python and dynamic gesture recognition, they are welcome.
After searching for days, I figured out that there are no tutorials on the Kinect using Python. Those who want to learn the Kinect with Python on Windows should go to this link first: https://github.com/Kinect/PyKinect2
Follow the instructions and run the example programs, whether in Visual Studio, the Python command line or a Jupyter notebook. There are no tutorials documenting the functions of the PyKinect2 library. The only other way to learn it is through one more link:
https://github.com/Microsoft/PTVS
Explore this link, as it has one or two more examples that will help in understanding the functions. I am not done yet, so I will keep updating my answer if I find any more sources.
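To give a feel for the library, here is a minimal sketch of grabbing color frames with PyKinect2, modeled on the examples in that repository (Windows only):

    import numpy as np
    from pykinect2 import PyKinectV2, PyKinectRuntime

    # Open the sensor for color frames only; OR in FrameSourceTypes_Body
    # or FrameSourceTypes_Depth to get the other cues as well.
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Color)

    while True:
        if kinect.has_new_color_frame():
            frame = kinect.get_last_color_frame()  # flat BGRA byte buffer
            h = kinect.color_frame_desc.Height
            w = kinect.color_frame_desc.Width
            image = frame.reshape((h, w, 4)).astype(np.uint8)
            # `image` is now an OpenCV-style array ready for processing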

How to capture the screen on Wayland (GNOME) in Python code?

I'm trying to capture my screen using Python because I'll use the frames with OpenCV, but I couldn't find a way to make it work on GNOME, since GNOME uses Wayland and all the libraries I've found only work with X11.
For now I'm not considering changing my desktop environment; I'm searching for a solution to this problem.
Does someone know a solution?
To be more specific, I'll use the images to train an AI, so I need them continuously.
EDIT:
I've found this, but how can I pass frames to OpenCV in Python instead of saving a video file?
The proper way to do screencasting these days is by using the Screencast portal, which is part of XDG desktop portals and is already supported by GNOME, KDE, wlroots (and more). As an added advantage, this will also work in containerized formats like Flatpaks.
You can find an example of how to do screencasting in Python using this snippet, created by one of the Mutter maintainers. If you look for parse_launch(), you will see a GStreamer pipeline which you can modify to include the GStreamer OpenCV elements that can do the processing for you.
Note: in your edit, you link to a predecessor of that portal, which is a GNOME-specific, internal API, so I wouldn't rely on it ;-)
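To sketch the OpenCV hand-off: instead of ending that pipeline in a file sink, you can end it in an appsink and pull frames out as NumPy arrays. A minimal example under that assumption (videotestsrc stands in for the portal's PipeWire source so the sketch runs anywhere):

    import gi
    gi.require_version('Gst', '1.0')
    from gi.repository import Gst, GLib
    import numpy as np

    Gst.init(None)

    # videotestsrc is a stand-in; a real screencast would use the PipeWire
    # node handed out by the ScreenCast portal instead.
    pipeline = Gst.parse_launch(
        'videotestsrc ! videoconvert ! video/x-raw,format=BGR ! '
        'appsink name=sink emit-signals=true max-buffers=1 drop=true'
    )

    def on_new_sample(appsink):
        sample = appsink.emit('pull-sample')
        caps = sample.get_caps().get_structure(0)
        h, w = caps.get_value('height'), caps.get_value('width')
        buf = sample.get_buffer()
        ok, info = buf.map(Gst.MapFlags.READ)
        if ok:
            frame = np.frombuffer(info.data, dtype=np.uint8).reshape((h, w, 3))
            # `frame` is a BGR image: hand it to cv2 here instead of saving it
            buf.unmap(info)
        return Gst.FlowReturn.OK

    pipeline.get_by_name('sink').connect('new-sample', on_new_sample)
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()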

Is there any way to run Python on Bada?

I'm starting to develop for the Bada platform, but C++ isn't my favorite language. So, is there any way to run Python on Bada?
Update: for Android there is a scripting layer (SL4A), which makes it possible to quickly prototype applications for Android on the device itself using high-level scripting languages. Is there nothing like that for Bada?
Thanks.
Is it possible to use Python on Bada?
In simple words: no.
Applications must be written originally in C/C++/Objective-C. No third-party APIs, development tools or "code translators" (e.g. from Python to C++) are allowed.
You can't even compile very classic libraries such as OpenSSL or libcurl. The support for the STL is not complete.
The Bada platform APIs are a lot more closed than Apple's.
You can try boost::python, but I'm not sure whether it would work properly.
