How do I send a response from a Raspberry Pi to a web client - python

I am doing a project on attendance tracking using a Raspberry Pi, with Python and OpenCV. I installed Python and OpenCV on the RPi, then trained the system with already-available face images of some students and stored the training data in a separate file. Then, using the Pi camera, I captured images and ran recognition against the trained data. When I run each step separately, everything works fine.
But I have a couple of questions:
1) How can I run the code on the Raspberry Pi from an Android app?
2) How do I send the correctly identified face labels back to the Android app?
I need to do it with Node.js. If that's possible, please guide me.
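One common pattern is to expose the last recognition result over HTTP on the Pi, which the Android app (or a Node.js relay) can then poll. The same endpoint could equally be implemented in Node.js (e.g. with Express); the sketch below stays in Python since the recognition code is Python, uses only the standard library, and invents the /last path and the payload fields purely for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory result; in the real project the OpenCV
# recognition loop would update this after each capture.
last_result = {"student": "alice", "confidence": 0.91}

class AttendanceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/last":
            body = json.dumps(last_result).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

def serve(host="0.0.0.0", port=8080):
    # Call this on the Pi; the Android app (or a Node.js relay)
    # then issues GET http://<pi-address>:8080/last
    HTTPServer((host, port), AttendanceHandler).serve_forever()
```

The app can poll this endpoint after each capture, or a small Node.js process can sit in between if you need push notifications or authentication.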

Related

Sending data from raspberry pi to webserver

I have built a weather station using a Raspberry Pi and an Arduino, and it takes readings every 10 minutes. I have written code that saves all the readings locally in a database (MariaDB) and in Excel.
I have also built a web server (using Apache) on a separate Raspberry Pi to host a website. I have used simple scripts (HTML, CSS and PHP) to start building the site, but at the moment it is very basic and its only dynamic feature is displaying the time. I have also installed MariaDB on the web server in case the database is needed there. My goal is to send the data from the weather station to the website, but I am pretty new to this and not sure how to begin. I would appreciate any help or direction you can give me.
So far I have looked at various sources and similar projects, but I have not found a comprehensive set of suggestions.
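One straightforward direction: have the weather-station Pi POST each reading to a PHP endpoint on the web-server Pi, and let the PHP script insert it into that server's MariaDB. A minimal standard-library sketch of the sending side; the endpoint URL, PHP filename, and field names are assumptions you would replace with your own:

```python
import urllib.parse
import urllib.request

# Hypothetical endpoint on the web-server Pi; adjust the hostname
# and path to match your Apache/PHP setup.
ENDPOINT = "http://webserver.local/insert_reading.php"

def encode_reading(temperature_c, humidity_pct, pressure_hpa):
    """Encode one sensor reading as application/x-www-form-urlencoded bytes."""
    return urllib.parse.urlencode({
        "temperature": temperature_c,
        "humidity": humidity_pct,
        "pressure": pressure_hpa,
    }).encode("ascii")

def post_reading(temperature_c, humidity_pct, pressure_hpa, url=ENDPOINT):
    """POST a reading; the PHP side reads $_POST and inserts into MariaDB."""
    data = encode_reading(temperature_c, humidity_pct, pressure_hpa)
    req = urllib.request.Request(url, data=data, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # 200 on success
```

You would call post_reading(...) from the existing 10-minute loop; the website's PHP can then SELECT from the same table to render the data dynamically.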

Is there a way to convert/deploy a Python project based on MediaPipe and OpenCV on Android?

I have made a pose-estimation project using MediaPipe and OpenCV in Python. Should I convert or deploy it to Android devices using tools like BeeWare/Chaquopy, or recode the whole thing in Kivy?
I am open to suggestions on changes to the project to make it work on Android.

How to send data from an Android/iOS app to a Python script running on a Raspberry Pi using Bluetooth?

My friend and I are working on a project where a Raspberry Pi with certain sensors collects some data and outputs it.
We want certain parameters to be passed at runtime, so we were thinking of creating a React Native app that connects to the Pi over Bluetooth and, once connected, sends the arguments.
The Python script would then receive these arguments, run its program, and send its output back to the app.
While searching for libraries to help with this, I came across react-native-ble-plx: https://github.com/Polidea/react-native-ble-plx
While going through the documentation, I found https://github.com/Polidea/react-native-ble-plx/wiki/Characteristic-Writing, which seems to be the method used for writing data and sending it.
In Bluetooth terms, what exactly are these services and characteristics, and do I have to create my own service and characteristic for this project? Or can I write to an existing characteristic that the Python script can then read?
If I have misunderstood any concepts, please correct me. Also, if there is a better way to architect this, please let me know.
Currently, there is no official Expo Bluetooth API. I looked at the same library you are considering and found it a little complicated.
I found this link: https://askubuntu.com/questions/838697/share-files-between-2-computers-via-bluetooth-from-terminal. There you open one terminal and use bluetoothctl to connect to a specific device, then open another terminal and use bluetooth-sendto --device=MAC:ADDRESS (see that link) to send a file from the Raspberry Pi to a phone (I tested this on Android and it worked).
You can also send data from the phone to the Raspberry Pi. Again, check out that link.
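If BLE services and characteristics feel like overkill, one alternative worth considering is a classic-Bluetooth RFCOMM "serial" server on the Pi, which Python's standard library supports directly on Linux/BlueZ (no GATT involved); the app side would then use an RFCOMM/serial Bluetooth library rather than react-native-ble-plx. A sketch, where the key=value message format is invented for illustration:

```python
import socket

def parse_args(line: bytes) -> dict:
    """Parse a 'key=value,key=value' message sent by the app."""
    pairs = (item.split("=", 1)
             for item in line.decode().strip().split(",") if item)
    return {key: value for key, value in pairs}

def run_rfcomm_server(channel=1):
    # Classic-Bluetooth RFCOMM stream server (Linux/BlueZ only).
    # The Pi must already be paired with and discoverable to the phone.
    srv = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM,
                        socket.BTPROTO_RFCOMM)
    srv.bind((socket.BDADDR_ANY, channel))
    srv.listen(1)
    conn, addr = srv.accept()
    try:
        params = parse_args(conn.recv(1024))
        # ... run the sensor program with `params`, then reply:
        conn.sendall(b"result=ok\n")
    finally:
        conn.close()
        srv.close()
```

This trades the structure of GATT (per-value characteristics, notifications) for a simple byte stream, which is often enough for passing a handful of runtime parameters.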

Image Recognition on a LEGO EV3 Embedded System (Python)

I'm trying to make a sorting robot using the LEGO Mindstorms EV3 kit.
Currently the robot is able to capture images and transfer them via Bluetooth to a standard laptop. The laptop is responsible for the image recognition and sends a prediction back to the EV3 robot. I've written a simple Python program that uses the scikit-learn library for the machine learning and a few other libraries for feature extraction etc. It currently works as is; however, I would like to get everything running on the EV3.
I've tried installing the libraries using pip and apt-get, and I've managed to get most of them installed on the EV3. My current problem is that I'm running out of memory while importing the libraries in Python. I've tried limiting the imports as much as possible, but since I only have about 50 MB of RAM to work with, I quickly run into problems. I've even tried adding virtual RAM (swap) to the EV3, but it didn't work.
1) Do any of you have experience with image recognition on the LEGO Mindstorms EV3? Which libraries did you use? I might try TensorFlow, but I'm pretty sure I'll run into a similar memory problem.
2) Do any of you have experience implementing a simple machine-learning algorithm in Python that can differentiate between images? My next attempt will be to implement a simple neural network; remember that I can still train the network on a big machine. Do you see any problems with this approach, and do you have any suggestions? I'm thinking of just a "simple" neural network trained with the backpropagation algorithm.
Thanks
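On the train-elsewhere idea: since most of the memory cost is in importing the libraries rather than in the model itself, one option is to train on the laptop (e.g. with scikit-learn), export the raw weights, and run only a tiny dependency-free forward pass on the EV3. A sketch under that assumption; the layer shapes, sigmoid activation, and weight format are illustrative, not a fixed design:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights, biases):
    """One fully connected layer: weights is [out][in], biases is [out]."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def predict(features, layers):
    """layers is a list of (weights, biases) pairs exported after training
    on the big machine; only this pure-Python code runs on the EV3."""
    for weights, biases in layers:
        features = dense(features, weights, biases)
    return features.index(max(features))  # index of the winning class
```

The weights could be shipped to the brick as a plain JSON or CSV file, so the EV3 never needs numpy, scikit-learn, or TensorFlow in memory.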

Using OpenCV with Python for iOS apps

I am looking for ways to write an OpenCV program in Python that detects objects in real time for an iOS app. Since I am using Python, it will probably end up as a mobile web application. Is there a way to initialize the iOS camera (such as an iPhone's) to get a live video feed? In desktop applications, OpenCV normally initializes the desktop camera with
cam = cv2.VideoCapture(0). How does this work for iOS devices, when creating mobile web apps?
