I'm trying to make a sorting robot using the LEGO Mindstorms EV3 kit.
Currently the robot is able to capture images and transfer them via Bluetooth to a standard laptop. The laptop is responsible for the image recognition and sends a prediction back to the EV3 robot. I've written a simple Python program which uses the scikit-learn library for the machine intelligence and a few other libraries for feature extraction etc. It currently works as it is, but I would like to get everything running on the EV3.
I've tried installing the libraries using pip and apt-get, and I've managed to get most of them installed on the EV3. My current problem is that I run out of memory while importing all the libraries in Python. I've tried limiting the imports as much as possible, but since I only have about 50 MB of RAM to work with, I quickly run into problems. I've even tried adding virtual memory (swap) to the EV3, but it didn't work.
1) Does anyone have experience with image recognition on the LEGO Mindstorms EV3? What libraries did you use? I might try TensorFlow, but I'm pretty sure I'll run into a similar memory problem.
2) Does anyone have experience implementing a simple machine learning algorithm in Python which can differentiate between images? My next attempt is going to be implementing a simple neural network; remember that I can still train the network on a big machine. Do you see any problems with this approach, and do you have any suggestions? I'm thinking of just a "simple" network trained with the backpropagation algorithm (see the sketch below).
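To make option 2) concrete, here is a minimal sketch of the inference side, assuming the network is trained on the laptop and only the weight matrices are copied over to the EV3; the file name net_weights.npz and the two-layer shape are assumptions, not part of any existing setup:

```python
import numpy as np  # the only dependency needed on the EV3

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Weights trained with backpropagation on the big machine and saved there with
# np.savez("net_weights.npz", W1=W1, b1=b1, W2=W2, b2=b2)
params = np.load("net_weights.npz")

def predict(features):
    """Forward pass of a two-layer network; `features` is a 1-D feature vector."""
    hidden = sigmoid(params["W1"] @ features + params["b1"])
    scores = sigmoid(params["W2"] @ hidden + params["b2"])
    return int(np.argmax(scores))  # index of the predicted class
```

The point of this split is that only NumPy has to be imported on the EV3, which keeps the memory footprint far below a full scikit-learn or TensorFlow import.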
Thanks
Related
I currently have an ML algorithm that I have been training in Colab. Since running this model is pretty heavy, I was wondering whether it is possible to connect a Python script that I have on, for example, my laptop to Colab, e.g. uploading an input to my ML algorithm and then getting the output sent back?
I have googled pretty heavily on this, but I am unable to find anyone discussing tying regular Python scripts to my Colab output.
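One common pattern is to wrap the model in a small HTTP endpoint inside the notebook and call it from the laptop; whether this is practical depends on how you expose the notebook to the outside world (e.g. with a tunnelling tool such as ngrok). A rough sketch, in which the `model` object, the `/predict` route and the tunnel URL are all assumptions:

```python
# --- In the Colab notebook: serve the already-trained model over HTTP ---
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                     # e.g. {"input": [0.1, 0.2, ...]}
    prediction = model.predict([payload["input"]])   # `model` = your trained model, defined earlier
    return jsonify({"prediction": prediction.tolist()})

# app.run(port=5000)  # then expose port 5000 with a tunnelling tool to reach it from outside Colab

# --- On the laptop: send an input and read the prediction back ---
import requests

resp = requests.post("https://<your-tunnel-url>/predict",
                     json={"input": [0.1, 0.2, 0.3]})
print(resp.json()["prediction"])
```

The laptop side is just a plain `requests` call, so the local script stays completely independent of the ML libraries running in Colab.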
I'm trying to use a custom Keras model, which I trained with tensorflow-gpu on my desktop, on a mobile phone (Android); however, I need to run it with Python on the phone as well. I looked up TensorFlow Lite, but that appears to be written for Java.
Is there any lite (Python) version of TensorFlow, some kind of barebones package that's just set up for making predictions from a TensorFlow/Keras model file? I'm trying to save space, so a solution under 50 MB would be desirable.
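For reference, prediction from a converted .tflite model in Python needs only the small tflite_runtime interpreter package, roughly like this; whether a wheel exists for your particular phone setup is something you would have to check, and the model path is an assumption:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # small, prediction-only package

# Load the model converted from the Keras model (path is an assumption)
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype
x = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()
prediction = interpreter.get_tensor(output_details[0]["index"])
print(prediction)
```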
Thanks
TensorFlow Serving was built for the specific purpose of serving pre-trained models. I'm not sure if it runs on Android (or how difficult it is to make it run), or what its compiled footprint is, i.e. whether it's under 50 MB. If you can make it work, please do report back here!
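If you do get TensorFlow Serving (or any HTTP-based model server) running somewhere the phone can reach, the client side stays tiny. A sketch against TF Serving's default REST endpoint, where the host and the model name "my_model" are assumptions:

```python
import requests

# TF Serving's REST API listens on port 8501 by default;
# "<server-host>" and "my_model" are placeholders for your setup.
url = "http://<server-host>:8501/v1/models/my_model:predict"

payload = {"instances": [[0.1, 0.2, 0.3]]}   # one input row
resp = requests.post(url, json=payload)
print(resp.json()["predictions"])
```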
I want to implement machine learning on a hardware platform that can learn by itself. Is there any way to make machine learning on hardware work seamlessly?
Python supports a wide range of platforms, including ARM-based ones.
Your Raspberry Pi supports Linux distros; just install Python and go on.
First, you may want to be clear on the hardware: there is a wide range of hardware with very different capabilities. For example, the Raspberry Pi is considered powerful hardware, while the ESP-EYE and Arduino Nano 33 BLE are considered low-end platforms.
It also depends on which ML solution you are deploying. I think the most widely deployed method is the neural network. Generally, the workflow is to train the model on a PC or in the cloud using lots of data; it is done there because of the large amount of resources needed to run backpropagation. Inference is much lighter and can be done on the edge devices.
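As a concrete illustration of that split, here is a rough sketch of training a tiny Keras network on the PC and exporting a lightweight artifact for the edge device; the layer sizes, the placeholder data and TFLite as the export format are assumptions, not requirements:

```python
import numpy as np
import tensorflow as tf

# --- On the PC / cloud: train with backpropagation, the expensive part ---
x_train = np.random.rand(1000, 16).astype("float32")   # placeholder data
y_train = np.random.randint(0, 3, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=5, verbose=0)

# --- Export a small, inference-only model for the edge device ---
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting model.tflite file can then be copied to the device and run with a lightweight interpreter, so only inference happens on the edge.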
The question may not seem concrete enough, so let me explain: I programmed a web application to visualize data from different sensors in a wireless network. The sensor data is stored in an SQLite database, which is connected as a client to an MQTT broker. The whole project is implemented on a Raspberry Pi 3, which is also the central node of the network.
For the whole project I used different software packages such as apache2, mosquitto and sqlite3. Furthermore, the RPi needs to be configured so that external hardware can be connected to it (GPIO, I2C, UART and some modules).
I wrote an installation guide with more than 60 commands.
What is the most efficient way to write a tool which installs and configures the Raspberry Pi with all the needed components (sh, bash, Python, ...)?
Maybe you can recommend some guides which explain sh and bash.
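Since Python is on the list of candidates, here is a minimal sketch of what such a provisioning tool could look like, driving apt and raspi-config from a script. The package list and the non-interactive raspi-config calls are assumptions to be adapted to the actual guide (check the option names against the raspi-config version on your image):

```python
#!/usr/bin/env python3
"""Sketch of a provisioning script for the RPi setup (run with sudo)."""
import subprocess

PACKAGES = ["apache2", "mosquitto", "mosquitto-clients", "sqlite3"]  # adjust to your guide

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop on the first failing command

def main():
    run(["apt-get", "update"])
    run(["apt-get", "install", "-y"] + PACKAGES)
    # Enable the interfaces your external hardware needs
    # (raspi-config's non-interactive mode; 0 means "enable" here)
    run(["raspi-config", "nonint", "do_i2c", "0"])
    run(["raspi-config", "nonint", "do_serial", "0"])

if __name__ == "__main__":
    main()
```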
I would configure one installation until you are satisfied and then use dd to clone your SD-card image. You can use dd again to perform the installation on another Raspberry Pi.
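If you want the cloning step itself to be repeatable, it can be wrapped in a short script as well. A sketch, assuming the card shows up as /dev/sdX in a reader on the machine doing the cloning and that the script runs with root privileges (both are assumptions, check with lsblk first):

```python
#!/usr/bin/env python3
"""Sketch: clone the configured SD card to an image, then write it to a fresh card."""
import subprocess

DEVICE = "/dev/sdX"           # the card reader's device node
IMAGE = "raspi-master.img"

# Read the finished installation into an image file
subprocess.run(["dd", f"if={DEVICE}", f"of={IMAGE}", "bs=4M", "status=progress"], check=True)

# Later, with a fresh card in the reader, write the image back out
subprocess.run(["dd", f"if={IMAGE}", f"of={DEVICE}", "bs=4M", "status=progress"], check=True)
subprocess.run(["sync"], check=True)
```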
Best regards, Georg
I have recently started coding my own agents in Python (not using the PyBrain library), and it has come to the point where I would like to start running some proper benchmarking. I would like to run on a few mazes, pole balancing and mountain car.
I am aware that PyBrain has these environments already coded; is it possible to interact with those environments by connecting them to my own code using some sort of bridge/interface? I saw something like the RLGlue bridge, but its state seems questionable.
Has anyone successfully used the PyBrain environments with a different bridge? I suppose the easiest solution would be to just adapt my code to the format of a PyBrain agent and forgo bridging, but I'd like to save time by just connecting the two.
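For the "adapt my code to PyBrain's agent format" route, a thin adapter class is usually enough to let PyBrain's experiment loop drive an external agent. The sketch below is written from memory of PyBrain's RL interface, so the import paths, method names and the `act`/`observe_reward` calls on your own agent are all assumptions to verify against the installed version:

```python
# Sketch of wrapping your own agent so PyBrain's Experiment loop can drive it.
# NOTE: import paths and method names are from memory of PyBrain's RL interface;
# check them against your installed version before relying on this.
from pybrain.rl.agents.agent import Agent
from pybrain.rl.environments.cartpole import CartPoleEnvironment, BalanceTask
from pybrain.rl.experiments import Experiment

class MyAgentAdapter(Agent):
    """Adapts an external (non-PyBrain) agent to PyBrain's agent interface."""

    def __init__(self, my_agent):
        self.my_agent = my_agent        # your existing agent object
        self.last_obs = None

    def integrateObservation(self, obs):
        self.last_obs = obs             # the experiment passes the current state here

    def getAction(self):
        return self.my_agent.act(self.last_obs)      # `act` is whatever your own API exposes

    def giveReward(self, r):
        self.my_agent.observe_reward(r)              # again, your own method name

# usage (plug in your own agent instance):
# env = CartPoleEnvironment()
# task = BalanceTask(env)
# experiment = Experiment(task, MyAgentAdapter(my_agent=your_agent))
# experiment.doInteractions(100)
```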
Thanks