Connecting to Colab from a Python script elsewhere - python

I currently have an ML algorithm that I have been training in Colab, since running the model is pretty heavy. I was wondering if it is possible to connect a Python script on, for example, my laptop to Colab, e.g. uploading an input to my ML algorithm and then getting the output sent back?
I have googled this pretty heavily, but I am unable to find anyone discussing tying regular Python scripts to Colab output.

Related

How to run a program on a schedule - Google colab

Over the past few weeks I've been coding a program which runs a reddit bot locally from my machine. I've perfected it such that it does not reply to the same comment which it has replied to before, it runs quite efficiently, and in my opinion is complete.
Now, I'm looking for a way to get the program to run on a schedule. I currently have the code in google colab, and I don't know how to use google colab for this functionality.
The program does not require any local storage, it's one code file, and it does not need much memory, so I wanted to ask if anyone has a resource with a detailed, beginner-accessible tutorial which I could use to host this code.
Note: the code requires an installation of PRAW; in Google Colab I simply run !pip install PRAW. If that changes what I need to do, what should I do differently?
Thank you in advance.
Google Colab is not designed for this kind of thing, and most likely it cannot be used to run your app on a schedule.
Probably the easiest solution is some kind of Continuous Integration tool that lets you run code remotely.
Step 1 would be to host your code in a remote code repository like GitHub. Since the bot most likely won't have to be interactive, switching from a Colab notebook to a plain Python script will make your configuration much easier later on.
Step 2 would be connecting that repo to a CI tool. One I am familiar with that lets you run pipelines on a schedule is CircleCI; this tutorial shows a very simple configuration for running Python scripts from a pipeline.

is there a way to run code on GPUs from my terminal?

I have been using GPU acceleration services like Google Colab for a while, but I am not satisfied. I don't like having to write all my code in Jupyter notebooks, and I have some other issues too. I am wondering if there is a way to set something up where I could just run a command from my terminal, something like upload train.py to upload a file train.py to a server, and then later run train.py or something like that to run it on that server, and have the output appear in my local terminal. Does anyone know a way to achieve something like this?
.. if there is a way to get something set up where I could just run a command from my terminal, something like upload train.py to upload a file train.py to a server, and then later run run train.py or something like that to run it on that server
If you are talking about running code on the Google Colab server with a GPU: no.
As I remember, they updated their policy and now you can only use the GPU on Google Colab via Colab notebooks. If you have a Linux server with a GPU, you can connect to it via SSH, install CUDA and libraries like tensorflow-gpu or PyTorch, and run your code.
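The SSH workflow described above can be wrapped so it feels like the upload/run commands the asker wants. Here is a rough sketch using only the standard library; the host name and remote paths are placeholder assumptions, and it presumes SSH key access to such a server:

```python
# Sketch of an "upload then run" helper around scp/ssh.
# HOST and the remote directory are hypothetical; adjust to your server.
import subprocess

HOST = "user@gpu-server.example.com"  # placeholder GPU server

def upload_cmd(local_path, remote_dir="~/jobs/"):
    # builds the equivalent of the asker's `upload train.py`
    return ["scp", local_path, f"{HOST}:{remote_dir}"]

def run_cmd(script_name, remote_dir="~/jobs/"):
    # builds the equivalent of `run train.py`; ssh streams the remote
    # stdout/stderr straight back to the local terminal
    return ["ssh", HOST, f"python3 {remote_dir}{script_name}"]

# Usage (uncomment with a real server configured):
# subprocess.run(upload_cmd("train.py"), check=True)
# subprocess.run(run_cmd("train.py"), check=True)
```

Tools like rsync or a Makefile can replace the helpers; the point is that plain ssh/scp already give the "run remotely, see output locally" behavior asked for.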
If you are looking for cheap alternatives for GPU servers, check this and this link.
Heroku is a non-GPU alternative where you can prototype your code before moving to a cloud provider such as AWS or Google Cloud. (As I remember, AWS provides a decent number of free GPU hours at signup.)
Then there is another alternative called FloydHub, which I have heard people call Heroku for deep learning. I haven't used it personally, but it might also be what you are looking for.
On a personal note, even though it's not that efficient: I prototype my code locally, then upload it to my Google Drive and do the final training on the Google Colab GPU. It's an extra step, but that's the best I could find without renting a server.

How to get a Google Colab cell output using my own Python program

I have no experience with Google Colab, but I have a Python program that runs on my computer and needs to use an algorithm that runs in Google Colab. If it is possible, how should my program call my algorithm in Google Colab and get its output?

Image Recognition on Lego Ev3 Embedded System (Python)

I'm trying to make a sorting robot using the 'Lego Mindstorm EV3 Kit'.
Currently the robot is able to capture images and transfer them via Bluetooth to a standard laptop. The laptop is responsible for the image recognition and sends a prediction back to the EV3 robot. I've written a simple Python program which uses the scikit-learn library for the machine intelligence and a few other libraries for feature extraction etc. It's currently working as is, but I would like to get everything running on the EV3.
I've tried installing the libraries using pip install and apt-get, and I've managed to get most of them installed on the EV3. My current problem is that I'm running out of memory while importing all the libraries in Python. I've tried limiting the imports as much as possible, but since I only have about 50 MB of RAM to work with, I quickly run into problems. I've even tried adding virtual RAM (swap) to the EV3, but it didn't work.
1) Do any of you have experience with image recognition on the Lego Mindstorm EV3? What libraries did you use? I might try TensorFlow, but I'm pretty sure I'll run into a similar memory problem.
2) Do any of you have experience implementing a simple machine learning algorithm in Python which can differentiate between images? My next attempt is going to be a simple neural network. Remember, I can still train the network on a big machine. Do you see any problems with this approach, and do you have any suggestions? I'm thinking of just a "simple" neural network using the backpropagation algorithm.
Thanks
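For a sense of scale, the "simple neural network" mentioned in question 2 can be tiny in pure Python with no library imports to blow the memory budget. A minimal, dependency-free sketch of one sigmoid neuron trained with backpropagation; the two-pixel "images" and labels are made up for illustration:

```python
# One sigmoid neuron trained by gradient descent on squared error.
# No numpy/scikit-learn needed, so it imports in a few KB of RAM.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=1000, lr=0.5):
    random.seed(0)  # deterministic toy initialization
    w = [random.uniform(-0.5, 0.5) for _ in samples[0]]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # backprop for a single neuron: d(error)/d(z)
            grad = (out - y) * out * (1 - out)
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5

# Toy data: "bright" two-pixel images -> class 1, "dark" -> class 0
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
Y = [1, 1, 0, 0]
w, b = train(X, Y)
```

As the question suggests, the weights could be trained on a big machine and only the learned `w` and `b` (plus `predict`) shipped to the EV3.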

Speed up Python startup or connect it with VB.NET

I've got a following setup:
A VB.NET Web-Service is running, and it needs to regularly call a Python script with a machine learning model to predict some values. To do this, my Web-Service generates a file with the input for Python and runs the Python script as a subprocess. The script makes predictions and returns them, via standard output, back to the Web-Service.
The problem is that the script needs a few seconds to import all the machine learning libraries and load the saved model from disk, which takes much longer than the actual prediction. During this time the Web-Service is blocked by the running subprocess. I have to reduce this time drastically.
What I need is a solution to either:
1. Improve libraries and model loading time.
2. Keep Python running all the time, with the imports and ML model already loaded, and have it communicate with the VB.NET Web-Service.
Not sure I understood the question, but here are some things I can think of.
If this is a network issue, you could have Python compress the data before sending it over the web.
Or, if you're able to, use multithreading while reading the file from the web.
Maybe you should post some more code so we can help you better.
I've found what I needed.
I've used web.py to convert the Python script into a Web-Service, and now the VB.NET and Python Web-Services can communicate. Python is running all the time, so there is no delay for loading libraries and data each time a calculation has to be done.
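The idea behind that fix can be sketched even with just the standard library: do the expensive imports and model load once at startup, then serve predictions over HTTP so the caller only pays the per-request cost. The model, port, and payload format below are illustrative placeholders, not the actual web.py service:

```python
# "Load once, serve many": expensive setup happens once at startup,
# then every request reuses the already-loaded model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder for the real, slow model load (e.g. unpickling a
# scikit-learn model); with a server this step runs only once.
MODEL = {"weights": [0.5, 1.5]}

def predict(features):
    # stand-in prediction: weighted sum of the input features
    return sum(w * x for w, x in zip(MODEL["weights"], features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

# To serve for real, the VB.NET side would POST its inputs here:
# HTTPServer(("127.0.0.1", 8080), PredictHandler).serve_forever()
```

The VB.NET Web-Service then makes a plain HTTP request per prediction instead of spawning a fresh Python process each time.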
