I have recently started coding my own agents in Python (not using the PyBrain library), and I've reached the point where I'd like to run some proper benchmarks: a few mazes, pole balancing, and mountain car.
I am aware that PyBrain already has these environments coded. Is it possible to interact with those environments by connecting them to my own code through some sort of bridge/interface? I saw something like the RL-Glue bridge, but its state of maintenance seems questionable.
Has anyone successfully used the PyBrain environments with a different bridge? I suppose the easiest solution would be to just adapt my code to the format of a PyBrain agent and forgo bridging, but I'd like to save time by just connecting the two.
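To make the bridging idea concrete, this is roughly the adapter I'm imagining. It's only a sketch: the import path and the getObservation/performAction/getReward/isFinished interface are assumptions based on PyBrain's documented task API, so verify them against the installed version.

    from pybrain.rl.environments.cartpole import CartPoleEnvironment, BalanceTask

    class PyBrainTaskAdapter:
        """Drive a PyBrain task through a simple reset/step interface."""

        def __init__(self, task):
            self.task = task

        def reset(self):
            # EpisodicTask.reset() reinitializes the underlying environment.
            self.task.reset()
            return self.task.getObservation()

        def step(self, action):
            self.task.performAction(action)
            return (self.task.getObservation(),
                    self.task.getReward(),
                    self.task.isFinished())

    env = PyBrainTaskAdapter(BalanceTask(CartPoleEnvironment()))
    obs = env.reset()
    obs, reward, done = env.step([0.0])  # cart-pole takes a 1-D force action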
Thanks
A friend of mine and I are doing some field research for our physics degree, and we are using Jupyter Notebook to analyse the data we get. We usually sit together working on two different copies of the same file that, in the end, are merged by drag and drop in JupyterLab. This is obviously not ideal, so I wondered whether there is any way for just two people to work on one document in Jupyter. Sadly, Google Colab has been deprecated and CoCalc is expensive. So I thought I'd ask here whether there is a way for one person to run a Jupyter notebook and the other to access it peer-to-peer as well, so we could write in the same file at the same time.
Does anyone know of something that would let me do this, or perhaps a workaround?
Thanks in advance for any answers.
"CoCalc is expensive."
Fortunately, we also provide a completely free, easy-to-install, open-source version of CoCalc, which you can run on any computer that supports Docker. For example, here's how to run it on Google Cloud.
(I have put too many years of my life into making real-time collaboration work for Jupyter via CoCalc... In any case, the open-source code has been battle-tested in production for a while now and is finally working well. I hope it can solve your problem...)
You can upload your notebook to Deepnote. It provides a hosted environment where you and your colleague can connect at the same time and work on the same notebook in real time (the same way you would in Google Docs).
Colab is also good, but writing at the same time will result in conflicts.
Jupyter Notebook itself doesn't support simultaneous collaboration, but you can use GitHub to manage your Python script and upload it to Colab separately. That way GitHub can track the file history and help resolve conflicts.
JupyterLab 3.1.0a7 introduced real-time collaboration.
There is a screencast showing it in action.
Key thing to note is the new top-level menu item called Share, to the right of Settings & Help.
You can click on launch binder here or here to try it now.
"Once you see the JupyterLab interface, there's a new top-level menu item called "Share"; click that, grab and share that URL, and you're done!"-SOURCE: Step #5 here
There's a gist here that seems to be updated regularly with how to activate the feature.
There's a detailed walk-through here if you want to add the ability to your own repositories that launch via MyBinder.org. If that repo falls behind the gist, though, you'll probably want to consult the gist for the current best practices once you have the general idea from the walk-through.
A closely related question, with an answer by @krassowski, is here. You may want to look there for some additional details.
While you can use GitHub for this, it can get messy: many people clear output cells when committing to Git to avoid conflicts, which would defeat the purpose of your review work.
You should try Curvenote (which we're building for exactly this reason). It doesn't offer compute, as it's a collaborative writing tool; it works on top of Jupyter via a Chrome extension and gives you real-time versioning, commenting, and diffs.
"Google Colab has been deprecated and CoCalc is expensive"
Noteable.io is 100% free for all users, including storage, compute, and RAM. For your purposes it will be ideal: you get Google Drive-like collaboration (commenting, @mentioning, annotating data points), versioning, sharing, interactive visualizations, the choice of using Python and SQL in the same notebook, and a ton of other features.
Here are good example notebooks on Noteable:
Climate Change: An analysis of Dew Point for the city of Toronto
Healthcare Sector Employee Attrition Exploratory Data Analysis
Exploratory Data Analysis Using SQL and Python - Online Retailer Orders
I'm trying to make a sorting robot using the Lego Mindstorms EV3 kit.
Currently the robot is able to capture images and transfer them via Bluetooth to a standard laptop. The laptop is responsible for the image recognition and sends a prediction back to the EV3 robot. I've written a simple Python program which uses the scikit-learn library for the machine intelligence and a few other libraries for feature extraction etc. It currently works as is, but I would like to get everything running on the EV3.
I've tried installing the libraries using pip install and apt-get, and I've managed to get most of them installed on the EV3. My current problem is that I'm running out of memory while importing the libraries in Python. I've tried limiting the imports as much as possible, but since I only have about 50 MB of RAM to work with, I quickly run into problems. I've even tried adding virtual memory (swap) to the EV3, but it didn't work.
1) Do any of you have experience with image recognition on the Lego Mindstorms EV3? What libraries did you use? I might try TensorFlow, but I'm pretty sure I'll run into a similar memory problem.
2) Do any of you have experience implementing a simple machine learning algorithm in Python which can differentiate between images? My next attempt is going to be implementing a simple neural network, along the lines of the sketch below. Remember, I can still train the network on a big machine. Do you see any problems with this approach, and do you have any suggestions? I'm thinking just a "simple" neural network using the backpropagation algorithm.
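To make (2) concrete, here is a minimal sketch of the kind of network I mean: one hidden layer, trained with plain backpropagation, using only numpy so that the trained weights and the forward pass could run on the EV3 without heavy libraries. The layer sizes and the feature vector are placeholders.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class TinyNet:
        def __init__(self, n_in, n_hidden, n_out, lr=0.1):
            rng = np.random.default_rng(0)
            self.w1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
            self.w2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
            self.lr = lr

        def forward(self, x):
            self.h = sigmoid(x @ self.w1)       # hidden activations
            self.y = sigmoid(self.h @ self.w2)  # output activations
            return self.y

        def backward(self, x, target):
            # Plain backprop on squared error; sigmoid derivatives inline.
            dy = (self.y - target) * self.y * (1 - self.y)
            dh = (dy @ self.w2.T) * self.h * (1 - self.h)
            self.w2 -= self.lr * np.outer(self.h, dy)
            self.w1 -= self.lr * np.outer(x, dh)

    # Train on the laptop, then copy only w1/w2 and forward() to the EV3.
    net = TinyNet(n_in=64, n_hidden=16, n_out=3)
    x, target = np.random.rand(64), np.array([1.0, 0.0, 0.0])
    net.forward(x)
    net.backward(x, target)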
Thanks
I am new to TensorFlow, Linux, and ML. I am trying to use a GPU in another system in my lab for training my model. I have connected to the system using SSH.
What I am stuck on now is how to write the Python code. One thing I can do is run Python interactively in the terminal window where I'm logged into the other machine, but that takes a lot of effort and is not an efficient way of doing it.
What I want to do is write the Python code in a file (on my machine) and run it on the machine with the GPU. Can you describe how I should do that?
P.S.: I understand this is a very basic question, but I would appreciate it if you could help me with it.
Sorry to plug my own site, but I described how to do this with PyCharm in a blog post.
Really hope this helps you out! If you have any more questions, feel free to ask!
TensorFlow now has a Distributed TensorFlow tutorial. Since you can connect over SSH, this is an option.
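If all you need is to copy the script to the GPU machine and run it there, you don't need Distributed TensorFlow at all; an SSH library such as paramiko can script the whole round trip. A minimal sketch, where the hostname, username, and script name are made-up placeholders:

    import paramiko

    HOST, USER, SCRIPT = "gpu-box.example.edu", "me", "train.py"

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER)  # uses your SSH keys/agent

    # Copy the local script to the remote home directory, then run it there.
    sftp = client.open_sftp()
    sftp.put(SCRIPT, SCRIPT)
    sftp.close()

    stdin, stdout, stderr = client.exec_command("python " + SCRIPT)
    print(stdout.read().decode())
    print(stderr.read().decode())
    client.close()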
The question may not seem concrete enough, so let me explain: I programmed a web application to visualize data from different sensors in a wireless network. The sensor data is stored in an SQLite database, which is connected as a client to an MQTT broker. The whole project runs on a Raspberry Pi 3, which is also the central node of the network.
For the whole project I used different software packages such as apache2, mosquitto, and sqlite3. Furthermore, the RPi needs to be configured so that external hardware can be connected to it (GPIO, I2C, UART, and some modules).
I wrote an installation guide with more than 60 commands.
What is the most efficient way to write a tool that installs and configures the Raspberry Pi with all the needed components (sh, bash, Python, ...)?
Maybe you can also recommend some guides that explain sh and bash.
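For illustration, something like this Python sketch is what I have in mind. The package list just mirrors the software named above, and the raspi-config calls are examples; the real tool would wrap all ~60 commands from my guide.

    #!/usr/bin/env python3
    import subprocess

    PACKAGES = ["apache2", "mosquitto", "sqlite3"]  # from the guide

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # abort on the first failure

    def main():
        run(["apt-get", "update"])
        run(["apt-get", "install", "-y"] + PACKAGES)
        # raspi-config's non-interactive mode can enable I2C, UART, etc.
        run(["raspi-config", "nonint", "do_i2c", "0"])
        run(["raspi-config", "nonint", "do_serial", "0"])

    if __name__ == "__main__":
        main()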
I would configure one installation until you are satisfied and then use dd to clone your SD-card image (something like dd if=/dev/mmcblk0 of=raspi.img bs=4M, with the SD card attached to the machine doing the copy). You can use dd again to write that image to the card of another Raspberry Pi.
Best regards, Georg
I have started using Python in a real-time application (serial communication with two GPS modules at once), but I recently found out about Lua. Which language would be more suited to the application?
My definition of real-time in this context is the fastest possible time to receive, process, and output the data (it's a feedback system).
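For context, a minimal sketch of the kind of loop I mean, using pyserial; the device paths and baud rate are placeholders:

    import serial  # pyserial

    # Both GPS modules are polled in one loop; ports/baud are placeholders.
    gps_a = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)
    gps_b = serial.Serial("/dev/ttyUSB1", 9600, timeout=0.1)

    while True:
        for name, port in (("A", gps_a), ("B", gps_b)):
            line = port.readline().decode("ascii", errors="replace").strip()
            if line.startswith("$"):  # NMEA sentences start with '$'
                print(name, line)     # processing + feedback output goes here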
Both are fine languages. Neither should take you years to learn. An easy way to make the decision is to look at what modules are out there already.
For example, you mentioned that your application is related to GPS. Take a look at what libraries are already written to hook Python and Lua into your particular GPS hardware. Maybe someone's already done most of the hard work for you. If not, then go down a step. If you're talking to your GPS over an I2C link, look at I2C libraries in both languages. See which ones are more popular and better maintained.
That said, garbage-collected languages have historically had problems meeting real-time requirements. Depending on yours, you may need to go with a lower-level language. You should also ensure that whatever system you're running on will support your programming environment. I've worked with systems where Python would have been great, but it doesn't fit in 5K of code space.
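For example, in Python you can at least keep collection pauses out of the critical path by taking manual control of the collector. A small sketch of that pattern; it reduces jitter but does not make Python hard real-time:

    import gc
    import time

    gc.disable()  # no automatic collection cycles during the hot loop

    def hot_loop(n=100_000):
        for _ in range(n):
            pass  # receive, process, and output would happen here

    t0 = time.perf_counter()
    hot_loop()
    print("loop took %.6f s" % (time.perf_counter() - t0))

    gc.collect()  # collect explicitly at a safe point
    gc.enable()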
Take a look at eLua and see if it meets your needs.