I am new to TensorFlow, Linux, and ML. I am trying to use a GPU in another system in my lab for training my model. I have connected to the system using SSH.
Now what I am stuck on is how I should write the Python code. One thing I can do is run Python directly in the terminal window where I am connected to the other machine, but that takes a lot of effort and is not an efficient way of doing it.
What I want to do is write the Python code in a file (on my machine) and run it on the machine that has the GPU. Can you describe how I should do that?
P.S.: I understand this is a very basic question, but I would appreciate it if you could help me with it.
Sorry to plug my own site, but I described how to do this with PyCharm in a blog post.
Really hope this helps you out! If you have any more questions, feel free to ask!
TensorFlow now has a Distributed TensorFlow tutorial. Since you can connect over SSH, this is an option.
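The core idea from that tutorial is to start a TensorFlow server on the GPU machine and point your session at it over the network. Here is a minimal sketch using the TF 1.x API; the hostname gpu-machine.lab and port 2222 are placeholders for your own setup:

    import tensorflow as tf

    # Describe the cluster: the lab machine with the GPU is the only worker.
    cluster = tf.train.ClusterSpec({"worker": ["gpu-machine.lab:2222"]})

    # On the GPU machine, a server joins the cluster and waits for work:
    #   server = tf.train.Server(cluster, job_name="worker", task_index=0)
    #   server.join()

    # On your machine, pin the ops to the remote worker...
    with tf.device("/job:worker/task:0"):
        a = tf.constant(2.0)
        b = tf.constant(3.0)
        c = a * b

    # ...and connect the session to the remote server over gRPC.
    with tf.Session("grpc://gpu-machine.lab:2222") as sess:
        print(sess.run(c))  # the multiply runs on the GPU machine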
Over the past few weeks I've been coding a program which runs a Reddit bot locally from my machine. I've perfected it so that it does not reply to a comment it has already replied to, it runs quite efficiently, and in my opinion it is complete.
Now I'm looking for a way to run the program on a schedule. I currently have the code in Google Colab, and I don't know how to use Google Colab for this.
The program does not require any local storage, it's a single code file, and it does not require much memory, so I wanted to ask if anyone has a detailed, beginner-accessible tutorial I could use to host this code.
Note: The code requires an installation of PRAW; in Google Colab I simply do !pip install PRAW. If that changes anything about what I need to do, what should I do differently?
Thank you in advance.
Google Colab is not designed for this kind of thing, and most likely it cannot be used to run your app on a schedule.
Probably the easiest solution is some kind of Continuous Integration tool that lets you run code remotely.
Step 1 would be to host your code in some remote code repository like GitHub. Since it most likely won't have to be interactive, switching from a Colab notebook to a simple Python script will make your configuration much easier later on.
Step 2 would be connecting that repo to some CI tool. One I am familiar with that lets you run pipelines on a schedule is CircleCI, with this tutorial here showing a very simple configuration for running Python scripts from a pipeline.
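As for the PRAW note: in a script, the Colab cell !pip install PRAW just becomes pip install praw in the pipeline's setup step (or a line in requirements.txt). The converted script itself would look roughly like this; a sketch only, where the credentials, subreddit, and reply logic are placeholders for whatever your bot actually does:

    # bot.py - the Colab notebook reduced to a plain script.
    import praw

    # All credentials below are placeholders for your own app's values.
    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",
        client_secret="YOUR_CLIENT_SECRET",
        username="YOUR_BOT_ACCOUNT",
        password="YOUR_PASSWORD",
        user_agent="my-reddit-bot/0.1",
    )

    def run_once():
        # Placeholder for the bot's real reply logic.
        for comment in reddit.subreddit("test").comments(limit=25):
            print(comment.id, comment.body[:40])

    if __name__ == "__main__":
        run_once()

Each scheduled pipeline run then just executes python bot.py once and exits.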
I am looking to test an embedded-system product using Python scripting, and I need some guidance.
The setup: A Raspberry Pi is connected to the embedded system's PCB, and I access the Raspberry Pi remotely using SSH or VNC server.
The issue: I am using PyCharm on my laptop to work on the Python scripts. Since the embedded system is connected to the Raspberry Pi, I cannot debug from my laptop, as I would not be able to see the values of variables and so on while debugging. So I installed PyCharm on the RPi, but it keeps crashing frequently, which I guess is because the RPi cannot handle that much load while also serving VNC alongside PyCharm.
What I am looking for: Some guidance on how to debug in such a case, so I can tell whether the scripts have issues, the device is faulty, or something else is wrong; a better, more efficient method for debugging in a scenario like this with multiple layers.
I have limited exposure, so I may be missing something. Please feel free to correct me.
Thanks.
Update: Just in case someone is looking for the answer to this question, the paid version of PyCharm allows remote debugging over SSH. That worked for me.
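For anyone following the same path: besides the SSH remote interpreter (configured in the IDE, no code involved), PyCharm Professional also offers a "Python Debug Server" you can hook into the script on the Pi. A minimal sketch, where the laptop's address and the port are placeholders that must match the Run/Debug configuration you create in PyCharm:

    # At the top of the script under test, on the Raspberry Pi.
    # Requires: pip install pydevd-pycharm (version matched to your PyCharm).
    import pydevd_pycharm

    # "laptop-ip" and 12345 are placeholders for your debug-server settings.
    pydevd_pycharm.settrace(
        "laptop-ip",
        port=12345,
        stdoutToServer=True,
        stderrToServer=True,
    )

    # The rest of the script now runs under the debugger, with breakpoints
    # and variable inspection available in PyCharm on the laptop.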
I'm using the Pyomo library to solve an optimization problem with the Gurobi optimizer. I have an academic license for Gurobi, but as far as I know this should not impose any limitation. I'd like to optimize the model with different parameters in parallel. In other words, I have many instances of the same model with different parameters that could be solved independently.
I've followed the instructions that I found on the documentation page of Pyomo regarding Pyro and dispatching (here is the reference). Basically, I started the Pyomo name server with pyomo_ns, then the dispatch server with dispatch_srvr, and finally I launched four instances of pyro_mip_server.
When I launch my Python script, it works; no errors occur. The problem is that it takes the same amount of time as the serial execution. I've also monitored the activity of all eight cores my CPU has, and only one is constantly at 100% load. It's as if no concurrent execution is happening at all.
I'm using the SolverManager provided by the Pyomo library to submit the different instances of my model from the Python script. You can find the guide on how to use it here.
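The submission pattern from that guide looks roughly like this (a simplified sketch of my setup; make_instance and param_sets stand in for however I build each parameterized model):

    from pyomo.environ import SolverManagerFactory

    solver_manager = SolverManagerFactory("pyro")

    # Queue all instances without waiting; each returns an action handle.
    handles = []
    for params in param_sets:             # param_sets: my parameter list
        instance = make_instance(params)  # placeholder model builder
        handles.append(solver_manager.queue(instance, opt="gurobi"))

    # Collect results as the remote pyro_mip_servers finish them.
    for _ in handles:
        done = solver_manager.wait_any()
        results = solver_manager.get_results(done)

As far as I can tell, the queued version should run concurrently; calling solve() once per instance instead would block on each call and serialize everything.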
I'm on a Linux machine, the latest LTS Ubuntu distro.
Has anybody had such an experience, or does someone know what the problem is here? If you need any additional info, just let me know :). Thank you!
With the lockdown going on, I work on my company laptop to run machine learning models. Sadly, this laptop is not very powerful.
Since I have a powerful machine at home, I wish to use its power. I know it is possible, but the solutions I have seen involve copying files to the remote computer, and my company's restrictions don't allow me to do that for security reasons.
Is there still a way to use my home computer's resources to run my code?
Thanks in advance,
I do the same thing without copying the data, using WinSCP. It's a file manager with a GUI that works very similarly to PuTTY. With it you can live-synchronize folders on both machines, editing the files on your machine and testing them on the other. For this trick you will need both PyCharm and Jupyter: PyCharm for editing the functions synced with WinSCP, and Jupyter for testing them. I hope it helps.
Are there any Amazon Machine Images (AMIs) available for Anaconda Python on Amazon Web Services? I am looking for something similar to this R AMI: http://www.louisaslett.com/RStudio_AMI/
Please let me know.
I strongly believe that you can use the Predictive Analytics Framework AMI; unfortunately it isn't free (but you can get a free trial).
You will probably get the best results with your own AMI. The easiest way is to start from one of the free AMIs, install all the necessary packages, and then create an image from it.
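The image-creation step can be done from the console or scripted; here is a sketch with boto3, where the region, instance ID, and image name are all placeholders:

    import boto3

    # Region, instance ID, and names below are placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # the instance you configured
        Name="anaconda-python-base",
        Description="Free base AMI plus Anaconda and required packages",
        NoReboot=False,  # allow a reboot for a consistent snapshot
    )
    print("New AMI id:", response["ImageId"])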
Hope this helps you.
Yes, there may not have been one when you first asked, but hopefully this answer will help others. There are several free Anaconda AMI options, for both Python 2 and 3. See https://aws.amazon.com/marketplace/seller-profile?id=29f81979-a535-4f44-9e9f-6800807ad996&ref=dtl_B07CNFWMPC