I'm running a self-implemented algorithm on my personal laptop using Python 3.8. It takes over 5 minutes to run, while other people seem to be able to run it in 2 minutes or under.
I have an old laptop. Does this significantly impact my runtime?
Whether the impact is significant can't be determined without seeing your algorithm and your hardware specs.
But old hardware will certainly affect your program's runtime.
You can optimize your code specifically for your hardware to improve runtime.
That said, Python is very flexible and can run well on older devices.
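Before optimizing, it's worth profiling to see where the time actually goes. Here is a minimal sketch using the standard-library cProfile module (run_algorithm is a placeholder for your own entry point, not anything from the question):

```python
import cProfile
import pstats

def run_algorithm():
    # placeholder for your actual algorithm
    total = 0
    for i in range(10_000_000):
        total += i * i
    return total

# profile one run and report the 10 most expensive calls
cProfile.run("run_algorithm()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```

The functions dominating the cumulative time are the ones worth rewriting or vectorizing first.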
I am working on a project with some ML models. I worked on it during the summer and have recently returned to it. For some reason, it is a lot slower to train and test now than it was then; I think the problem is that Python is suddenly not using all of my system resources.
I am working on a project where I am building, training, and testing some ML models. I am using the sktime package in a Python 3.7 conda environment with Jupyter Notebook.
I first started working on this project in the summer, and when I was building the models, I timed how long the training process took. I am resuming the project now, and I tried training the exact same model with the exact same data again, and it took around 6 hours this time compared to 76 minutes when I trained it during the summer. I have noticed that running inference on the test set also takes longer.
I am running on a 10-core M1 Max with 64 GB of RAM. I can tell that my computer is barely breaking a sweat right now: Activity Monitor says that Python is using 99.9% CPU, but that the user overall is only using around 11.60% of the CPU. I remember my computer working harder on this project during the summer (the fan would actually turn on and the computer would get hot, but that is not happening now), so I have a feeling that my environment is not using the full system resources available to it. I have checked that my RAM limit in Jupyter is sufficient, so that is not the issue. I am very confused about what could have changed in this environment between the summer and now to cause this problem. Any help would be much appreciated.
I want to implement machine learning on a hardware platform that can learn by itself. Is there any way to make machine learning work seamlessly on hardware?
Python supports a wide range of platforms, including ARM-based ones.
Your Raspberry Pi supports Linux distros; just install Python and go from there.
First, you may want to be clear about the hardware: there is a wide range of hardware with varying capabilities. For example, a Raspberry Pi is considered powerful hardware, while the ESP-EYE and Arduino Nano 33 BLE are considered low-end platforms.
It also depends on which ML solution you are deploying. I think the most widely deployed method is the neural network. Generally, the workflow is to train the model on a PC or in the cloud using lots of data; training is done there because of the large amount of resources needed to run backpropagation. Inference is much lighter and can be done on edge devices.
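To illustrate that split, here is a minimal sketch of exporting a trained model for edge inference with TensorFlow Lite (TensorFlow and the toy model are assumptions chosen for illustration; the answer above doesn't prescribe a framework):

```python
import tensorflow as tf

# toy stand-in for a model trained on a PC or in the cloud
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# convert to a compact flat-buffer format suitable for edge devices
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("model.tflite", "wb") as f:
    f.write(converter.convert())
```

The resulting model.tflite file can then be loaded by a lightweight interpreter on the device, with none of the training machinery on board.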
I am planning on writing some software for a web server that uses machine learning to process large amounts of data. This will be real-estate data from a MySQL server. I will be using the CUDA framework from Nvidia with Python/Caffe or the C++ library, on a Tesla P100. Although Python is more widely used for machine learning, I presume it is hard to write a server app in Python without sacrificing performance. Is this true? Is C++ well supported for machine learning? Will anything be sacrificed by writing a professional server app in Python (e.g., connecting to a MySQL database)?
Python is a language that performs worse than C++ in terms of runtime, for several reasons:
First and foremost, Python is a scripting language that runs in an interpreter, as opposed to C++, which is compiled into machine code before running.
Secondly, Python runs a garbage collector in the background, while in C++ memory management is done manually by the programmer.
In your case, I recommend that you work with Python for several reasons:
Writing CUDA code in Python lets it be compiled even though it is Python (the CUDA tooling provides a just-in-time (JIT) compiler, as well as an ahead-of-time compiler and other effective tools), which greatly improves performance.
Python provides many rich and varied libraries that will help you a lot in the project, especially in the field of machine learning.
The development time and code length will be significantly shorter in Python.
From my experience working with CUDA in Python, I recommend the numba and numbapro libraries; they are comfortable to work with and support libraries like numpy. A minimal kernel sketch follows.
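Here is what a minimal numba CUDA kernel can look like (a sketch assuming a CUDA-capable GPU and the numba package; element-wise addition is just a stand-in for real work):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    # one GPU thread per array element
    i = cuda.grid(1)
    if i < x.size:
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.arange(n, dtype=np.float32)
y = 2 * x
out = np.empty_like(x)

# launch enough blocks to cover all n elements
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)
print(out[:5])  # [ 0.  3.  6.  9. 12.]
```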
Best of luck.
Both C++ and Python are perfectly reasonable languages for implementing web servers. Python is actually quite common, and there are many frameworks for writing web servers in Python, such as Flask, Bottle, and Django. Architecturally, I wonder whether the machine learning (which I imagine would be a data-processing pipeline) and the web server really need to be the same process/binary. Even if you do put them in the same server, I suspect that either language would be perfectly reasonable. Moreover, if you ever reach the point where you need to run a piece of computation in C++ for performance, you have options: use SWIG to call C++ from Python, or use some form of message passing from Python to a C++ helper process (such as via gRPC).
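For a rough sense of what the Python side can look like, here is a minimal Flask sketch (predict_price and its payload shape are placeholders invented for illustration, not anything from the question):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_price(features):
    # placeholder: the real app would call the trained model here
    return sum(features) * 1000.0

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    return jsonify({"price": predict_price(payload["features"])})

if __name__ == "__main__":
    app.run(port=8000)
```

Whether the model runs inside this process or behind a gRPC helper is then an independent decision.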
I have been working on some Python code that does heavy calculations, and it's finally finished. I now run the code as much as possible, but because it constantly uses around 70% of the processor and 1.5 GB of RAM, it draws a lot of power. For this reason I only run the script when my laptop is charging, but I often forget.
What I want to do to solve this problem is, of course, to make it automatic. My plan is to have the script always running, but idle while my laptop is not charging.
The main problem is detecting that my laptop is being charged. I have looked around for information on this, but I can't find any that suits my problem. I have seen people use the ctypes library, but I can't seem to find a function that tells me whether my laptop is charging.
I am looking for a Python function that tells me whether my laptop is being charged.
Edit: My operating system is Windows 10.
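A minimal sketch of the kind of check being asked for, using the third-party psutil package rather than ctypes (psutil is an assumption here, but it does wrap the platform power-status APIs, including Windows):

```python
import time
import psutil

def on_ac_power():
    # sensors_battery() returns None on machines without a battery
    battery = psutil.sensors_battery()
    return battery is None or battery.power_plugged

while True:
    if on_ac_power():
        pass  # do one chunk of the heavy calculation here
    else:
        time.sleep(60)  # idle until we might be charging again
```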
Is there any portable way, or library, to check whether a Python script is running on a virtualized OS, and also which virtualization platform it's running on?
This question, Detect virtualized OS from an application?, discusses a C version.
I think you can call the Linux command virt-what from Python.
The description of virt-what is here: http://people.redhat.com/~rjones/virt-what/
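A minimal sketch of shelling out to it (this assumes the tool is installed, and virt-what typically needs to run as root):

```python
import subprocess

def detect_virtualization():
    # virt-what prints one hypervisor "fact" per line,
    # and prints nothing at all on bare metal
    try:
        result = subprocess.run(
            ["virt-what"], capture_output=True, text=True, check=True
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # tool missing, or not run with enough privileges
    return result.stdout.split()

print(detect_virtualization())  # e.g. ['kvm'] inside a KVM guest
```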
To my knowledge, there is no nice, portable way to figure this out from Python. Most of the time, people try to figure out whether they're being virtualized by looking for clues left by the VM: either some instruction isn't quite right, or some behavior is off.
However, all might not be lost if you're willing to go off-box. When you're in a VM, you will almost never get perfect native performance. Thus, if you make enough measurements against a server you trust, it might be possible to detect that you're in a VM. This is especially the case if you're running on a host shared with other virtual machines. Check your own time, how much time you're actually getting scheduled, and how much wall time has passed (based on an external measurement, because you can't trust the local clock). You'll probably have better luck if you can watch how much time has passed on the local machine as a whole rather than just inside one process.
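As a very rough local version of that idea (without the trusted external clock, so it's only a weak heuristic, not a real detector), you can compare how much CPU time a busy loop was actually scheduled against the wall time that elapsed:

```python
import time

def scheduled_ratio(duration=2.0):
    # busy-spin for `duration` seconds of wall time, then compare
    # the CPU time this process was actually given: near 1.0 on an
    # idle bare-metal core, noticeably lower under heavy contention
    # (e.g. an oversubscribed hypervisor stealing cycles)
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    while time.perf_counter() - wall_start < duration:
        pass
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall

print(f"scheduled/wall ratio: {scheduled_ratio():.3f}")
```

Any other load on the machine will also drag the ratio down, which is exactly why the answer suggests anchoring the measurements to a server you trust.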