Getting the GPU vendor name on Windows and Linux - Python

I'm currently writing some integration tests which should run on different physical machines and VMs with different operating systems.
For one type of test I have to find out whether an NVIDIA graphics card is installed on the machine running the test. I don't need any other information; only the vendor name (and it would even be enough to know whether it is an NVIDIA card or not; I'm not interested in other vendors).
I can only use the Python standard library, so I think the best way is to call shell commands via subprocess.
Are there commands on Windows (Win10 x64) and Linux (Fedora, CentOS, SUSE) that report the GPU vendor, without installing any tools or external libraries?

I ended up with the following solution:
On Linux I look for any occurrence of "nvidia" in the output of lsmod (or /sbin/lsmod; thanks to n00dl3), and on Windows I run wmic path win32_VideoController get name to get the GPU information.
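For reference, here is a minimal stdlib-only sketch of that approach (the exact wmic output format may vary by Windows version):

```python
import platform
import subprocess

def has_nvidia_gpu():
    """Return True if an NVIDIA GPU is detected, using only the stdlib."""
    if platform.system() == "Windows":
        # wmic prints one video controller name per line
        out = subprocess.check_output(
            ["wmic", "path", "win32_VideoController", "get", "name"]
        )
    else:
        # Use the full path, since /sbin is not always on PATH for normal users
        out = subprocess.check_output(["/sbin/lsmod"])
    return b"nvidia" in out.lower()
```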

Related

Best approach to distribute a Python package on a local segregated network

Dear community members,
I would like to have your opinions about my situation. I wrote some Python modules to simplify the daily routine at work. I'm the only 'developer', and the user community is a handful of people with limited computer skills. The scripts should be available on several computers (Windows 7, 10 and 11) connected to a segregated network with no internet access.
I'm writing the code on a single PC (Windows 10), using Anaconda as the environment and Spyder as the IDE. The scripts are saved on a shared network disk that is accessible from all other PCs in the segregated LAN.
And here comes my question: how should I package and distribute the code to all client PCs?
My first idea was to not distribute it at all: leave the code on the shared disk and let the users double-click a shortcut on the desktop to run it. The advantage is that I don't have to care about package creation and distribution.
Nevertheless, I can see these limitations:
I need to install Anaconda on all PCs and, in particular on Windows 7, I can only install an old release of it.
I need to modify the code so that user-based configurations are saved in a local file and not on the shared disk.
To give the scripts access to shared modules, I need to add all relevant paths to the Python search path on all PCs (a small bootstrap like the one sketched below would do).
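For the last point, I imagine the bootstrap could be as small as something like this; the UNC path below is a placeholder for the real shared-disk location:

```python
import sys

# Hypothetical UNC path of the shared disk holding the common modules
SHARED_MODULES = r"\\fileserver\tools\python_modules"

if SHARED_MODULES not in sys.path:
    sys.path.insert(0, SHARED_MODULES)
```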
My second approach was to build an exe file for each script with pyinstaller and distribute them to all clients. I can automate the build and the copy to all PCs, so I'm sure that everybody is using the same, latest version. The advantage is that I don't need to install Anaconda everywhere, but it has some drawbacks:
Each exe file is huge. All scripts have a Qt GUI, and the size of the one-file exe generated by pyinstaller can easily reach 500 MB. This means that when users double-click the icon, they may have to wait a couple of seconds (depending on disk speed and caching) before it loads, and they may think that the computer is blocked and not working.
pyinstaller is multi-platform but not cross-platform, which means I would need two other development PCs, one with Win 7 and one with Win 11, to generate the exe files.
My third possibility was to build a real Python package that can be installed on all PCs. And here the tricky point is: should it be installed with conda or with pip? I'm quite confused about package building. I have followed the tutorial in the Python docs on how to build a source and wheel package, but I don't know whether that is the correct approach, given that my Python environment lives inside Anaconda.
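From what I understand, the pip route needs nothing Anaconda-specific: a minimal pyproject.toml along these lines (the project name and version below are placeholders) would let python -m build produce a wheel on the development PC, which I could then copy to the shared disk and install offline on each client with pip install --no-index --find-links.

```toml
# pyproject.toml (minimal sketch; name and version are placeholders)
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "daily-tools"
version = "0.1.0"
```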
I have seen that on GitHub you can automatically build an Anaconda package from your Python code, and I would need to read the whole workflow documentation, because it doesn't look easy to me. The drawback is that the client PCs have no access to GitHub, so I would need to manually copy the output package from a PC with internet access to somewhere on the segregated net and then install it on all clients.
So, at the end of this long message, I hope I managed to describe my problem, and I'm sure your answers will shed some light on it. I know the question may sound trivial to more advanced developers, but there are also newbies out here in need of good advice!
Thanks!

Retrieve ATI GPU info (memory, GPU load and so on) on OSX via Python

I am trying to collect info about my video card in order to profile it while running my Python application, but I can't find any Python module that is able to do so.
On Windows I have been using GPUtil with my Nvidia card and it works great; but my Mac has an ATI card. I also tried pyadl, but it requires ADLSDK3.0, which from what I can tell is Windows-only, not OSX.
At this point I am stuck: I can't use GPUtil, can't use pyadl, and can't find anything similar to GPUtil, which is a great library but sadly Nvidia-only.
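The closest stdlib-only workaround I can think of on macOS is shelling out to the stock system_profiler tool, which reports static GPU info (chipset model, VRAM) but not live load; something like:

```python
import subprocess

def mac_gpu_info():
    # system_profiler ships with macOS; SPDisplaysDataType lists the GPU(s)
    # with chipset model and VRAM. Live utilization is not exposed here.
    return subprocess.check_output(
        ["system_profiler", "SPDisplaysDataType"],
        universal_newlines=True,
    )

print(mac_gpu_info())
```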

Is it possible to use the socketCAN protocol on macOS

I am looking to connect to a car wirelessly using the socketCAN protocol on macOS, via the python-can module on Python 3. I don't know how to install the socketCAN protocol on macOS. Please help.
This cat managed to get it basically working:
socketcanx
I have compiled it on my hackintosh (running Mojave) and it works from the terminal. I have not played around with it more than that, as it was just easier to use Linux in a VM or Docker, or one of my Linux machines. When compiled, I was able to use all of my makeshift CAN devices and a USB2CAN device without issue. I am sure that with some work it can be used with Python-CAN, though you would need to write your own interface module for Python-CAN. As far as I can tell, it should work more or less the same, though the code is old (4 years since the last update).
As stated in the accepted answer, you can use a native CAN device that is compatible with macOS, and as long as it's compatible with Python-CAN you are good to go (or, if it works on the Mac but isn't supported, you can create an interface for the device and submit a pull request to Python-CAN on GitHub).
SocketCAN is implemented only in the Linux kernel, so it is not available on other operating systems. But as long as your CAN adapter is supported by python-can, you don't need SocketCAN.
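For example, something along these lines should talk to, say, an slcan-compatible serial adapter directly, with no SocketCAN involved (the device path is a placeholder):

```python
import can  # python-can

# Sketch: open a serial-line CAN (slcan) adapter directly, bypassing SocketCAN.
bus = can.interface.Bus(
    bustype="slcan",
    channel="/dev/tty.usbserial-XXXX",  # placeholder macOS device node
    bitrate=500000,
)

msg = can.Message(arbitration_id=0x7DF,
                  data=[0x02, 0x01, 0x0D],
                  is_extended_id=False)
bus.send(msg)
print(bus.recv(timeout=1.0))  # None if nothing arrives within a second
bus.shutdown()
```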

MAKE through Cygwin overloads memory (too many processes)

What I'm trying to do is install SIP 4.14.7 through Cygwin using the make command. I'm running Python version 3.3.2 (with Python added to the PATH) on a Windows 7 x64 SP1 machine with 4GB RAM and an Intel Core 2 Duo. Since what I'm doing is from within the Cygwin terminal, I'll avoid using the Win32 path format.
Following the installation instructions provided with sip-4.14.7.zip, here is what I've done:
Uncompressed the .zip into /c/python33/SIP/
Launched the Cygwin terminal and went to the /cygdrive/c/python33/SIP/ folder
Ran python configure.py (no options, since I was fine with the default settings)
Ran make install
As far as I can tell, I followed the instructions as I should have, but obviously I'm not doing something right here.
Here's what happens (see screenshot): the number of make.exe processes goes up to about 1800 before Windows gets too low on memory, and then the whole thing reverses itself until there are no more make.exe processes running, as shown in the second screenshot.
I've Googled this and searched around here on stackoverflow.com, but couldn't find anything related to this particular issue. It seems that unless the -j option is used, make should only run one job at a time. I've also tried the -l option, thinking it would limit the number of processes unless enough memory was available, but the results were the same.
I tried to provide as much detail as possible, but if there is any more information that I should post to help diagnose this issue, I'd be glad to provide it. Otherwise, any suggestions here would be much appreciated.
The latest version of Cygwin includes the PyQt4 package (in All->Python within Setup.exe): python-pyqt4 and python3-pyqt4. If you are trying to live in Cygwin, I'd install that version into Cygwin and use it. No make required, from the looks of it.

Using PyParallel in Windows XP

I have successfully used the PyParallel module on both Linux and Mac OS X as part of a large application that interfaces with a sensor I am developing. I am now attempting to use this application on a Windows XP machine. I have found several references (including one right from the PySerial/PyParallel group) stating that:
The windows version needs a compiled extension and the giveio.sys driver for Windows NT/2k/XP. It uses ctypes to access functions in a prebuilt DLL.
However, I don't know what the "compiled extension" requirement is, and I can't seem to get giveio.sys to work. I obtained giveio.sys here and followed the recommendations, but LoadDrv.exe fails to "start" the service (it does "install", however).
I cannot find specific examples online of getting PyParallel working on Windows XP. Since PyParallel is thoroughly integrated into the application and works on both Linux and Mac OS X, I'd prefer not to switch to a different module, especially since it's great that PyParallel doesn't require root/administrator privileges.
I was having trouble with giveio.sys and LoadDrv.exe as well.
There is a handy installer that does it all automatically:
http://sourceforge.net/projects/pyserial/files/pyparallel/giveio/
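Once the driver is installed, a quick smoke test only takes a few lines (assuming the default LPT1 port):

```python
import parallel  # PyParallel; needs the giveio.sys driver on NT/2k/XP

# Open the default parallel port and toggle the data lines D0..D7.
# If giveio is not loaded, opening the port fails on Windows NT/2k/XP.
p = parallel.Parallel()
p.setData(0x55)
p.setData(0xAA)
```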
