Can we pickle continuations in PyPy? How?

I am trying PyPy for the first time because I need serializable continuations. Specifically, this is what I am attempting:
from _continuation import continulet
import pickle

def f(cont):
    cont.switch(111)
    cont.switch(222)
    cont.switch(333)

c = continulet(f)
print(c.switch())
print(c.switch())
saved = pickle.dumps(c)
When I try to pickle c I get this error, though: NotImplementedError: continulet's pickle support is currently disabled.
So, is there some way to enable pickling of continuations? The message suggests this, but so far I couldn't find out how.
Edit: I am using "PyPy 7.3.1 with GCC 9.3.0" (Python 3.6.9) on Linux.

Related

kubeflow AttributeError: 'ComponentStore' object has no attribute 'uri_search_template'

I am trying to load some prebuilt GCP Kubeflow components using kfp.components.ComponentStore. However, I am getting this error:
line 180, in _load_component_spec_in_component_ref
if self.uri_search_template:
AttributeError: 'ComponentStore' object has no attribute 'uri_search_template'
which is raised at this line of code:
mlengine_train_op = component_store.load_component('ml_engine/train')
KFP version:
kfp 1.8.10
kfp-pipeline-spec 0.1.13
kfp-server-api 1.7.1
Steps to reproduce
import kfp
from kfp.components import func_to_container_op
COMPONENT_URL_SEARCH_PREFIX = "https://raw.githubusercontent.com/kubeflow/pipelines/1.7.1/components/gcp/"
component_store = kfp.components.ComponentStore(
    local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
I got this code from an example that was running on kfp 0.2.5. This is probably a version problem, but does anyone know how to update this code for the newer version?
I solved the issue. It is something that @Kabilan Mohanraj also mentioned in his comment: it looks like there is a bug if you don't set uri_search_template when creating the ComponentStore.
I solved it by providing it as a parameter. It doesn't really matter what you pass to the constructor, but it is required, because the code contains the following check
if self.uri_search_template:
and if you don't pass anything, that attribute never gets created in the constructor.
I fixed it like this:
component_store = kfp.components.ComponentStore(
    local_search_paths=["local_search_path"],
    url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX],
    uri_search_template="{name}/")
I hope it will help someone in the future.
I suggested in the GitHub issue to address this by either making the error message clearer or adding a better check for the uri_search_template attribute.
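As an alternative (my own suggestion, not part of the fix above), you can skip ComponentStore entirely and load the component YAML by URL with kfp.components.load_component_from_url. The exact path below is an assumption based on the layout of the 1.7.1 release branch, so adjust it to wherever the component.yaml actually lives:
import kfp

# Assumed location of the ml_engine/train component.yaml in the 1.7.1 tag.
TRAIN_COMPONENT_URL = (
    "https://raw.githubusercontent.com/kubeflow/pipelines/1.7.1/"
    "components/gcp/ml_engine/train/component.yaml"
)
mlengine_train_op = kfp.components.load_component_from_url(TRAIN_COMPONENT_URL)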

Get CPU and GPU Temp using Python Windows

I was wondering if there was a way to get the CPU and the GPU temperature in Python. I have already found a way for Linux (using psutil.sensors_temperatures()), and I wanted to find a way for Windows.
A way to find the temperatures for macOS would also be appreciated, but I mainly want a way for Windows.
I prefer to only use python modules, but DLL and C/C++ extensions are also completely acceptable!
When I try doing the below, I get None:
import wmi
w = wmi.WMI()
print(w.Win32_TemperatureProbe()[0].CurrentReading)
When I try doing the below, I get an error:
import wmi
w = wmi.WMI(namespace=r"root\wmi")
temperature_info = w.MSAcpi_ThermalZoneTemperature()[0]
print(temperature_info.CurrentTemperature)
Error:
wmi.x_wmi: <x_wmi: Unexpected COM Error (-2147217396, 'OLE error 0x8004100c', None, None)>
I have heard of OpenHardwareMonitor, but this requires me to install something that is not a Python module. I would also prefer not to have to run the script as admin to get the results.
I am also fine with running windows cmd commands with python, but I have not found one that returns the CPU temp.
Update: I found this: https://stackoverflow.com/a/58924992/13710015.
I can't figure out how to use it though.
When I tried doing: print(OUTPUT_temp._fields_), I got
[('Board Temp', <class 'ctypes.c_ulong'>), ('CPU Temp', <class 'ctypes.c_ulong'>), ('Board Temp2', <class 'ctypes.c_ulong'>), ('temp4', <class 'ctypes.c_ulong'>), ('temp5', <class 'ctypes.c_ulong'>)]
Note: I really do not want to run this as admin. If I absolutely have to, I can, but I prefer not to.
I don't think there is a direct way to achieve that. Some CPU vendors don't expose the temperature through WMI at all.
You could use OpenHardwareMonitorLib.dll, the dynamic library that ships with OpenHardwareMonitor.
First, download OpenHardwareMonitor; the archive contains a file called OpenHardwareMonitorLib.dll (version 0.9.6, December 2020).
Install the module pythonnet:
pip install pythonnet
The code below works fine on my PC (it gets the CPU temperature):
import clr  # the pythonnet module
clr.AddReference(r'YourdllPath')
# e.g. clr.AddReference(r'OpenHardwareMonitor/OpenHardwareMonitorLib'), without .dll

from OpenHardwareMonitor.Hardware import Computer

c = Computer()
c.CPUEnabled = True  # get the info about the CPU
c.GPUEnabled = True  # get the info about the GPU
c.Open()
while True:
    for a in range(0, len(c.Hardware[0].Sensors)):
        # print(c.Hardware[0].Sensors[a].Identifier)
        if "/temperature" in str(c.Hardware[0].Sensors[a].Identifier):
            print(c.Hardware[0].Sensors[a].get_Value())
    c.Hardware[0].Update()
To get the GPU temperature, change c.Hardware[0] to c.Hardware[1].
Compare the result with what the Open Hardware Monitor GUI itself reports.
Attention: if you want to get the CPU temperature, you need to run the script as Administrator. If not, you will only get the Load values. The GPU temperature can be read without admin permissions (tested on Windows 10 21H1).
I adapted this from a Chinese blog post.
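If you prefer not to rely on the position of the CPU and GPU inside c.Hardware, the loop above can be rewritten to filter by hardware and sensor type instead. Treat this as a sketch: the HardwareType and SensorType member names (CPU, GpuNvidia, GpuAti, Temperature) are what I recall from OpenHardwareMonitorLib, so verify them against the DLL version you downloaded:
import clr  # pythonnet
clr.AddReference(r'YourdllPath')  # path to OpenHardwareMonitorLib, without .dll

from OpenHardwareMonitor.Hardware import Computer, HardwareType, SensorType

c = Computer()
c.CPUEnabled = True
c.GPUEnabled = True
c.Open()

for hw in c.Hardware:
    hw.Update()
    # Only look at the CPU and at NVIDIA/AMD GPUs.
    if hw.HardwareType in (HardwareType.CPU, HardwareType.GpuNvidia, HardwareType.GpuAti):
        for sensor in hw.Sensors:
            if sensor.SensorType == SensorType.Temperature:
                print(hw.Name, str(sensor.Identifier), sensor.get_Value())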
I found a pretty good module for getting the temperature of NVIDIA GPUs.
pip install gputil
Code to print out the temperature:
import GPUtil
gpu = GPUtil.getGPUs()[0]
print(gpu.temperature)
I found it here
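If the machine has more than one NVIDIA card, the same module can report all of them. The attributes used below (name, temperature, load) are the ones GPUtil exposes by reading nvidia-smi, but treat this as a sketch:
import GPUtil

for gpu in GPUtil.getGPUs():
    # GPUtil gathers these values by calling nvidia-smi under the hood.
    print(f"{gpu.name}: {gpu.temperature} C, load {gpu.load * 100:.0f}%")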
You can also use gpiozero to get the temperature (note that gpiozero targets the Raspberry Pi, so this reads the temperature from Linux rather than Windows).
First pip install gpiozero, then import CPUTemperature from gpiozero and read cpu.temperature.
code:
from gpiozero import CPUTemperature
cpu = CPUTemperature()
print(cpu.temperature)
hope you enjoy. :)

VSCode Itellisense with python C extension module (petsc4py)

I'm currently using a python module called petsc4py (https://pypi.org/project/petsc4py/). My main issue is that none of the typical intellisense features seems to work with this module.
I'm guessing it might have something to do with it being a C extension module, but I am not sure exactly why this happens. I initially thought that intellisense was unable to look inside ".so" files, but it seems that numpy is able to do this with the array object, which in my case is inside a file called multiarray.cpython-37m-x86_64-linux-gnu (check example below).
Does anyone know why I see this behaviour with the petsc4py module? Is there anything that I (or the developers of petsc4py) can do to get IntelliSense to work?
Example:
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc
x_p = PETSc.Vec().create()
x_p.setSizes(10)
x_p.setFromOptions()
u_p = x_p.duplicate()
import numpy as np
x_n = np.array([1,2,3])
u_n = x_n.copy()
In this example, when working with a Vec object from petsc4py, IntelliSense cannot find u_p.duplicate(); the only suggestion is a repetition of the function called immediately before. With the numpy array, however, u_n.copy() is suggested perfectly.
If you're compiling in-place then you're bumping up against https://github.com/microsoft/python-language-server/issues/197.
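A more general workaround, independent of that issue (and not specific to petsc4py), is to give the language server type stubs for the compiled module. Pylance/Pyright read .pyi files from the directory configured in the python.analysis.stubPath setting. The stub below is a hand-written, deliberately incomplete sketch; the signatures are illustrative rather than copied from petsc4py, so fill in the ones you actually use:
# stubs/petsc4py/PETSc.pyi  (add an empty stubs/petsc4py/__init__.pyi next to it)
from typing import Any

class Vec:
    def create(self, comm: Any = ...) -> "Vec": ...
    def setSizes(self, size: Any, bsize: Any = ...) -> None: ...
    def setFromOptions(self) -> None: ...
    def duplicate(self) -> "Vec": ...
With "python.analysis.stubPath": "stubs" in settings.json, completions such as u_p.duplicate() are then resolved from the stub instead of the binary .so.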

Python 3.6: Why does pickle.dumps(nparray) permanently increase refcount?

Pickling an np.ndarray increments its reference count inside the dumps function, but the count is never decremented afterwards.
Python 3.6.4 Anaconda
Ubuntu 16.04.5 LTS
numpy 1.16.0
I have already tried converting to a list using numpy.ndarray.tolist(), but that method is far too slow.
import numpy as np
import pickle
import sys
a = np.ndarray((10, 10), dtype=np.uint8)
print(sys.getrefcount(a)) # 2
pickle.dumps(a)
print(sys.getrefcount(a)) # 3
I would expect the output to be 2, 2, because of the Py_DECREF that occurs in the pickler's dumps function; however, the extra reference remains.
The output is 2, 3 and I cannot fix it. I am leaking memory like crazy.
Currently digging into _pickle.c.
You've run into this specific bug, which is a regression in NumPy 1.16.0 only. New code added to support the new pickle protocol 5 was leaking references to a bound __reduce__ method in the fallback case.
You can either wait for that bug to be fixed and 1.16.1 to be released, or go back to Numpy 1.15.4.
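If you can neither wait nor downgrade, one possible stopgap (my own suggestion, not from the bug report) is to serialize arrays through the .npy format instead of pickle. That avoids ndarray.__reduce__ and should therefore sidestep the leaking code path, though I have not verified it against 1.16.0 specifically:
import io
import numpy as np

def dumps_array(a):
    buf = io.BytesIO()
    np.save(buf, a, allow_pickle=False)  # plain .npy bytes, no pickle machinery
    return buf.getvalue()

def loads_array(data):
    return np.load(io.BytesIO(data), allow_pickle=False)

a = np.ndarray((10, 10), dtype=np.uint8)
blob = dumps_array(a)
assert (loads_array(blob) == a).all()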

Tensorrt Plugin and caffe parser in python

I am new to TensorRT and I am not very familiar with C either. May I ask if there is any example that imports a Caffe model (via the caffe parser) and at the same time uses a plugin, all from Python? Plugin library example: "https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/_nv_infer_plugin_8h_source.html".
I saw an example doing something like the code below. Is it necessary to modify the PluginFactory class, or has that already been done by the Python plugin API?
import tensorrt as trt
import tensorrtplugins
from tensorrt.plugins import _nv_infer_plugin_bindings as nvinferplugin
from tensorrt.parsers import caffeparser

plugin_factory = tensorrtplugins.FullyConnectedPluginFactory()
parser = caffeparser.create_caffe_parser()
parser.set_plugin_factory(plugin_factory)

engine = trt.utils.caffe_to_trt_engine(G_LOGGER,
                                       MODEL_PROTOTXT,
                                       CAFFE_MODEL,
                                       1,
                                       1 << 20,
                                       OUTPUT_LAYERS,
                                       trt.infer.DataType.FLOAT,
                                       plugin_factory)
P.S.: I am trying to convert YOLO2 to the TensorRT format. Some of its layers (e.g. kYOLOREORG and kPRELU) can only be supported through the plugin.
Another way to do this would be to add the plugin while constructing the network, via the network.add_plugin_ext() method? However, I am not sure how to specify the previous layer, since the rest of the network is going to be imported later.
Thank you so much for your answer. Your help will be much appreciated!
