I woke up today and all of a sudden I get
C:\Python27\lib\site-packages\pyopencl\__init__.py:61: CompilerWarning: Non-empty compiler output encountered. Set the environment variable PYOPENCL_COMPILER_OUTPUT=1 to see more.
"to see more.", CompilerWarning)
C:\Python27\lib\site-packages\pyopencl\cache.py:101: UserWarning: could not obtain cache lock--delete 'c:\users\User\appdata\local\temp\pyopencl-compiler-cache-v2-uiduser-py2.7.3.final.0\lock' if necessary
% self.lock_file)
whenever I run any sort of PyOpenCL code, for example:
import numpy
import pyopencl as cl
import pyopencl.array as clarray
from pyopencl.reduction import ReductionKernel
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
krnl = ReductionKernel(ctx, numpy.float32, neutral="0",
                       reduce_expr="a+b", map_expr="x[i]*y[i]",
                       arguments="__global float *x, __global float *y")
x = clarray.arange(queue, 400, dtype=numpy.float32)
y = clarray.arange(queue, 400, dtype=numpy.float32)
m = krnl(x, y).get()
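For reference, the kernel reduces x[i]*y[i] with a+b, so m is the dot product of x and y; a quick sanity check against numpy (my addition, not part of the original snippet):

expected = numpy.dot(x.get(), y.get())
print(m, expected)  # should agree up to float32 rounding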
The sample and part of the solution came from here.
The solution suggested rolling back numpy, which I did (from 1.8.0 to 1.7.2), but I still have the same problem.
Edit 1
Added the following, as per a suggestion:
import os
os.environ['PYOPENCL_COMPILER_OUTPUT'] = '1'
C:\Python27\lib\site-packages\pyopencl\__init__.py:57: CompilerWarning: From-source build succeeded, but resulted in non-empty logs:
Build on <pyopencl.Device 'Intel(R) HD Graphics 4000' on 'Intel(R) OpenCL' at 0x51eadff0> succeeded, but said:
fcl build 1 succeeded.
fcl build 2 succeeded.
bcl build succeeded.
warn(text, CompilerWarning)
import os
os.environ['PYOPENCL_COMPILER_OUTPUT'] = '1'
Do this to see the compiler output; I've gotten the same message before. It was just the Intel OpenCL compiler saying it had vectorized/optimized the OpenCL kernel.
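If the log is benign like that, the warning itself can also be silenced with Python's standard warnings machinery (a minimal sketch, assuming the class is exposed as pyopencl.CompilerWarning, which is where current versions define it):

import warnings
import pyopencl as cl

# Ignore only PyOpenCL's compiler chatter; other warnings stay visible
warnings.simplefilter("ignore", cl.CompilerWarning)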
Related
When I try running an example PyBullet file, like the one below, I keep getting the following error message:
import pybullet as p
from time import sleep
import pybullet_data

physicsClient = p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -10)
planeId = p.loadURDF("plane.urdf", [0, 0, -2])
boxId = p.loadURDF("cube.urdf", [0, 3, 2], useMaximalCoordinates=True)
bunnyId = p.loadSoftBody("bunny.obj")
useRealTimeSimulation = 1

if useRealTimeSimulation:
    p.setRealTimeSimulation(1)
p.changeDynamics(boxId, -1, mass=10)

while p.isConnected():
    p.setGravity(0, 0, -10)
    if useRealTimeSimulation:
        sleep(0.01)  # Time in seconds.
    else:
        p.stepSimulation()
The error shows as follows:
bunnyId = p.loadSoftBody("bunny.obj")
error: Cannot load soft body.
I'm on Windows 10, running PyBullet from a notebook (Python 3.6), but I get the same error in Visual Studio (Python 3.7). What can I do to fix it?
This is a solved issue; see https://github.com/bulletphysics/bullet3/pull/4010#issue-1035353580.
Either upgrading pybullet or copying the .obj file from the repository into the pybullet_data directory will fix it.
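If you copy the file, the target is the same directory the example already adds to the search path; you can print it with the call the question itself uses:

import pybullet_data
print(pybullet_data.getDataPath())  # copy bunny.obj from the bullet3 repo into this directory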
I am working on macOS Big Sur (version 11.5.2) with Python 3.9.5.
I keep getting this import error when I try to run my Python code:
import subprocess
import psutil
import schedule
import time
import csv
import sys
import httpx
import asyncio
import json

csv_headers = ["company_id", "dep_id", "host_id", "cpu_count", "cpu_load", "ram_total",
               "ram_used", "diskspace_total", "diskspace_used", "diskspace_free", "operations_read",
               "operations_write", "timeresponse_min_ms", "timeresponse_avg_ms", "timeresponse_max_ms"]

def get_statistics_windows():
    statistics = {}
    statistics['company_id'] = "comp301"
    statistics['dep_id'] = "dep301"
    statistics['host_id'] = "host301"
    # Get physical and logical CPU count
    physical_and_logical_cpu_count = psutil.cpu_count()
    statistics['cpu_count'] = physical_and_logical_cpu_count
    # Get CPU load in percent (system-wide)
    statistics['cpu_load'] = psutil.cpu_percent(interval=1)
    # Memory usage (GiB)
    statistics['ram_total'] = round(psutil.virtual_memory().total / 1024 ** 3, 2)
    statistics['ram_used'] = round(psutil.virtual_memory().used / 1024 ** 3, 2)
    # Disk usage (GiB)
    statistics['diskspace_total'] = round(psutil.disk_usage('/').total / 1024 ** 3, 2)
    statistics['diskspace_used'] = round(psutil.disk_usage('/').used / 1024 ** 3, 2)
    statistics['diskspace_free'] = round(psutil.disk_usage('/').free / 1024 ** 3, 2)
    statistics['operations_read'] = psutil.disk_io_counters().read_count  # number of read operations
    statistics['operations_write'] = psutil.disk_io_counters().write_count  # number of write operations
    # Network latency via ping
    ping_result = subprocess.run(['ping', '-i 5', '-c 5', 'google.com'], stdout=subprocess.PIPE).stdout.decode(
        'utf-8').split('\n')
    min_response_time, max_response_time, avg_response_time = ping_result[0].split('=')[-1], ping_result[1].split('=')[-1], ping_result[2].split('=')[-1]
    statistics['timeresponse_min_ms'] = min_response_time.replace('ms', '').strip()
    statistics['timeresponse_avg_ms'] = avg_response_time.replace('ms', '').strip()
    statistics['timeresponse_max_ms'] = max_response_time.replace('ms', '').strip()
    return statistics
There is definitely no mistake in the code itself; I was asked to test it, and it works on other machines.
I tried
pip install psutil
and got "Successfully installed psutil-5.8.0", but in Visual Studio I still get the error.
I also ran sudo pip install --upgrade psutil, which successfully updated the package, but it still doesn't work in Visual Studio.
Does anybody know how to make Visual Studio see this package so my code can run?
Thanks in advance!
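A common cause of this symptom (an assumption on my part, not confirmed in the thread) is that pip installed psutil into a different Python than the one Visual Studio runs. Print the interpreter Visual Studio actually uses:

import sys
print(sys.executable)  # the Python binary Visual Studio is running

then install into exactly that interpreter (e.g. <that path> -m pip install psutil), or point Visual Studio's Python environment at the interpreter where pip put the package.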
I'm trying to run Keras/Theano using the GPU in a Jupyter notebook; my system is macOS High Sierra 10.13 with an NVIDIA GeForce GT 330M.
I followed the instructions on this site: I installed CUDA and cuDNN, then edited ~/.bash_profile. I can't use the command $ THEANO_FLAGS=mode=FAST_RUN python imdb_cnn.py since I'm using a Jupyter notebook.
Moreover, I've also edited my .theanorc file, and it looks like this:
[global]
floatX = float32
device = gpu
force_device = True
optimizer_including=cudnn
[nvcc]
fastmath = True
[cuda]
root=Users/jack/cuda
Then I ran this code to check whether I was using the GPU:
from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
              ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')
...and it was using the CPU instead!
What should I put at the top of this code to use the GPU?
I've also tried adding this:
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
but it didn't work; I always get this output:
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 2.618842 seconds
Result is [1.2317803 1.6187934 1.5227807 ... 2.2077181 2.2996776 1.6232328]
Used the cpu
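One detail worth checking (my note, not from the original thread): Theano reads THEANO_FLAGS when the theano module is first imported, so setting os.environ has no effect if theano (or Keras) was already imported earlier in the notebook session. The assignment has to run in the very first cell, before any Theano import:

import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"

# Only import theano after the flags are set, so they take effect
import theano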
EDIT: I tried to run the previous "test" code in the terminal using
THEANO_FLAGS=mode=FAST_RUN,device=cuda0,floatX=float32 python gpuocpu.py
I get this error:
ERROR (theano.gpuarray): Could not initialize pygpu, support disabled
Traceback (most recent call last):
File "/Users/jack/miniconda3/lib/python3.6/site-packages/theano/gpuarray/__init__.py", line 227, in <module>
use(config.device)
File "/Users/jack/miniconda3/lib/python3.6/site-packages/theano/gpuarray/__init__.py", line 214, in use
init_dev(device, preallocate=preallocate)
File "/Users/jack/miniconda3/lib/python3.6/site-packages/theano/gpuarray/__init__.py", line 99, in init_dev
**args)
File "pygpu/gpuarray.pyx", line 658, in pygpu.gpuarray.init
File "pygpu/gpuarray.pyx", line 587, in pygpu.gpuarray.pygpu_init
I found this thread, and I ran set DEVICE=cuda0 and set GPUARRAY_CUDA_VERSION=80 in the terminal, but I still get the same error, plus the output saying that I'm using the CPU.
EDIT2: I re-installed CUDA (using the .dmg file, not from the terminal), followed the NVIDIA Installation Guide, and applied these tips:
Uncheck System Preferences > Energy Saver > Automatic Graphic Switch
Drag the Computer sleep bar to Never in System Preferences > Energy Saver
Now I get this new error: Segmentation fault: 11
>>> import pygpu
>>> pygpu.test()
pygpu is installed in /Users/jack/miniconda3/lib/python3.6/site-packages/pygpu
NumPy version 1.15.4
NumPy relaxed strides checking option: True
NumPy is installed in /Users/jack/miniconda3/lib/python3.6/site-packages/numpy
Python version 3.6.7 |Anaconda, Inc.| (default, Oct 23 2018, 14:01:38) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
nose version 1.3.7
Segmentation fault: 11
I attempted to install Spark 2.4.0 on my PC, which runs Win7 x64.
However, when I try to run some simple code to check whether Spark is ready to work:
code:
import os
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster('local[*]').setAppName('word_count')
sc = SparkContext(conf=conf)
d = ['a b c d', 'b c d e', 'c d e f']
d_rdd = sc.parallelize(d)
rdd_res = d_rdd.flatMap(lambda x: x.split(' ')).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a+b)
print(rdd_res)
print(rdd_res.collect())
I get this error:
error1
I opened the worker.py file to check the code.
I found that, in version 2.4.0, the code is:
worker.py v2.4.0
However, in version 2.3.2, the code is:
worker.py v2.3.2
Then I reinstalled spark-2.3.2-bin-hadoop2.7, and the code works fine.
Also, I found this question:
ImportError: No module named 'resource'
So I think spark-2.4.0-bin-hadoop2.7 cannot work on Win7 because worker.py imports the
resource module, which is a Unix-specific package.
I hope someone can fix this problem in Spark.
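To illustrate the point (my sketch, not from the original post): resource is a Unix-only standard-library module, so merely importing it fails on Windows, which is exactly what worker.py trips over. Cross-platform code usually guards the import:

try:
    import resource  # Unix-only stdlib module
    has_resource = True
except ImportError:
    has_resource = False  # e.g. on Windows, where the module does not exist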
I got this error with Spark 2.4.0, JDK 11, and Kafka 2.11 on Windows.
I was able to resolve it by doing the following:
1) cd spark_home\python\lib
e.g. cd C:\myprograms\spark-2.4.0-bin-hadoop2.7\python\lib
2) unzip pyspark.zip
3) edit worker.py: comment out 'import resource' and also the following paragraph, then save the file. This paragraph is just an add-on, not core code, so it's fine to comment it out.
4) remove the older pyspark.zip and create a new zip (see the sketch after the code block below).
5) in your Jupyter notebook, restart the kernel.
The commented-out paragraph in worker.py:
# set up memory limits
# memory_limit_mb = int(os.environ.get('PYSPARK_EXECUTOR_MEMORY_MB', "-1"))
# total_memory = resource.RLIMIT_AS
# try:
#     if memory_limit_mb > 0:
#         (soft_limit, hard_limit) = resource.getrlimit(total_memory)
#         msg = "Current mem limits: {0} of max {1}\n".format(soft_limit, hard_limit)
#         print(msg, file=sys.stderr)
#         # convert to bytes
#         new_limit = memory_limit_mb * 1024 * 1024
#         if soft_limit == resource.RLIM_INFINITY or new_limit < soft_limit:
#             msg = "Setting mem limits to {0} of max {1}\n".format(new_limit, new_limit)
#             print(msg, file=sys.stderr)
#             resource.setrlimit(total_memory, (new_limit, new_limit))
# except (resource.error, OSError, ValueError) as e:
#     # not all systems support resource limits, so warn instead of failing
#     print("WARN: Failed to set memory limit: {0}\n".format(e), file=sys.stderr)
Python has a compatibility issue with the newly released Spark 2.4.0. I also faced this issue. If you download and configure Spark 2.3.2 on your system (and change the environment variables accordingly), the problem will be resolved.
I'm trying to get a minimal OpenGL shader working in pyglet but getting the following error when it compiles:
ERROR: 0:1: '' : version '420' is not supported
It seems this might be due to pyglet using the legacy OpenGL profile, as in this question, but if that is the case, how can I get pyglet to use a different profile? I can't find it documented anywhere and feel like I must be missing something obvious.
The following code reproduces the problem (adapted from this example):
import pyglet
import pyglet.gl as gl
import ctypes

window = pyglet.window.Window()

handle = gl.glCreateProgram()
shader = gl.glCreateShader(gl.GL_VERTEX_SHADER)
source = """
#version 420
in vec3 vertex_position;
void main() {
    gl_Position = vec4(vertex_position, 1.0);
}
"""
src = ctypes.c_char_p(source)
gl.glShaderSource(shader, 1,
                  ctypes.cast(ctypes.pointer(src),
                              ctypes.POINTER(ctypes.POINTER(ctypes.c_char))), None)
gl.glCompileShader(shader)

# Retrieve compilation status
status = ctypes.c_int(0)
gl.glGetShaderiv(shader, gl.GL_COMPILE_STATUS, ctypes.byref(status))

# If compilation failed, print the log
if not status:
    gl.glGetShaderiv(shader, gl.GL_INFO_LOG_LENGTH, ctypes.byref(status))
    buffer = ctypes.create_string_buffer(status.value)
    gl.glGetShaderInfoLog(shader, status, None, buffer)
    print buffer.value

pyglet.app.run()
Hardware: MacBook Air (13-inch, Mid 2012) Intel HD Graphics 4000 1024 MB
OSX: 10.10.1
Python: 2.7.6
pyglet: 1.2.4
This seems to be an open bug: there is an unresolved issue here in the old Google Code repo, and I've filed a new bug here.
So I guess that currently (as of pyglet 1.2.4) it's not possible to get a >3.0 OpenGL context.
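If you want to confirm which context pyglet actually created, its gl_info module reports the version and renderer once a window exists (a small diagnostic sketch, not from the original answer):

import pyglet
from pyglet.gl import gl_info

window = pyglet.window.Window(visible=False)
print(gl_info.get_version())   # e.g. '2.1 ...' on a legacy context
print(gl_info.get_renderer())  # e.g. 'Intel HD Graphics 4000 OpenGL Engine'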