PyBullet cannot load urdf file - python

When I try running an example PyBullet file, like the one below, I keep getting the following error message:
import pybullet as p
from time import sleep
import pybullet_data

physicsClient = p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -10)
planeId = p.loadURDF("plane.urdf", [0, 0, -2])
boxId = p.loadURDF("cube.urdf", [0, 3, 2], useMaximalCoordinates=True)
bunnyId = p.loadSoftBody("bunny.obj")#.obj")#.vtk")
useRealTimeSimulation = 1

if (useRealTimeSimulation):
    p.setRealTimeSimulation(1)

p.changeDynamics(boxId, -1, mass=10)

while p.isConnected():
    p.setGravity(0, 0, -10)
    if (useRealTimeSimulation):
        sleep(0.01)  # Time in seconds.
    else:
        p.stepSimulation()
The error shows as follows:
bunnyId = p.loadSoftBody("bunny.obj")#.obj")#.vtk")
error: Cannot load soft body.
I have Windows 10. I'm running PyBullet on a notebook (Python 3.6), but I get the same error with Visual Studio (Python 3.7). What can I do to fix it?

This is a solved issue; see https://github.com/bulletphysics/bullet3/pull/4010#issue-1035353580.
Either upgrading pybullet or copying the .obj file from the repository into the pybullet_data directory will fix it.
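A minimal sketch of the copy-the-asset route (the helper name and paths here are illustrative, not part of pybullet; the demo uses temporary directories so it runs anywhere, but in practice src_obj would point at bullet3/data/bunny.obj in a repository checkout and data_dir would be pybullet_data.getDataPath()):

```python
import os
import shutil
import tempfile

def ensure_soft_body_asset(src_obj, data_dir):
    """Copy a soft-body mesh (e.g. bunny.obj) into data_dir if it is missing."""
    dest = os.path.join(data_dir, os.path.basename(src_obj))
    if not os.path.exists(dest):
        shutil.copy(src_obj, dest)
    return dest

# Demo with throwaway directories standing in for the repo and pybullet_data.
repo_dir = tempfile.mkdtemp()
data_dir = tempfile.mkdtemp()
src = os.path.join(repo_dir, "bunny.obj")
with open(src, "w") as f:
    f.write("# placeholder mesh\n")
print(ensure_soft_body_asset(src, data_dir))
```

After the copy, p.loadSoftBody("bunny.obj") should find the mesh via the search path set by p.setAdditionalSearchPath(pybullet_data.getDataPath()).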

Related

Unable to run program using Pycharm Debugger, but normal Run works fine when using opencv - [Errno 2] No such file or directory:

I have checked a lot of similar posts where users had issues with the PyCharm debugger not working while the run button worked fine, but none of them applied to an issue with OpenCV.
Here is my simple script:
import numpy as np
import cv2
image = cv2.imread('../FGVC/data/balloon/1548266469.88633.png')
image_2 = cv2.imread('../FGVC/data/balloon_mask/1.jpg')
cv2.imshow('img', image)
cv2.imshow('img2', image_2)
yo = np.bitwise_and(image, image_2)
ye = np.bitwise_or(image, image_2)
cv2.imshow('combined', yo)
cv2.imshow('combinedd', ye)
cv2.waitKey(0)
I get the following exception whenever I import cv2 through the python debugger.
[Errno 2] No such file or directory: '/home/user/anaconda3/envs/py36/lib/python3.6/site-packages/cv2/config-3.6.py'
I am using an Anaconda virtual environment running Python 3.6. I checked the cv2 directory and indeed there is no config-3.6.py file, but there was a config-3.py file, so I duplicated that and called it config-3.6.py, but then I started running into the following issue:
(<class 'KeyError'>, KeyError(b'LD_LIBRARY_PATH',), <traceback object at 0x7fe97dd32ac8>)
This is the content of my config-3.6.py file.
PYTHON_EXTENSIONS_PATHS = [
    LOADER_DIR
] + PYTHON_EXTENSIONS_PATHS

ci_and_not_headless = False

try:
    from .version import ci_build, headless
    ci_and_not_headless = ci_build and not headless
except:
    pass

# the Qt plugin is included currently only in the pre-built wheels
if sys.platform.startswith("linux") and ci_and_not_headless:
    os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), "qt", "plugins"
    )

# Qt will throw warning on Linux if fonts are not found
if sys.platform.startswith("linux") and ci_and_not_headless:
    os.environ["QT_QPA_FONTDIR"] = os.path.join(
        os.path.dirname(os.path.abspath(__file__)), "qt", "fonts"
    )
Edit: what is also strange is that, after the exception is raised, the rest of the OpenCV script works as expected, the same way it does with the run button.
I figured out that the issue was that I had set the PyCharm debugger to break "On Raise" of an exception for a different project, and the setting carried over to this project.
I just unchecked the "On Raise" option and the problem was solved.

Running a simple_3dviz script on macOS Big Sur getting modernGL error

I'm trying to run a simple_3dviz script, but I always get the following error:
File "...venv/lib/python3.8/site-packages/moderngl/context.py", line 1228, in program
res.mglo, ls1, ls2, ls3, ls4, ls5, res._subroutines, res._geom, res._glo = self.mglo.program(
moderngl.error.Error: cannot create program
I have already installed PyOpenGL and PyOpenGL-accelerate, but the error stays the same.
My Python script:
from simple_3dviz import Mesh
from simple_3dviz.window import show
from simple_3dviz.utils import render
...
show(
    Mesh.from_voxel_grid(voxels=workpiece.voxels),
    light=(-1, -1, 1)
)

How to make pywin32 work with pyinstaller to make an executable?

I was writing code to change the creation time of a file to the creation time of another file (copying the creation time from one file to another). I found some answers about changing creation time at the following links, and they work very well:
How do I change the file creation date of a Windows file?
Modify file create / access / write timestamp with python under windows
But when I try to put the code mentioned in an executable using pyinstaller it shows the following error:
AttributeError: 'pywintypes.datetime' object has no attribute 'astimezone'
How can I get over this error?
Below is the code which can be used to reproduce the situation:
import pywintypes, win32file, win32con, ntsecuritycon
import os
def changeFileCTime(fname, createdNewtime):
    wintime = pywintypes.Time(int(createdNewtime))
    winfile = win32file.CreateFile(fname, ntsecuritycon.FILE_WRITE_ATTRIBUTES, 0, None, win32con.OPEN_EXISTING, 0, None)
    win32file.SetFileTime(winfile, wintime, None, None)
    winfile.close()

def main():
    filename = input("File to copy creation time from: ")
    ctime = os.path.getctime(filename)
    filename = input("File to set ctime to: ")
    changeFileCTime(filename, ctime)

if __name__ == '__main__':
    main()
Versions of programs:
Python - 3.8.2
Pyinstaller - 4.1
PyInstaller command:
pyinstaller --onefile --console test.py
where test.py is the filename with the above code.
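No answer is recorded here, but one workaround often suggested for this AttributeError in frozen pywin32 programs (treat it as an assumption to try, not a confirmed fix for this exact setup) is forcing PyInstaller to bundle the win32timezone module, which pywintypes.datetime relies on for time-zone support:

```shell
# Hypothetical workaround: explicitly bundle win32timezone so the frozen
# executable can resolve pywintypes.datetime's time-zone machinery.
pyinstaller --onefile --console --hidden-import win32timezone test.py
```

The --hidden-import flag tells PyInstaller to include a module its static analysis did not detect.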

Spark Release 2.4.0 can not work on win7 because of ImportError: No module named 'resource'

I attempted to install Spark Release 2.4.0 on my PC, which runs win7_x64.
However, when I try to run some simple code to check whether Spark is ready to work:
code:
import os
from pyspark import SparkConf, SparkContext
conf = SparkConf().setMaster('local[*]').setAppName('word_count')
sc = SparkContext(conf=conf)
d = ['a b c d', 'b c d e', 'c d e f']
d_rdd = sc.parallelize(d)
rdd_res = d_rdd.flatMap(lambda x: x.split(' ')).map(lambda word: (word, 1)).reduceByKey(lambda a, b: a+b)
print(rdd_res)
print(rdd_res.collect())
I get this error:
[screenshot: error1]
I opened the worker.py file to check the code.
I found that, in version 2.4.0, the code is:
[screenshot: worker.py v2.4.0]
However, in version 2.3.2, the code is:
[screenshot: worker.py v2.3.2]
Then I reinstalled spark-2.3.2-bin-hadoop2.7, and the code works fine.
Also, I found this question:
ImportError: No module named 'resource'
So I think spark-2.4.0-bin-hadoop2.7 cannot work on win7, because worker.py imports the resource module, which is a Unix-specific package.
I hope someone can fix this problem in Spark.
I got this error too, with Spark 2.4.0, JDK 11 and Kafka 2.11 on Windows.
I was able to resolve it by doing:
1) cd spark_home\python\lib, e.g. cd C:\myprograms\spark-2.4.0-bin-hadoop2.7\python\lib
2) unzip pyspark.zip
3) edit worker.py, comment out 'import resource' and also the following paragraph, and save the file. This paragraph is just an add-on, not core code, so it is fine to comment it out.
4) remove the older pyspark.zip and create a new zip.
5) in Jupyter Notebook, restart the kernel.
The commented-out paragraph in worker.py:
# set up memory limits
# memory_limit_mb = int(os.environ.get('PYSPARK_EXECUTOR_MEMORY_MB', "-1"))
# total_memory = resource.RLIMIT_AS
# try:
#     if memory_limit_mb > 0:
#         (soft_limit, hard_limit) = resource.getrlimit(total_memory)
#         msg = "Current mem limits: {0} of max {1}\n".format(soft_limit, hard_limit)
#         print(msg, file=sys.stderr)
#         # convert to bytes
#         new_limit = memory_limit_mb * 1024 * 1024
#         if soft_limit == resource.RLIM_INFINITY or new_limit < soft_limit:
#             msg = "Setting mem limits to {0} of max {1}\n".format(new_limit, new_limit)
#             print(msg, file=sys.stderr)
#             resource.setrlimit(total_memory, (new_limit, new_limit))
# except (resource.error, OSError, ValueError) as e:
#     # not all systems support resource limits, so warn instead of failing
#     print("WARN: Failed to set memory limit: {0}\n".format(e), file=sys.stderr)
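Steps 2) to 4) above can also be scripted. A minimal sketch (the helper name is mine; it only comments out the bare 'import resource' line, so the memory-limit paragraph that uses resource still needs the manual edit described in step 3):

```python
import zipfile

def strip_resource_import(zip_in, zip_out):
    """Copy a pyspark.zip, commenting out 'import resource' in worker.py."""
    with zipfile.ZipFile(zip_in) as zin, zipfile.ZipFile(zip_out, "w") as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename.endswith("worker.py"):
                text = data.decode("utf-8")
                # Comment out the Unix-only import so Windows workers can start.
                text = text.replace("\nimport resource\n", "\n# import resource\n")
                data = text.encode("utf-8")
            zout.writestr(item, data)
```

You would then replace the original pyspark.zip with the patched archive and restart the kernel, as in steps 4) and 5).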
Python has some compatibility issues with the newly released Spark 2.4.0; I also faced a similar issue. If you download and configure Spark 2.3.2 on your system (and change the environment variables accordingly), the problem will be resolved.

LibraryLoader object is not callable typeerror - python cdll - ctypes package

I tried an upload scenario with keyboard-press simulation through the ctypes package in Python, for Selenium WebDriver. It works fine on my local machine, which runs Windows 8.1.
But when I run the same code on my development server, a Linux box that calls a remote Windows 7 machine, I get an error like "windll not found" in this part of my code:
def SendInput(*inputs):
    nInputs = len(inputs)
    LPINPUT = INPUT * nInputs
    pInputs = LPINPUT(*inputs)
    cbSize = ctypes.c_int(ctypes.sizeof(INPUT))
    return ctypes.windll.user32.SendInput(nInputs, pInputs, cbSize)
So I changed my code to an if/else statement: if the OS is Windows, use the snippet above, else use the snippet below.
cdll.LoadLibrary("libc.so.6")
Xtst = cdll("libXtst.so.6")
Xlib = cdll("libX11.so.6")
dpy = Xtst.XOpenDisplay(None)

def SendInput(txt):
    for c in txt:
        sym = Xlib.XStringToKeysym(c)
        code = Xlib.XKeysymToKeycode(dpy, sym)
        Xtst.XTestFakeKeyEvent(dpy, code, True, 0)
        Xtst.XTestFakeKeyEvent(dpy, code, False, 0)
    Xlib.XFlush(dpy)
But after adding this, I am getting the following error on my Linux box:
TypeError: 'LibraryLoader' object is not callable.
I searched for resources on the internet, but could not find anything. Can someone help me get through this?
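No answer is recorded here, but the TypeError itself has a known cause: ctypes.cdll is a LibraryLoader instance, not a class, so it cannot be called like a constructor. A small sketch of the distinction (the X11 loads are left commented out because they assume libXtst/libX11 are installed):

```python
import ctypes
from ctypes import cdll

# Calling the loader object itself reproduces the reported error:
try:
    cdll("libXtst.so.6")
except TypeError as exc:
    print(exc)  # 'LibraryLoader' object is not callable

# The supported spellings go through LoadLibrary() or ctypes.CDLL:
# Xtst = cdll.LoadLibrary("libXtst.so.6")
# Xlib = ctypes.CDLL("libX11.so.6")
```

Replacing cdll("libXtst.so.6") and cdll("libX11.so.6") in the snippet above with cdll.LoadLibrary(...) calls should remove the TypeError.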
