How to install the copperhead Python module?

I want to install this module but I get an error.
The error on my PC:
C:\Users\Ali\Desktop\copperhead-master>python setup.py
C:\Users\Ali\Desktop\copperhead-master\setuptools-0.6c9-py2.7.egg-info already exists
scons: Reading SConscript files ...
Checking C:\Anaconda\MinGW\bin\g++.exe version... (cached) C:\Anaconda\MinGW\bin\g++.exe was not found
g++ 4.5 or better required, please add path to siteconf.py
Traceback (most recent call last):
File "setup.py", line 40, in <module>
raise CompileError("Error while building Python Extensions")
distutils.errors.CompileError: Error while building Python Extensions
C:\Users\Ali\Desktop\copperhead-master>
I edited siteconf.py, but I still get the error.
The contents of siteconf.py:
#! /usr/bin/env python
#
# Configuration file.
# Use Python syntax, e.g.:
# VARIABLE = "value"
#
# The following information can be recorded:
#
# CXX : path and name of the host c++ compiler, eg: /usr/bin/g++-4.5
#
# CC : path and name of the host c compiler, eg: /usr/bin/gcc
#
# BOOST_INC_DIR : Directory where the Boost include files are found.
#
# BOOST_LIB_DIR : Directory where Boost shared libraries are found.
#
# BOOST_PYTHON_LIBNAME : Name of Boost::Python shared library.
# NOTE: Boost::Python must be compiled using the same compiler
# that was used to build your Python. Strange errors will
# ensue if this is not true.
# CUDA_INC_DIR : Directory where CUDA include files are found
#
# CUDA_LIB_DIR : Directory where CUDA libraries are found
#
# NP_INC_DIR : Directory where Numpy include files are found.
#
# TBB_INC_DIR : Directory where TBB include files are found
#
# TBB_LIB_DIR : Directory where TBB libraries are found
#
# THRUST_DIR : Directory where Thrust include files are found.
#
BOOST_INC_DIR = "C:\\Boost\\include\\boost-1_55\\boost"
BOOST_LIB_DIR = "C:\\Boost\\lib"
BOOST_PYTHON_LIBNAME = None
CC = "C:\\Anaconda\\MinGW\\bin\\gcc.exe"
CUDA_INC_DIR = "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v5.5\\include"
CUDA_LIB_DIR = "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v5.5\\lib\\x64"
CXX = "C:\\Anaconda\\MinGW\\bin\\g++.exe"
NP_INC_DIR = "C:\\Anaconda\\lib\\site-packages\\numpy\\core\\include"
TBB_INC_DIR = None
TBB_LIB_DIR = None
THRUST_DIR = None
The g++ directory and file name are correct, so what do I need to do to fix this error?
I use \\ inside the paths; is that correct? I also tested with \ and //.
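For the path question: all of the following spellings point at the same file in Python, so the doubled backslashes in siteconf.py above should be fine. A minimal sketch, reusing the CXX entry from the file above:
CXX = "C:\\Anaconda\\MinGW\\bin\\g++.exe"   # escaped backslashes
CXX = r"C:\Anaconda\MinGW\bin\g++.exe"      # raw string, no escaping needed
CXX = "C:/Anaconda/MinGW/bin/g++.exe"       # forward slashes also work on Windows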

Related

Not Able to Run GEM5 with RISC-V: "!seWorkload occurred: Couldn't find appropriate workload object"

I am trying to run gem5 with RISC-V. I have the Linux 64-bit cross compiler ready and I have also installed and compiled gem5. I then tried to use the following tutorial to run gem5: https://canvas.kth.se/courses/24933/pages/tutorial-simulating-a-cpu-with-gem5
I wrote a simple Hello World C program and compiled it using the following command:
riscv64-unknown-linux-gnu-gcc -c hello.c -static -Wall -O0 -o hello
But when I try to run gem5, I get the following error:
build/RISCV/sim/process.cc:137: fatal: fatal condition !seWorkload occurred: Couldn't find appropriate workload object.
I tried to get past this problem but could not. I added print statements to the configuration file and realized that the error occurs at the line m5.instantiate() in the configuration file attached below. Does anyone know how to solve this issue? What is an seWorkload, and why does gem5 consider the object not appropriate?
I am using Ubuntu 22.04. For reference, this is the configuration python file I use for gem5:
import m5
from m5.objects import *
import sys
system = System()
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]
system.cpu = TimingSimpleCPU()
system.membus = SystemXBar()
system.cpu.icache_port = system.membus.cpu_side_ports
system.cpu.dcache_port = system.membus.cpu_side_ports
system.mem_ctrl = MemCtrl()
system.mem_ctrl.dram = DDR3_1600_8x8()
system.mem_ctrl.dram.range = system.mem_ranges[0]
system.mem_ctrl.port = system.membus.mem_side_ports
# start a process
process = Process()
# read command line arguments for the path to the executable
process.cmd = [str(sys.argv[1])]
system.cpu.workload = process
system.cpu.createThreads()
root = Root(full_system = False, system = system)
m5.instantiate() # the error occurs from this line
print("Beginning simulation!")
exit_event = m5.simulate()
print('Exiting @ tick %i because %s' % (m5.curTick(), exit_event.getCause()))
m5.util.addToPath('../../') is missing. This adds gem5's common config scripts to the import path, relative to the directory from which you are instantiating the simulation.
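A minimal sketch of where the call would go, near the top of the configuration file; the common-scripts import afterwards is only a hypothetical illustration:
import m5
from m5.objects import *
from m5.util import addToPath
addToPath('../../')  # make gem5's configs/common directory importable
# from common import SimpleOpts  # hypothetical: imports from configs/common now resolve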

Docker images builds with Bazel on Apple M1

I set up a new project that uses Bazel to build/package my Python applications.
The project uses rules_python to set up a py_binary rule with my source files and rules_docker to set up a py_image rule to build my image.
I am successfully able to bazel run the py_binary by itself.
But when trying to run the py_image rule, it builds the image successfully but fails to run the binary entry point and throws the following error:
INFO: Analyzed target //demo:demo_img (110 packages loaded, 12496 targets configured).
INFO: Found 1 target...
Target //demo:demo_img up-to-date:
bazel-out/darwin_arm64-fastbuild-ST-9e3a93240a9e/bin/demo/demo_img-layer.tar
INFO: Elapsed time: 8.722s, Critical Path: 3.94s
INFO: 31 processes: 13 internal, 18 darwin-sandbox.
INFO: Build completed successfully, 31 total actions
INFO: Build completed successfully, 31 total actions
f83e52040704: Loading layer [==================================================>] 147.1MB/147.1MB
Loaded image ID: sha256:078152695f1056177bd21cd96171245f42f7415f5a1ff802b20fbd973eecddfd
Tagging 078152695f1056177bd21cd96171245f42f7415f5a1ff802b20fbd973eecddfd as bazel/demo:demo_img
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Traceback (most recent call last):
File "/app/demo/demo_img.binary", line 392, in <module>
Main()
File "/app/demo/demo_img.binary", line 382, in Main
os.execv(args[0], args)
OSError: [Errno 8] Exec format error: '/app/demo/demo_img.binary.runfiles/python3_8_aarch64-apple-darwin/bin/python3'
Taking a look at the generated image
docker run -it --entrypoint sh bazel/demo:demo_img
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
/app/demo/demo_img.binary.runfiles/__main__ # uname -a
Linux c94f44a24832 5.10.104-linuxkit #1 SMP PREEMPT Thu Mar 17 17:05:54 UTC 2022 aarch64 Linux
My current setup also uses a hermetic Python interpreter following this blog post: https://thethoughtfulkoala.com/posts/2020/05/16/bazel-hermetic-python.html
I am assuming that this problem exists due to the mismatch in OS type? The Python binary is built with an interpreter using apple/darwin, whereas the image is based on linux?
How do I configure py_image to build a binary for linux when developing on an M1 Macbook?
Appendix:
The following files are part of my sample project:
__main__.py
from flask import Flask

app = Flask(__name__)

@app.route("/", methods=["GET"])
def root():
    return "OK"

if __name__ == "__main__":
    app.run()
BUILD.bazel
load("#rules_python//python:defs.bzl", "py_binary")
load("#io_bazel_rules_docker//python3:image.bzl", py_image = "py3_image")
py_binary(
    name = "demo_bin",
    srcs = ["__main__.py"],
    imports = [".."],
    main = "__main__.py",
    visibility = ["//:__subpackages__"],
    deps = [
        "@python_deps_flask//:pkg",
    ],
)
container_image(
    name = "py_alpine_base",
    base = "@python-alpine//image",
    symlinks = {
        "/usr/bin/python": "/usr/local/bin/python",  # to work as base for py3_image
        "/usr/bin/python3": "/usr/local/bin/python3",  # to work as base for py3_image
    },
)
py_image(
    name = "demo_img",
    srcs = ["__main__.py"],
    base = "//:py_alpine_base",
    main = "__main__.py",
    deps = [
        "@python_deps_flask//:pkg",
    ],
)
Where python-alpine is defined in WORKSPACE. It references an arm64 image from dockerhub.
load("#io_bazel_rules_docker//container:container.bzl", "container_pull")
container_pull(
name = "python-alpine",
registry = "index.docker.io",
repository = "arm64v8/python",
tag = "3.8-alpine",
)
The error usually means that the instruction set of the container host doesn't match the instruction set of the container image that is trying to start.
Use --platform in the build command and specify linux/amd64.
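For example, with the image tag from the question (a sketch; whether linux/amd64 is the right target depends on which interpreter ends up inside the image):
docker run --platform linux/amd64 -it bazel/demo:demo_img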

How to include header files for Python extension in c (CPython)

I'm trying to write a Python extension that enables me to use my already working C library in Python (working on Raspbian and compiling with arm-linux-gnueabihf-gcc).
The setup.py compiles (with some warnings), but when I import the extension (bme) in my python3 interpreter I get the following error:
>>> import bme
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/python3.7/dist-packages/bme.cpython-37m-arm-linux-gnueabihf.so: undefined symbol: bsec_sensor_control
The bsec_sensor_control function is defined in the bsec_interface.h header.
I tried to include the header directly into the program via a relative path.
Here is my setup.py:
from distutils.core import setup, Extension

bme_module = Extension('bme',
    include_dirs = ['/usr/local/include',
                    '../BSEC_1.4.8.0_Generic_Release/algo/normal_version/bin/RaspberryPi/PiThree_ArmV6',
                    '../BSEC_1.4.8.0_Generic_Release/config/generic_33v_3s_28d',
                    '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example'],
    library_dirs = ['../BSEC_1.4.8.0_Generic_Release/algo/normal_version/bin/RaspberryPi/PiThree_ArmV6'],
    libraries = ['pthread', 'm', 'rt', 'wiringPi'],
    # extra_objects = ['../BSEC_1.4.8.0_Generic_Release/algo/normal_version/bin/RaspberryPi/PiThree_ArmV6/libalgobsec.a'],
    extra_compile_args = ['-fPIC'],
    depends = ['bsec_integration.h', 'bsec_interface.h', 'bsec_datatypes.h', 'bsec_integration.c', 'bme680.c', 'libalgobsec.a'],
    sources = ['Pybme.c', '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bsec_integration.c', '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bme680.c'])

setup(name = 'bme',
      version = '1.0',
      description = 'Provide BME68X and BSEC outputs for python',
      author = 'Nathan',
      # url='https://url/of/website',
      ext_modules = [bme_module],
      headers = ['../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bme680.h',
                 '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bme680_defs.h',
                 '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bsec_integration.h',
                 '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bsec_interface.h',
                 '../BSEC_1.4.8.0_Generic_Release/examples/bsec_iot_example/bsec_datatypes.h'])
Since I have already tried a lot of things, I would be very happy about some advice on how to fix this or how to figure out the problem.
Thanks,
Nathan
You need to link the object, archive or library that defines the symbol bsec_sensor_control. Bosch requires you to accept a license to download the library, so that is as far as I am willing to push it. They do provide build information on GitHub for BSEC-Arduino-library.
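A minimal sketch of what that could look like in the setup.py from the question, assuming libalgobsec.a (which provides bsec_sensor_control) lives in the directory already listed in library_dirs; either naming it in libraries or re-enabling the commented-out extra_objects line should resolve the symbol:
bme_module = Extension('bme',
    ...,  # include_dirs, sources, etc. as in the question
    library_dirs = ['../BSEC_1.4.8.0_Generic_Release/algo/normal_version/bin/RaspberryPi/PiThree_ArmV6'],
    libraries = ['pthread', 'm', 'rt', 'wiringPi', 'algobsec'],  # 'algobsec' links libalgobsec.a
    # or, equivalently, re-enable the commented-out line:
    # extra_objects = ['../BSEC_1.4.8.0_Generic_Release/algo/normal_version/bin/RaspberryPi/PiThree_ArmV6/libalgobsec.a'],
    )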

Including and distributing third party libraries with a Python C extension

I'm building a C Python extension which makes use of a "third party" library, in this case one that I've built using a separate build process and toolchain. Call this library libplumbus.dylib.
Directory structure would be:
grumbo/
  include/
    plumbus.h
  lib/
    libplumbus.so
  grumbo.c
  setup.py
My setup.py looks approximately like:
from setuptools import Extension, setup

native_module = Extension(
    'grumbo',
    define_macros = [('MAJOR_VERSION', '1'),
                     ('MINOR_VERSION', '0')],
    sources = ['grumbo.c'],
    include_dirs = ['include'],
    libraries = ['plumbus'],
    library_dirs = ['lib'])

setup(
    name = 'grumbo',
    version = '1.0',
    ext_modules = [native_module] )
Since libplumbus is an external library, when I run import grumbo I get:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: dlopen(/path/to/grumbo/grumbo.cpython-37m-darwin.so, 2): Library not loaded: lib/libplumbus.dylib
  Referenced from: /path/to/grumbo/grumbo.cpython-37m-darwin.so
  Reason: image not found
What's the simplest way to set things up so that libplumbus is included with the distribution and properly loaded when grumbo is imported? (Note that this should work with a virtualenv).
I have tried adding lib/libplumbus.dylib to package_data, but this doesn't work, even if I add -Wl,-rpath,@loader_path/grumbo/lib to the Extension's extra_link_args.
The goal of this post is to have a setup.py which creates a source distribution. That means that after running
python setup.py sdist
the resulting dist/grumbo-1.0.tar.gz could be used for installation via
pip install grumbo-1.0.tar.gz
We will start with a setup.py for Linux/MacOS, and then tweak it to make it work for Windows as well.
The first step is to get the additional data (includes/library) into the distribution. I'm not sure whether it is really impossible to add data for a plain module, but setuptools offers functionality to add data for packages, so let's make a package out of your module (which is probably a good idea anyway).
The new structure of package grumbo looks as follows:
src/
  grumbo/
    __init__.py   # empty
    grumbo.c
    include/
      plumbus.h
    lib/
      libplumbus.so
setup.py
and the changed setup.py:
from setuptools import setup, Extension, find_packages

native_module = Extension(
    name = 'grumbo.grumbo',
    sources = ["src/grumbo/grumbo.c"],
)

kwargs = {
    'name' : 'grumbo',
    'version' : '1.0',
    'ext_modules' : [native_module],
    'packages' : find_packages(where='src'),
    'package_dir' : {"": "src"},
}

setup(**kwargs)
It doesn't do much yet, but at least our package can be found by setuptools. The build fails because the includes are missing.
Now let's add the needed includes from the include folder to the distribution via package_data:
...
kwargs = {
    ...,
    'package_data' : { 'grumbo': ['include/*.h']},
}
...
With that, our include files are copied into the source distribution. However, because it will be built "somewhere" we don't know yet, adding include_dirs = ['include'] to the Extension definition just doesn't cut it.
There must be a better (and less brittle) way to find the right include path, but this is what I came up with:
...
import os
import sys
import sysconfig

def path_to_build_folder():
    """Returns the name of a distutils build directory."""
    f = "{dirname}.{platform}-{version[0]}.{version[1]}"
    dir_name = f.format(dirname='lib',
                        platform=sysconfig.get_platform(),
                        version=sys.version_info)
    return os.path.join('build', dir_name, 'grumbo')

native_module = Extension(
    ...,
    include_dirs = [os.path.join(path_to_build_folder(), 'include')],
)
...
Now the extension is built, but it cannot be loaded yet, because it is not linked against the shared object libplumbus.so and thus some symbols are unresolved.
Similarly to the header files, we can add our library to the distribution:
kwargs = {
    ...,
    'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so']},
}
...
and add the right lib-path for the linker:
...
native_module = Extension(
    ...
    libraries = ['plumbus'],
    library_dirs = [os.path.join(path_to_build_folder(), 'lib')],
)
...
Now we are almost there:
the extension is built and put into site-packages/grumbo/
the extension depends on libplumbus.so, as can be seen with the help of ldd (see the check below)
libplumbus.so is put into site-packages/grumbo/lib
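For instance, the dependency can be checked like this (a sketch; the exact file name of the extension depends on the Python version and platform):
ldd site-packages/grumbo/grumbo.*.so | grep plumbus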
However, we still cannot import the extension, as import grumbo.grumbo leads to
ImportError: libplumbus.so: cannot open shared object file: No such file or directory
because the loader cannot find the needed shared object, which resides in the folder ./lib relative to our extension. We could use rpath to "help" the loader:
...
native_module = Extension(
    ...
    extra_link_args = ["-Wl,-rpath=$ORIGIN/lib/."],
)
...
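To double-check that the rpath really ended up in the built extension (on Linux), something along these lines should show an RPATH or RUNPATH entry; the file name is again platform-dependent:
readelf -d site-packages/grumbo/grumbo.*.so | grep -E 'RPATH|RUNPATH'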
And now we are done:
>>> import grumbo.grumbo
# works!
Also building and installing a wheel should work:
python setup.py bdist_wheel
and then:
pip install grumbo-1.0-xxxx.whl
The first milestone is achieved. Now we extend it so that it works on other platforms as well.
Same source distribution for Linux and MacOS:
To be able to install the same source distribution on Linux and MacOS, both versions of the shared library (for Linux and MacOS) must be present. One option is to add a suffix to the names of the shared objects, e.g. having libplumbus.linux.so and libplumbus.macos.so. The right shared object can be picked in setup.py depending on the platform:
...
import platform

def pick_library():
    my_system = platform.system()
    if my_system == 'Linux':
        return "plumbus.linux"
    if my_system == 'Darwin':
        return "plumbus.macos"
    if my_system == 'Windows':
        return "plumbus"
    raise ValueError("Unknown platform: " + my_system)

native_module = Extension(
    ...
    libraries = [pick_library()],
    ...
)
Tweaking for Windows:
On Windows, dynamic libraries are DLLs and not shared objects, so there are some differences that need to be taken into account:
when the C extension is built, it needs the plumbus.lib file, which we need to put into the lib subfolder.
when the C extension is loaded at run time, it needs the plumbus.dll file.
Windows has no notion of rpath, thus we need to put the dll right next to the extension so it can be found (see also this SO post for more details).
That means the folder structure should be as follows:
src/
  grumbo/
    __init__.py
    grumbo.c
    plumbus.dll            # needed for Windows
    include/
      plumbus.h
    lib/
      libplumbus.linux.so  # needed on Linux
      libplumbus.macos.so  # needed on MacOS
      plumbus.lib          # needed on Windows
setup.py
There are also some changes in the setup.py. First, extend package_data so the dll and lib files are picked up:
...
kwargs = {
    ...
    'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so',
                                  'lib/*.lib', '*.dll',  # for Windows
                                 ]},
}
...
Second, rpath can only be used on Linux/MacOS, thus:
def get_extra_link_args():
    if platform.system() == 'Windows':
        return []
    else:
        return ["-Wl,-rpath=$ORIGIN/lib/."]

native_module = Extension(
    ...
    extra_link_args = get_extra_link_args(),
)
That's it!
The complete setup file (you might want to add macro definitions or similar, which I've skipped):
from setuptools import setup, Extension, find_packages
import os
import sys
import sysconfig
import platform

def path_to_build_folder():
    """Returns the name of a distutils build directory."""
    f = "{dirname}.{platform}-{version[0]}.{version[1]}"
    dir_name = f.format(dirname='lib',
                        platform=sysconfig.get_platform(),
                        version=sys.version_info)
    return os.path.join('build', dir_name, 'grumbo')

def pick_library():
    my_system = platform.system()
    if my_system == 'Linux':
        return "plumbus.linux"
    if my_system == 'Darwin':
        return "plumbus.macos"
    if my_system == 'Windows':
        return "plumbus"
    raise ValueError("Unknown platform: " + my_system)

def get_extra_link_args():
    if platform.system() == 'Windows':
        return []
    else:
        return ["-Wl,-rpath=$ORIGIN/lib/."]

native_module = Extension(
    name = 'grumbo.grumbo',
    sources = ["src/grumbo/grumbo.c"],
    include_dirs = [os.path.join(path_to_build_folder(), 'include')],
    libraries = [pick_library()],
    library_dirs = [os.path.join(path_to_build_folder(), 'lib')],
    extra_link_args = get_extra_link_args(),
)

kwargs = {
    'name' : 'grumbo',
    'version' : '1.0',
    'ext_modules' : [native_module],
    'packages' : find_packages(where='src'),
    'package_dir' : {"": "src"},
    'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so',
                                  'lib/*.lib', '*.dll',  # for Windows
                                 ]},
}

setup(**kwargs)

What does "Error 309" mean?

In our build we're creating an executable file with unit tests like this:
tests = env.Program(os.path.join(env['testDir'], name + '_test'),
                    src + createManifest(env),
                    LIBS = libs,
                    LIBPATH = buildLibPath(env),
                    LINKFLAGS = env['LINKFLAGS'],
                    CPPPATH = cppPath)
This correctly creates an executable, which is later run by the following builder:
action = tests[0].abspath + '&& echo %DATE% %TIME% > ${TARGET}'
runTests = env.Command(source = tests,
                       target = 'test_' + name + '.tmp',
                       action = action)
Up to this point everything works fine: the tests are being run during the build.
I recently found the Visual Leak Detector tool and wanted to include it in the build. So I changed the environment for the builders like this:
vldInclude = os.path.join(os.path.normpath(env['vldIncDir']), 'vld.h')
env.Append(CPPFLAGS='/FI' + vldInclude)
env.Append(LIBPATH = env['vldLibDir'])
vldLib = os.path.join(env['vldLibDir'], 'vld.lib')
libs.append(vldLib) # used in the Program call for the LIBS parameter, see above
With these changes in place, the build fails with:
scons: *** [build\debug\libname\test_libname.dummy] Error 309
This error message isn't very helpful. What does it mean, and how do I fix it?
It turns out that the magic number 309 is more googleable when written as 0xC0000135 (no idea why the C, but 0x135 == 309 in decimal), and it is the identifier of the STATUS_DLL_NOT_FOUND error.
So it's not a SCons error, but a Windows error that leaks through SCons.
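The conversion is easy to check from a Python prompt:
>>> hex(309)
'0x135'
>>> 0xC0000135 & 0xFFFF
309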
This means that some DLLs needed by VLD are missing. Looking into the VLD installation directory (usually C:\Program Files (x86)\Visual Leak Detector), two DLL files and one manifest file can be found in the bin\Win32 subdirectory.
To avoid making the build dependent on the machine's environment, you can either add that directory to env['ENV']['PATH'] or copy the files to the directory where the tests are run.
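For the former, a one-line sketch using SCons's AppendENVPath, assuming a vldBinDir option that holds VLD's bin\Win32 path:
env.AppendENVPath('PATH', env['vldBinDir'])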
To do the latter:
You need another VLD configuration option, besides the library directory, namely the binaries directory. Let's call it vldBinDir. At the build's startup you can copy these files to the build directory:
import os
import SCons.Defaults  # provides the Copy action used below

def setupVld(env):
    sourcePath = env['vldBinDir']
    targetPath = env['testDir']
    toCopy = ['dbghelp.dll',
              'vld_x86.dll',
              'Microsoft.DTfW.DHL.manifest']
    nodes = []
    for c in toCopy:
        n = env.Command(os.path.join(targetPath, c),
                        os.path.join(sourcePath, c),
                        SCons.Defaults.Copy("${TARGET}", "${SOURCE}"))
        nodes.append(n)
    env['vldDeps'] = nodes
And then, when creating particular tests, make sure to add the dependency:
for n in env['vldDeps']:
    env.Depends(tests, n)
