I am working on macOS Monterey 12.6, running Python 3.9.9 with Poetry 1.2.1. I'm trying to install all dependencies for my project from pyproject.toml with 'poetry install', and I keep getting the message:
The Poetry config is invalid:
- [description] ' Tombstone is basically composed of a web server process (apiv2 web\n
server) and a celery worker process\n
(aiworker) for running async/background tasks outside\n
of a web request context.\n'
does not match '^[^\n]*$'
This doesn't give me much to go on as far as debugging is concerned. The description is just line 4 of my pyproject.toml.
I have tried uninstalling Poetry and reinstalling it using Homebrew. I have also tried moving the Poetry config to a different location on my system, but I keep getting this error.
Here is the pyproject.toml:
[tool.poetry]
authors =
description = """
Tombstone is basically composed of a web server process (apiv2 web
server) and a celery worker process
(aiworker) for running async/background tasks outside
of a web request context.
"""
name = "tombstone"
version = "1.0.0"
[[tool.poetry.source]]
name = "tombstone-common-package"
secondary = true
url =
[tool.poetry.dependencies]
Python = ">=3.9,<3.10"
analytics-python = "1.3.1"
arpeggio = "1.10.2"
auth0-python = "3.16.0"
awscli = "1.19.106"
billiard = "3.6.4.0"
boto3 = "1.17.106"
cachecontrol = "0.12.8"
celery = {extras = ["redis", "zstd"], version = "4.4.7"}
celery-slack = "0.4.1"
cffi = "1.15.0"
click = "7.1.2"
dask = "2021.10.0"
distributed = "2021.10.0"
factory-boy = "3.2.1"
faiss-cpu = {version = "1.7.1.post3", markers = "sys_platform == 'darwin' or platform_machine == 'aarch64'"}
faiss-gpu = {version = "1.7.1.post2", markers = "sys_platform == 'linux' and platform_machine == 'x86_64'"}
faker = "8.2.1"
Flask = "2.0.2"
flask-admin = "1.5.8"
flask-caching = "1.10.1"
flask-cors = "3.0.10"
flask-jwt = "0.3.2"
Flask-RESTful = "0.3.9"
flask-restful-swagger-2 = "0.35"
flask-restplus = "0.13.0"
flask-sqlalchemy = "2.5.1"
flower = "0.9.4"
fs = "2.4.13"
fs-s3fs = "1.1.1"
gensim = "4.1.2"
google-auth = "2.3.3"
grpcio = "1.34.1"
gunicorn = "20.1.0"
joblib = "1.1.0"
json-log-formatter = "0.4.0"
marshmallow = "3.14.0"
marshmallow-sqlalchemy = "0.26.1"
matplotlib = "3.4.3"
networkx = "2.6.3"
nltk = "3.6.5"
numba = "0.54.1"
numpy = "1.19.5"
openpyxl = "3.0.9"
packaging = "20.9"
pandas = "1.2.5"
pika = "1.2.0"
prometheus-client = "0.12.0"
prometheus-flask-exporter = "0.18.5"
psycopg2-binary = "2.9.1"
pyarrow = "4.0.1"
pydash = "4.9.3"
pyjwt = "1.4.2"
python-dateutil = "2.8.2"
python-dotenv = "0.19.1"
python-jose = "3.3.0"
pytz = "2021.3"
pyyaml = "5.4.1"
requests = "2.26.0"
s3fs = "0.6.0"
scikit-learn = "1.0.1"
scipy = "1.7.1"
sendgrid = "6.8.3"
seqeval = "1.2.2"
simplejson = "3.17.5"
smart-open = "5.1.0"
sqlalchemy = "1.4.26"
tabulate = "0.8.9"
tensorboard = "2.7.0"
tensorflow = {version = "2.5.1", markers = "platform_machine == 'x86_64'"}
tombstone-common = "0.0.7"
tzlocal = "2.1"
# Test dependencies. Currently, poetry doesn't support dependency groups in the
# requirements, so we need to use the extras feature. If adding a new test
# dependency REMEMBER TO ADD an entry on [tool.poetry.extras]
bandit = {version = "1.7.0", optional = true}
coverage = {version = "5.5", optional = true}
flake8 = {version = "3.9.2", optional = true}
flake8_formatter_junit_xml = {version = "0.0.6", optional = true}
pylint = {version = "2.8.3", optional = true}
pylint_junit = {version = "0.3.2", optional = true}
pytest = {version = "6.2.5", optional = true}
pytest-cov = {version = "2.12.1", optional = true}
pytest-factoryboy = {version = "2.1.0", optional = true}
pytest-flask = {version = "1.2.0", optional = true}
pytest-flask-sqlalchemy = {version = "1.0.2", optional = true}
pytest-html = {version = "3.1.1", optional = true}
pytest-mock = {version = "3.6.1", optional = true}
pytest-test-groups = {version = "1.0.3", optional = true}
pytype = {version = "2021.10.25", optional = true}
safety = {version = "1.10.3", optional = true}
easypost = "^5.1.3"
marshmallow-dataclass = "8.1.0"
ddtrace = "^1.0.0"
blinker = "^1.4"
mapbox = "^0.18.0"
geopy = "^2.2.0"
[tool.poetry.extras]
# Currently, poetry doesn't support dependency groups in the requirements, so we
# need to use the extras feature. Keep an eye on poetry 1.2 that will implement
# this feature
tests = [
"bandit",
"coverage",
"flake8",
"flake8_formatter_junit_xml",
"pylint",
"pylint_junit",
"pytest",
"pytest-cov",
"pytest-factoryboy",
"pytest-flask",
"pytest-flask-sqlalchemy",
"pytest-html",
"pytest-mock",
"pytest-test-groups",
"pytype",
"safety",
]
[tool.poetry.dev-dependencies]
# Add here the dependencies for local development
autopep8 = "1.5.7"
flask-shell-ptpython = "1.0.2"
honcho = "1.1.0"
ipython = "7.20.0"
pip-check-reqs = "2.1.1"
prompt-toolkit = "3.0.21"
pyspellchecker = "0.6.2"
pytest-watch = "4.2.0"
setuptools = "51.0.0"
wheel = "0.36.2"
[build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core>=1.0.0"]
A multi-line string is not valid for the description field. The error message does not match '^[^\n]*$' is telling you that the value must not contain any newline characters.
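For example, the description from the pyproject.toml above can be collapsed onto a single line (the same text, just reflowed):
description = "Tombstone is basically composed of a web server process (apiv2 web server) and a celery worker process (aiworker) for running async/background tasks outside of a web request context."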
I have a monorepo that contains a set of Python AWS Lambdas, and I'm using Bazel for building and packaging them. I'm now trying to use Bazel to create a zip file that follows the expected AWS Lambda packaging layout and that I can upload to Lambda. Wondering what's the best way to do this with Bazel?
Below are a few different things I've tried thus far:
Attempt 1: py_binary
BUILD.bazel
py_binary(
name = "main_binary",
srcs = glob(["*.py"]),
main = "main.py",
visibility = ["//appcode/api/transaction_details:__subpackages__"],
deps = [
requirement("Faker"),
],
)
Problem:
This generates the following:
main_binary (python executable)
main_binary.runfiles
main_binary.runfiles_manifest
Lambda expects the handler to be in the format lambda_function.lambda_handler. Since main_binary is an executable rather than a Python file, it doesn't expose the actual handler method, and the Lambda blows up because it can't find it. I tried updating the handler configuration to simply point to main_binary, but it blows up because Lambda expects the two-part module.function form (i.e. lambda_function.lambda_handler).
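For reference, a minimal sketch of what the handler string lambda_function.lambda_handler points at (these names are AWS defaults, not anything generated by the Bazel rules above):
# lambda_function.py
# Lambda imports this module and calls lambda_handler(event, context).
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "ok"}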
Attempt 2: py_library + pkg_zip
BUILD.bazel
py_library(
name = "main",
srcs = glob(["*.py"]),
visibility = ["//appcode/api/transaction_details:__subpackages__"],
deps = [
requirement("Faker"),
],
)
pkg_zip(
name = "main_zip",
srcs =["//appcode/api/transaction_details/src:main" ],
)
Problem:
This generates a zip file with:
main.py
__init__.py
The zip file now includes main.py but none of its runtime dependencies, so the lambda blows up because it can't find Faker.
Other Attempts:
I've also tried using the --build_python_zip flag as well as the @bazel_tools//tools/zip:zipper with a generic rule but they both lead to similar outcomes as the two previous attempts.
We use @bazel_tools//tools/zip:zipper with a custom rule. We also pull serverless in using rules_nodejs and run it through bazel, which causes the package building to happen prior to running sls deploy.
We use pip_parse from rules_python. I'm not sure whether the _short_path function below will work with pip_install or other mechanisms.
File filtering is supported, although it's awkward. Ideally the zip generation would be handled by a separate binary (i.e., a Python script) which would allow filtering using regular expressions/globs/etc. Bazel doesn't support regular expressions in Starlark, so we use our own thing.
I've included an excerpt:
lambda.bzl
"""
Support for serverless deployments.
"""
def contains(pattern):
return "contains:" + pattern
def startswith(pattern):
return "startswith:" + pattern
def endswith(pattern):
return "endswith:" + pattern
def _is_ignored(path, patterns):
for p in patterns:
if p.startswith("contains:"):
if p[len("contains:"):] in path:
return True
elif p.startswith("startswith:"):
if path.startswith(p[len("startswith:"):]):
return True
elif p.startswith("endswith:"):
if path.endswith(p[len("endswith:"):]):
return True
else:
fail("Invalid pattern: " + p)
return False
def _short_path(file_):
# Remove prefixes for external and generated files.
# E.g.,
# ../py_deps_pypi__pydantic/pydantic/__init__.py -> pydantic/__init__.py
short_path = file_.short_path
if short_path.startswith("../"):
second_slash = short_path.index("/", 3)
short_path = short_path[second_slash + 1:]
return short_path
def _py_lambda_zip_impl(ctx):
deps = ctx.attr.target[DefaultInfo].default_runfiles.files
f = ctx.outputs.output
args = []
for dep in deps.to_list():
short_path = _short_path(dep)
# Skip ignored patterns
if _is_ignored(short_path, ctx.attr.ignore):
continue
args.append(short_path + "=" + dep.path)
ctx.actions.run(
outputs = [f],
inputs = deps,
executable = ctx.executable._zipper,
arguments = ["cC", f.path] + args,
progress_message = "Creating archive...",
mnemonic = "archiver",
)
out = depset(direct = [f])
return [
DefaultInfo(
files = out,
),
OutputGroupInfo(
all_files = out,
),
]
_py_lambda_zip = rule(
implementation = _py_lambda_zip_impl,
attrs = {
"target": attr.label(),
"ignore": attr.string_list(),
"_zipper": attr.label(
default = Label("@bazel_tools//tools/zip:zipper"),
cfg = "host",
executable = True,
),
"output": attr.output(),
},
executable = False,
test = False,
)
def py_lambda_zip(name, target, ignore, **kwargs):
_py_lambda_zip(
name = name,
target = target,
ignore = ignore,
output = name + ".zip",
**kwargs
)
BUILD.bazel
load("#npm_serverless//serverless:index.bzl", "serverless")
load(":lambda.bzl", "contains", "endswith", "py_lambda_zip", "startswith")
py_binary(
name = "my_lambda_app",
...
)
py_lambda_zip(
name = "lambda_archive",
ignore = [
contains("/__pycache__/"),
endswith(".pyc"),
endswith(".pyo"),
# Ignore boto since it's provided by Lambda.
startswith("boto3/"),
startswith("botocore/"),
# With the move to hermetic toolchains, the zip gets a lib/ directory containing the
# python runtime. We don't need that.
startswith("lib/"),
],
target = ":my_lambda_app",
# Only allow building on linux, since we don't want to upload a lambda zip file
# with e.g. macos compiled binaries.
target_compatible_with = [
"#platforms//os:linux",
],
)
# The sls command requires that serverless.yml be in its working directory, and that the yaml file
# NOT be a symlink. So this target builds a directory containing a copy of serverless.yml, and also
# symlinks the generated lambda_archive.zip in the same directory.
#
# It also generates a chdir.js script that we instruct node to execute to change to the proper working directory.
genrule(
name = "sls_files",
srcs = [
"lambda_archive.zip",
"serverless.yml",
],
outs = [
"sls_files/lambda_archive.zip",
"sls_files/serverless.yml",
"sls_files/chdir.js",
],
cmd = """
mkdir -p $(@D)/sls_files
cp $(location serverless.yml) $(@D)/sls_files/serverless.yml
cp -P $(location lambda_archive.zip) $(@D)/sls_files/lambda_archive.zip
echo "const fs = require('fs');" \
"const path = require('path');" \
"process.chdir(path.dirname(fs.realpathSync(__filename)));" > $(@D)/sls_files/chdir.js
""",
)
# Usage:
# bazel run //:sls -- deploy <more args>
serverless(
name = "sls",
args = ["""--node_options=--require=./$(location sls_files/chdir.js)"""],
data = [
"sls_files/chdir.js",
"sls_files/serverless.yml",
"sls_files/lambda_archive.zip",
],
)
serverless.yml
service: my-app
package:
artifact: lambda_archive.zip
# ... other config ...
Below are the changes I made to the previous answer to generate the lambda zip. Thanks @jvolkman for the original suggestion.
project/BUILD.bazel: Added rule to generate requirements_lock.txt from project/requirements.txt
load("#rules_python//python:pip.bzl", "compile_pip_requirements")
compile_pip_requirements(
name = "requirements",
extra_args = ["--allow-unsafe"],
requirements_in = "requirements.txt",
requirements_txt = "requirements_lock.txt",
)
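If I remember correctly, compile_pip_requirements also defines a requirements.update target that regenerates the lock file, so with the name above the lock file is refreshed with:
bazel run //:requirements.update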
project/WORKSPACE.bazel: swapped pip_install for pip_parse
workspace(name = "mdc-eligibility")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_python",
sha256 = "9fcf91dbcc31fde6d1edb15f117246d912c33c36f44cf681976bd886538deba6",
strip_prefix = "rules_python-0.8.0",
url = "https://github.com/bazelbuild/rules_python/archive/refs/tags/0.8.0.tar.gz",
)
load("#rules_python//python:repositories.bzl", "python_register_toolchains")
python_register_toolchains(
name = "python3_9",
python_version = "3.9",
)
load("#rules_python//python:pip.bzl", "pip_parse")
load("#python3_9//:defs.bzl", "interpreter")
pip_parse(
name = "mndc-eligibility-deps",
requirements_lock = "//:requirements_lock.txt",
python_interpreter_target = interpreter,
quiet = False
)
load("#mndc-eligibility-deps//:requirements.bzl", "install_deps")
install_deps()
project/build_rules/lambda_packaging/lambda.bzl: Modified the custom rule provided by @jvolkman to include source code in the resulting zip file.
def contains(pattern):
return "contains:" + pattern
def startswith(pattern):
return "startswith:" + pattern
def endswith(pattern):
return "endswith:" + pattern
def _is_ignored(path, patterns):
for p in patterns:
if p.startswith("contains:"):
if p[len("contains:"):] in path:
return True
elif p.startswith("startswith:"):
if path.startswith(p[len("startswith:"):]):
return True
elif p.startswith("endswith:"):
if path.endswith(p[len("endswith:"):]):
return True
else:
fail("Invalid pattern: " + p)
return False
def _short_path(file_):
# Remove prefixes for external and generated files.
# E.g.,
# ../py_deps_pypi__pydantic/pydantic/__init__.py -> pydantic/__init__.py
short_path = file_.short_path
if short_path.startswith("../"):
second_slash = short_path.index("/", 3)
short_path = short_path[second_slash + 1:]
return short_path
# steven chambers
def _py_lambda_zip_impl(ctx):
deps = ctx.attr.target[DefaultInfo].default_runfiles.files
f = ctx.outputs.output
args = []
for dep in deps.to_list():
short_path = _short_path(dep)
# Skip ignored patterns
if _is_ignored(short_path, ctx.attr.ignore):
continue
args.append(short_path + "=" + dep.path)
# MODIFICATION: Added source files to the map of files to zip
source_files = ctx.attr.target[DefaultInfo].files
for source_file in source_files.to_list():
args.append(source_file.basename+"="+source_file.path)
ctx.actions.run(
outputs = [f],
inputs = deps,
executable = ctx.executable._zipper,
arguments = ["cC", f.path] + args,
progress_message = "Creating archive...",
mnemonic = "archiver",
)
out = depset(direct = [f])
return [
DefaultInfo(
files = out,
),
OutputGroupInfo(
all_files = out,
),
]
_py_lambda_zip = rule(
implementation = _py_lambda_zip_impl,
attrs = {
"target": attr.label(),
"ignore": attr.string_list(),
"_zipper": attr.label(
default = Label("@bazel_tools//tools/zip:zipper"),
cfg = "host",
executable = True,
),
"output": attr.output(),
},
executable = False,
test = False,
)
def py_lambda_zip(name, target, ignore, **kwargs):
_py_lambda_zip(
name = name,
target = target,
ignore = ignore,
output = name + ".zip",
**kwargs
)
project/appcode/api/transaction_details/src/BUILD.bazel: Used custom py_lambda_zip rule to zip up py_library
load("#mndc-eligibility-deps//:requirements.bzl", "requirement")
load("#python3_9//:defs.bzl", "interpreter")
load("//build_rules/lambda_packaging:lambda.bzl", "contains", "endswith", "py_lambda_zip", "startswith")
py_library(
name = "main",
srcs = glob(["*.py"]),
visibility = ["//appcode/api/transaction_details:__subpackages__"],
deps = [
requirement("Faker"),
],
)
py_lambda_zip(
name = "lambda_archive",
ignore = [
contains("/__pycache__/"),
endswith(".pyc"),
endswith(".pyo"),
# Ignore boto since it's provided by Lambda.
startswith("boto3/"),
startswith("botocore/"),
# With the move to hermetic toolchains, the zip gets a lib/ directory containing the
# python runtime. We don't need that.
startswith("lib/"),
],
target = ":main",
)
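With this in place, the archive should come out of a plain build of the target, e.g. (package path assumed from the BUILD file location above):
bazel build //appcode/api/transaction_details/src:lambda_archive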
Hello Stack Overflow friends!
I'm trying to use this conversion cloud module (https://github.com/groupdocs-conversion-cloud/groupdocs-conversion-cloud-python), but it's returning the module error: ModuleNotFoundError: No module named 'groupdocs_conversion_cloud'
I've installed the package through the terminal with no problem, yet the error keeps returning.
Here's my code:
import groupdocs_conversion_cloud
# Get your app_sid and app_key at https://dashboard.groupdocs.cloud (free registration is required).
app_sid = "3e9***ca"
app_key = "f0***ad407e1"
# Create instance of the API
convert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(app_sid, app_key)
file_api = groupdocs_conversion_cloud.FileApi.from_keys(app_sid, app_key)
try:
# upload source file to storage
filename = 'FT_Manteiga virgem de cacau.pdf'
remote_name = 'FT_Manteiga virgem de cacau.pdf'
output_name= 'FT_Manteiga virgem de cacau.docx'
strformat='docx'
request_upload = groupdocs_conversion_cloud.UploadFileRequest(remote_name,filename)
response_upload = file_api.upload_file(request_upload)
#Convert PDF to Word document
settings = groupdocs_conversion_cloud.ConvertSettings()
settings.file_path =remote_name
settings.format = strformat
settings.output_path = output_name
loadOptions = groupdocs_conversion_cloud.PdfLoadOptions()
loadOptions.hide_pdf_annotations = True
loadOptions.remove_embedded_files = False
loadOptions.flatten_all_fields = True
settings.load_options = loadOptions
convertOptions = groupdocs_conversion_cloud.DocxConvertOptions()
convertOptions.from_page = 1
convertOptions.pages_count = 1
settings.convert_options = convertOptions
request = groupdocs_conversion_cloud.ConvertDocumentRequest(settings)
response = convert_api.convert_document(request)
print("Document converted successfully: " + str(response))
except groupdocs_conversion_cloud.ApiException as e:
print("Exception when calling get_supported_conversion_types: {0}".format(e.message))
I've installed this lib using pip install groupdocs_conversion_cloud
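A common cause of this symptom is that pip installed the package into a different interpreter than the one running the script. A quick way to check, using only standard python/pip commands (a diagnostic sketch, not specific to this library):
python -c "import sys; print(sys.executable)"
python -m pip show groupdocs_conversion_cloud
# If the two disagree, reinstall against the interpreter you actually run:
python -m pip install groupdocs_conversion_cloud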
I am using confluent-kafka and I need to serialize my keys as strings and produce some messages. I have a working code for the case where I retrieve the schema from the schema registry and use it to produce a message. The problem is that it fails when I am trying to read the schema from a local file instead.
The code below is the working one for the schema registry:
import argparse
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka import SerializingProducer
import avro.schema
SCHEMA_HOST = '192.168.40.10'
TOPIC = 'my_topic'
SCHEMA = 'path/to/schema.avsc'
# Just parse arguments
parser = argparse.ArgumentParser(description="Avro Kafka Generator")
parser.add_argument('--schema_registry_host', default=SCHEMA_HOST, help="schema registry host")
parser.add_argument('--schema', type=str, default=SCHEMA, help="schema to produce under")
parser.add_argument('--topic', type=str, default=TOPIC, help="topic to publish to")
parser.add_argument('--frequency', type=float, default=1.0, help="number of message per second")
args = parser.parse_args()
# Actual code
schema_registry_conf = {'url': "http://{}:8081".format(SCHEMA_HOST)}
schema_registry_client = SchemaRegistryClient(schema_registry_conf)
schema = schema_registry_client.get_latest_version(subject_name=TOPIC + "-value")
# schema = schema_registry_client.get_schema(schema.schema_id)
schema = schema_registry_client.get_schema(schema.schema_id)
schema_str = schema.schema_str
pro_conf = {"auto.register.schemas": False}
avro_serializer = AvroSerializer(schema_registry_client=schema_registry_client, schema_str=schema_str, conf=pro_conf)
conf = {'bootstrap.servers': "{}:9095".format(args.schema_registry_host),
'schema.registry.url': "http://{}:8081".format(args.schema_registry_host)}
# avro_producer = AvroProducer(conf, default_value_schema=value_schema)
producer_conf = {'bootstrap.servers': "{}:9095".format(SCHEMA_HOST),
'key.serializer': StringSerializer('utf_8'),
'value.serializer': avro_serializer}
avro_producer = SerializingProducer(producer_conf)
But when I try to use a variation for the local file, it fails:
# Read schema from local file
value_schema = avro.schema.Parse(open(args.schema, "r").read())
schema_str = open(args.schema, "r").read().replace(' ', '').replace('\n', '')
pro_conf = {"auto.register.schemas": True}
avro_serializer = AvroSerializer(schema_registry_client=schema_registry_client, schema_str=schema_str, conf=pro_conf)
This part is common to both versions:
producer_conf = {'bootstrap.servers': "{}:9095".format(SCHEMA_HOST),
'key.serializer': StringSerializer('utf_8'),
'value.serializer': avro_serializer}
avro_producer = SerializingProducer(producer_conf)
avro_producer.produce(topic=args.topic, value=message)
The error I am getting is the following:
KafkaError{code=_VALUE_SERIALIZATION,val=-161,str="'RecordSchema'
object has no attribute 'lookup_schema'"}
Obviously, it's not the best approach, and I guess even if it worked the code would be ugly and error-prone. But it doesn't even work, so I need some help on how I could read a local schema and use the AvroSerializer afterwards.
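For what it's worth, a minimal sketch of the local-file variant, assuming AvroSerializer accepts the schema as its raw JSON string (which would make both the avro.schema.Parse step and the whitespace stripping unnecessary):
# Read the .avsc file as a plain JSON string; AvroSerializer parses it itself.
with open(args.schema, "r") as f:
    schema_str = f.read()
avro_serializer = AvroSerializer(
    schema_registry_client=schema_registry_client,
    schema_str=schema_str,
    conf={"auto.register.schemas": True},
)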
I'm experimenting with converting a makefile from another build system to waf.
I'm trying to direct waf to the directory containing the necessary dlls.
However, when running waf configure:
Checking for library libiconv2 : not found
It can't find the required library.
Directory structure:
project/
| build/
| inc/
| | XGetopt.h
| | common.h
| | define.h
| | libpst.h
| | libstrfunc.h
| | lzfu.h
| | msg.h
| | timeconv.h
| | vbuf.h
| libs/
| | libiconv2.dll
| | regex2.dll
| src/
| | XGetopt.c
| | debug.c
| | dumpblocks.c
| | getidblock.c
| | libpst.c
| | libstrfunc.c
| | lspst.c
| | lzfu.c
| | readpst.c
| | timeconv.c
| | vbuf.c
| | deltasearch.cpp
| | msg.cpp
| | nick2ldif.cpp
| | pst2dii.cpp
| | pst2ldif.cpp
| | wscript_build
| waf-1.7.10
| wscript
top-level wscript:
#! /usr/bin/env python
VERSION = "0.1"
APPNAME = "readpst"
top = "." # The topmost directory of the waf project
out = "build/temp" # The build directory of the waf project
import os
from waflib import Build
from waflib import ConfigSet
from waflib import Logs
# Variant memory variables
var_path = out + "/variant.txt" # The variant memory file path
default_variant = "debug" # The default if no variant is stored
stored_variant = ""
def options(opt):
'''
A script hook function that defines additional switch options for the build.
'''
opt.load("compiler_cxx")
def configure(cfg):
'''
A script hook function that configures the build environment.
'''
cfg.load("compiler_cxx")
cfg.find_program("strip")
cfg.env.PREFIX = "."
cfg.env.DEFINES = ["WAF=1"]
cfg.env.FEATURES = [] # Additional features
cfg.env.LIBPATH = [os.path.join(os.getcwd(), "libs")]
print cfg.env.LIBPATH
cfg.define("VERSION", VERSION)
base_env = cfg.env
# Compiler checks
cfg.check_large_file(mandatory = False)
cfg.check_inline()
# Check for the existence and function of specific headers
cfg.check(header_name = "stdint.h")
cfg.check(header_name = "stdio.h")
cfg.check(compiler="cxx", uselib_store="LIBICONV2", mandatory=True, lib="libiconv2")
# Define the debug build environment
cfg.setenv("debug", env = base_env.derive())
cfg.env.CFLAGS = ["-g"]
cfg.define("DEBUG", 1)
cfg.write_config_header("/debug/inc/config.h")
# Define the release build environment
cfg.setenv("release", env = base_env.derive())
cfg.env.CFLAGS = ["-O2"]
cfg.env.FEATURES = ["strip"]
cfg.define("RELEASE", 1)
cfg.write_config_header("/release/inc/config.h")
def pre(ctx):
'''
A callback for before build task start.
'''
print "Starting %sbuild" % (("%s " % ctx.variant) if(ctx.variant) else "")
if ctx.cmd == "install":
print "Installing"
def post(ctx):
'''
A callback for after build task finish.
'''
global var_path
print "Finished %sbuild" % (("%s " % ctx.variant) if(ctx.variant) else "")
env = ConfigSet.ConfigSet()
env.stored_variant = ctx.variant
env.store(var_path)
def build(bld):
'''
A script hook function that specifies the build behaviour.
'''
bld.add_pre_fun(pre)
bld.add_post_fun(post)
bld.recurse\
(
[
"src"
]
)
if bld.cmd != "clean":
bld.logger = Logs.make_logger("test.log", "build") # just to get a clean output
def dist(ctx):
'''
A script hook function that specifies the packaging behaviour.
'''
ctx.base_name = "_".join([APPNAME, VERSION])
ctx.algo = "zip"
file_ex_patterns = \
[
out + "/**",
"**/.waf-1*",
"**/*~",
"**/*.pyc",
"**/*.swp",
"**/.lock-w*"
]
file_in_patterns = \
[
"**/wscript*",
"**/*.h",
"**/*.c",
"**/*.cpp",
"**/*.txt",
]
ctx.files = ctx.path.ant_glob(incl = file_in_patterns, excl = file_ex_patterns)
def set_variant():
'''
A function that facilitates dynamic changing of the Context classes variant member.
It retrieves the stored variant, if one exists, otherwise the default.
'''
global default_variant
global stored_variant
global var_path
env = ConfigSet.ConfigSet()
try:
env.load(var_path)
except:
stored_variant = default_variant
else:
if(env.stored_variant):
stored_variant = env.stored_variant
print "Resuming %s variant" % stored_variant
else:
stored_variant = default_variant
def get_variant():
'''
A function that facilitates dynamic changing of the Context classes variant member.
It sets the variant, if undefined, and returns.
'''
global stored_variant
if(not stored_variant):
set_variant()
return stored_variant
class release(Build.BuildContext):
'''
A class that provides the release build.
'''
cmd = "release"
variant = "release"
class debug(Build.BuildContext):
'''
A class that provides the debug build.
'''
cmd = "debug"
variant = "debug"
class default_build(Build.BuildContext):
'''
A class that provides the default variant build.
This is set to debug.
'''
variant = "debug"
class default_clean(Build.CleanContext):
'''
A class that provides the stored variant build clean.
'''
@property
def variant(self):
return get_variant()
class default_install(Build.InstallContext):
'''
A class that provides the stored variant build install.
'''
@property
def variant(self):
return get_variant()
class default_uninstall(Build.UninstallContext):
'''
A class that provides the stored variant build uninstall.
'''
@property
def variant(self):
return get_variant()
# Additional features
from waflib import Task, TaskGen
class strip(Task.Task):
run_str = "${STRIP} ${SRC}"
color = "BLUE"
@TaskGen.feature("strip")
@TaskGen.after("apply_link")
def add_strip_task(self):
try:
link_task = self.link_task
except:
return
tsk = self.create_task("strip", self.link_task.outputs[0])
You are just missing the use variable setup, but this has to be fixed in your child wscripts, i.e.
bld.program (...,
libpath = ['/usr/lib', 'subpath'], # this has to be relative to the wscript it appears in! (or the root wscript, I can't recall)
...,
use = ['iconv2', 'regex2'] )
See section 9.1.2 of the waf book
Alternatively: (and probably the cleaner version)
cfg.check_cc(lib='iconv2', uselib_store="LIBICONV2", mandatory=True)
and then use uselib with
bld.program (...,
libpath = ['/usr/lib', 'subpath'], # this has to be relative to the wscript it appears in! (or the root wscript, I can't recall)
...,
uselib = ['LIBICONV2', ...] )
After some consideration, I realised I required further information. The default error information provided by waf seems to be about waf itself, rather than the wscripts or the project.
To rectify this, loggers need to be added. I added loggers to the configure and build functions.
configure logger:
cfg.logger = Logs.make_logger("configure_%s.log" % datetime.date.today().strftime("%Y_%m_%d"), "configure")
build logger:
bld.logger = Logs.make_logger("build_%s.log" % datetime.date.today().strftime("%Y_%m_%d"), "build")
Doing this led me to the nature of the problems:
['C:\\MinGW64\\bin\\g++.exe', '-Wl,--enable-auto-import', '-Wl,--enable-auto-import', 'test.cpp.1.o', '-o', 'C:\\Users\\Administrator\\Downloads\\libpst-0.6.60\\clean\\build\\temp\\conf_check_5fe204eaa3b3bcb7a9f85e15cebb726e\\testbuild\\testprog.exe', '-Wl,-Bstatic', '-Wl,-Bdynamic', '-LC:\\Users\\Administrator\\Downloads\\libpst-0.6.60\\clean\\libs', '-llibiconv2']
err: c:/mingw64/bin/../lib/gcc/x86_64-w64-mingw32/4.7.2/../../../../x86_64-w64-mingw32/bin/ld.exe: skipping incompatible C:\Users\Administrator\Downloads\libpst-0.6.60\clean\libs/libiconv2.dll when searching for -llibiconv2
The library path has been passed correctly to gcc, but the DLL is 32-bit whereas the gcc installation is 64-bit, so it is incompatible.
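To confirm which architecture a DLL was built for, you can inspect it with binutils (assuming objdump from the MinGW installation is on the PATH):
objdump -f libs/libiconv2.dll
# "file format pei-i386" indicates a 32-bit DLL; "pei-x86-64" indicates 64-bit.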
top-level wscript:
#! /usr/bin/env python
VERSION = "0.1"
APPNAME = "readpst"
top = "." # The topmost directory of the waf project
out = "build/temp" # The build directory of the waf project
import os
import datetime
from waflib import Build
from waflib import ConfigSet
from waflib import Logs
# Variant memory variables
var_path = out + "/variant.txt" # The variant memory file path
default_variant = "debug" # The default if no variant is stored
stored_variant = ""
def options(opt):
'''
A script hook function that defines additional switch options for the build.
'''
opt.load("compiler_c compiler_cxx")
def configure(cfg):
'''
A script hook function that configures the build environment.
'''
cfg.logger = Logs.make_logger("configure_%s.log" % datetime.date.today().strftime("%Y_%m_%d"), "configure")
cfg.load("compiler_c compiler_cxx")
cfg.find_program("strip")
cfg.env.DEFINES = \
[
"WAF=1",
"HAVE_CONFIG_H=1"
]
cfg.env.FEATURES = [] # Additional features
cfg.env.append_value("LIBPATH", os.path.join(os.getcwd(), "libs"))
cfg.env.append_value("INCLUDES", os.path.join(os.getcwd(), "inc"))
cfg.env.append_value("INCLUDES", os.path.join(os.getcwd(), "inc", "glib-2.0"))
cfg.env.append_value("INCLUDES", os.path.join(os.getcwd(), "inc", "glib-2.0", "glib"))
cfg.env.append_value("INCLUDES", os.path.join(os.getcwd(), "libs", "regex", "2.7", "regex-2.7-src", "src"))
cfg.env.append_value("INCLUDES", os.path.join(os.getcwd(), "libs", "libiconv", "1.9.2", "libiconv-1.9.2-src", "include"))
cfg.define("VERSION", VERSION)
base_env = cfg.env
# Compiler checks
cfg.check_large_file(mandatory = False)
cfg.check_inline()
cfg.multicheck\
(
{"header_name" : "fcntl.h"},
{"header_name" : "iostream"},
{"header_name" : "list"},
{"header_name" : "set"},
{"header_name" : "string"},
{"header_name" : "vector"},
msg = "Checking for standard headers",
mandatory = True
)
cfg.check(header_name = "glib.h", mandatory = False)
cfg.multicheck\
(
{"header_name" : "gsf\\gsf-infile-stdio.h"},
{"header_name" : "gsf\\gsf-infile.h"},
{"header_name" : "gsf\\gsf-input-stdio.h"},
{"header_name" : "gsf\\gsf-outfile-msole.h"},
{"header_name" : "gsf\\gsf-outfile.h"},
{"header_name" : "gsf\\gsf-output-stdio.h"},
{"header_name" : "gsf\\gsf-utils.h"},
msg = "Checking for gsf headers",
mandatory = False
)
# Checking for headers expected in config.h
cfg.check(header_name = "ctype.h", define_name = "HAVE_CTYPE_H" , mandatory = False)
cfg.check(header_name = "dirent.h", define_name = "HAVE_DIRENT_H" , mandatory = False)
cfg.check(header_name = "errno.h", define_name = "HAVE_ERRNO_H" , mandatory = False)
cfg.check(header_name = "gd.h", define_name = "HAVE_GD_H" , mandatory = False)
cfg.check(header_name = "iconv.h", define_name = "HAVE_ICON" , mandatory = False)
cfg.check(header_name = "limits.h", define_name = "HAVE_LIMITS_H" , mandatory = False)
cfg.check(header_name = "regex.h", define_name = "HAVE_REGEX_H" , mandatory = False)
#cfg.check(header_name = "semaphore.h", define_name = "HAVE_SEMAPHORE_H", mandatory = False)
cfg.check(header_name = "signal.h", define_name = "HAVE_SIGNAL_H" , mandatory = False)
cfg.check(header_name = "string.h", define_name = "HAVE_STRING_H" , mandatory = False)
cfg.check(header_name = "sys/shm.h", define_name = "HAVE_SYS_SHM_H" , mandatory = False)
cfg.check(header_name = "sys/stat.h", define_name = "HAVE_SYS_STAT_H" , mandatory = False)
cfg.check(header_name = "sys/types.h", define_name = "HAVE_SYS_TYPES_H", mandatory = False)
cfg.check(header_name = "sys/wait.h", define_name = "HAVE_SYS_WAIT_H" , mandatory = False)
cfg.check(header_name = "wchar.h", define_name = "HAVE_WCHAR_H" , mandatory = False)
cfg.check(header_name = "define.h", mandatory = False)
cfg.check(header_name = "lzfu.h", mandatory = False)
cfg.check(header_name = "msg.h", mandatory = False)
# Check for the existence and function of specific headers
cfg.check_cxx(lib = "libiconv2", uselib_store = "LIBICONV2", mandatory = False)
# Define the debug build environment
cfg.setenv("debug", env = base_env.derive())
cfg.env.append_value("CFLAGS", "-g")
cfg.define("DEBUG", 1)
cfg.write_config_header("/debug/inc/config.h")
# Define the release build environment
cfg.setenv("release", env = base_env.derive())
cfg.env.append_value("CFLAGS", "-02")
cfg.env.FEATURES = ["strip"]
cfg.define("RELEASE", 1)
cfg.write_config_header("/release/inc/config.h")
def pre(ctx):
'''
A callback for before build task start.
'''
print "Starting %sbuild" % (("%s " % ctx.variant) if(ctx.variant) else "")
if ctx.cmd == "install":
print "Installing"
def post(ctx):
'''
A callback for after build task finish.
'''
global var_path
print "Finished %sbuild" % (("%s " % ctx.variant) if(ctx.variant) else "")
env = ConfigSet.ConfigSet()
env.stored_variant = ctx.variant
env.store(var_path)
def build(bld):
'''
A script hook function that specifies the build behaviour.
'''
if bld.cmd != "clean":
bld.logger = Logs.make_logger("build_%s.log" % datetime.date.today().strftime("%Y_%m_%d"), "build")
bld.add_pre_fun(pre)
bld.add_post_fun(post)
bld.recurse\
(
[
"src"
]
)
def dist(ctx):
'''
A script hook function that specifies the packaging behaviour.
'''
ctx.base_name = "_".join([APPNAME, VERSION])
ctx.algo = "zip"
file_ex_patterns = \
[
out + "/**",
"**/.waf-1*",
"**/*~",
"**/*.pyc",
"**/*.swp",
"**/.lock-w*"
]
file_in_patterns = \
[
"**/wscript*",
"**/*.h",
"**/*.c",
"**/*.cpp",
"**/*.txt",
]
ctx.files = ctx.path.ant_glob(incl = file_in_patterns, excl = file_ex_patterns)
def set_variant():
'''
A function that facilitates dynamic changing of the Context classes variant member.
It retrieves the stored variant, if one exists, otherwise the default.
'''
global default_variant
global stored_variant
global var_path
env = ConfigSet.ConfigSet()
try:
env.load(var_path)
except:
stored_variant = default_variant
else:
if(env.stored_variant):
stored_variant = env.stored_variant
print "Resuming %s variant" % stored_variant
else:
stored_variant = default_variant
def get_variant():
'''
A function that facilitates dynamic changing of the Context classes variant member.
It sets the variant, if undefined, and returns.
'''
global stored_variant
if(not stored_variant):
set_variant()
return stored_variant
class release(Build.BuildContext):
'''
A class that provides the release build.
'''
cmd = "release"
variant = "release"
class debug(Build.BuildContext):
'''
A class that provides the debug build.
'''
cmd = "debug"
variant = "debug"
class default_build(Build.BuildContext):
'''
A class that provides the default variant build.
This is set to debug.
'''
variant = "debug"
class default_clean(Build.CleanContext):
'''
A class that provides the stored variant build clean.
'''
@property
def variant(self):
return get_variant()
class default_install(Build.InstallContext):
'''
A class that provides the stored variant build install.
'''
@property
def variant(self):
return get_variant()
class default_uninstall(Build.UninstallContext):
'''
A class that provides the stored variant build uninstall.
'''
@property
def variant(self):
return get_variant()
# Additional features
from waflib import Task, TaskGen
class strip(Task.Task):
run_str = "${STRIP} ${SRC}"
color = "BLUE"
@TaskGen.feature("strip")
@TaskGen.after("apply_link")
def add_strip_task(self):
try:
link_task = self.link_task
except:
return
tsk = self.create_task("strip", self.link_task.outputs[0])
I have integrated collective.documentviewer on my Plone website. It is used for viewing PDF and other office files online.
One of the optional add-on products is plone.app.async, which in turn uses zc.async. Now, the installation went well without errors. But when I save a file, an error is generated that I can't figure out. Below is the error:
2012-08-29T12:52:03 ERROR collective.documentviewer Error using plone.app.async with collective.documentviewer. Converting pdf without plone.app.async...
Traceback (most recent call last):
File "/home/frank/apps/myplonesite/plone/eggs/collective.documentviewer-2.2a1-py2.7.egg/collective/documentviewer/async.py", line 143, in queueJob
runner = JobRunner(object)
File "/home/frank/apps/myplonesite/plone/eggs/collective.documentviewer-2.2a1-py2.7.egg/collective/documentviewer/async.py", line 50, in __init__
self.queue = self.async.getQueues()['']
File "/home/frank/apps/myplonesite/plone/eggs/plone.app.async-1.2-py2.7.egg/plone/app/async/service.py", line 100, in getQueues
return self._conn.root()[KEY]
File "/home/frank/apps/myplonesite/plone/../../python27/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'zc.async'
These are the versions that I am using:
plone.app.async = 1.2
zc.async = 1.5.4
How do I do away with the KeyError issue?
UPDATE: Below is my buildout
[buildout]
newest = false
allow-picked-versions = false
index = http://dist.candid.org/candid
extends =
versions.cfg
parts =
lxml
svneggs
svnproducts
zeo
instance
worker
paster
plonesite
versions = versions
find-links =
http://dist.candid.org/candid
develop =
../src/candid.main
../src/ploned.ui
../src/z3c.traverser
../src/repoze.whooze
../src/marginalia
../src/ore.alchemist
../src/alchemist.ui
../src/alchemist.catalyst
../src/alchemist.traversal
../src/alchemist.security
../src/portal.auth
eggs =
Plone
Products.PloneHelpCenter
Products.LinguaPlone
candid
alchemist.ui
alchemist.catalyst
alchemist.traversal
alchemist.security
ploned.ui
candidcms.plonepas
candidcms.policy
candidcms.theme
psycopg2
Products.Scrawl
collective.contacts
collective.tabr
candidcms.workspaces
lotr.repository
archetypes.multifile
Products.ATVocabularyManager
collective.dynatree
collective.portlet.explore
z3c.json
collective.js.jqueryui
python-cjson
collective.plonetruegallery
lotr.templates
portal.auth
Products.PloneFormGen
quintagroup.pfg.captcha
collective.documentviewer
five.intid
plone.app.async
zcml =
candidcms.plonepas
candidcms.policy
candidcms.theme
candid.portal
candidcms.workspaces
archetypes.multifile
lotr.templates
collective.contacts
collective.tabr
collective.portlet.explore
[instance]
recipe = plone.recipe.zope2instance
user = uadmin:uadmin
eggs =
${buildout:eggs}
Products.CMFPlone
Paste
PasteScript
PasteDeploy
repoze.tm2
repoze.retry
repoze.who
zcml =
${buildout:zcml}
zcml-additional =
<include package="plone.app.async" file="single_db_instance.zcml" />
environment-vars =
ZC_ASYNC_UUID ${buildout:directory}/var/instance-uuid.txt
products =
${svnproducts:location}
# !+XAPIAN PATH(mn, apr-2012) hardcoded path to candid xapian installation
# temporary fix because plone uses the 'candid.portal' package which is in the
# candid.main package. Once the candid.portal package is factored out this entry
# should be removed.
extra-paths =
../parts/xapian/lib/python
[lxml]
recipe = z3c.recipe.staticlxml
egg = lxml
force = false
build-libxslt = true
build-libxml2 = true
libxslt-url = http://candid-portal.googlecode.com/files/libxslt-1.1.24.tar.gz
libxml2-url = http://candid-portal.googlecode.com/files/libxml2-2.6.32.tar.gz
[svnproducts]
recipe = infrae.subversion
urls =
http://candid-portal.googlecode.com/svn/plone.products/CandidHelpCenter/branches/plone4 CandidHelpCenter
[svneggs]
recipe = infrae.subversion
as_eggs = true
urls =
http://candid-portal.googlecode.com/svn/plone.products/candidcms.plonepas/trunk/ candidcms.plonepas
http://candid-portal.googlecode.com/svn/plone.products/candidcms.policy/trunk/ candidcms.policy
http://candid-portal.googlecode.com/svn/plone.products/candidcms.theme/trunk/ candidcms.theme
http://candid-portal.googlecode.com/svn/plone.products/candidcms.workspaces/trunk/ candidcms.workspaces
http://lotr.googlecode.com/svn/lab/apps/lotr.repository/ lotr.repository
http://lotr.googlecode.com/svn/trunk/products/lotr.templates/ lotr.templates
[paster]
recipe = zc.recipe.egg
eggs = ${instance:eggs}
# !+XAPIAN PATH(mn, apr-2012) hardcoded path to candid xapian installation
extra-paths =
../parts/xapian/lib/python
scripts = paster
[zeo]
recipe = plone.recipe.zeoserver
file-storage = ${buildout:directory}/var/filestorage/Data.fs
blob-storage = ${buildout:directory}/var/blobstorage
eggs = ${instance:eggs}
[worker]
recipe = plone.recipe.zope2instance
user = ${instance:user}
eggs = ${instance:eggs}
zcml = ${instance:zcml}
zserver-threads = 2
debug-mode = on
verbose-security = on
zeo-client = true
blob-storage = ${zeo:blob-storage}
shared-blob = on
eggs = ${instance:eggs}
zcml-additional =
<include package="plone.app.async" file="single_db_worker.zcml" />
environment-vars =
ZC_ASYNC_UUID ${buildout:directory}/var/worker-uuid.txt
[plonesite]
recipe = collective.recipe.plonesite
site-id = plone
admin-user = uadmin
instance = instance
profiles-initial =
Products.CMFPlone:dependencies
Products.CMFPlone:plone-content
lotr.repository:default
candidcms.policy:default
candidcms.theme:default
collective.dynatree:default
candidcms.workspaces:default
lotr.templates:default
Products.FacultyStaffDirectory:default
Products.PlonePopoll:default
Products.PloneFormGen:default
quintagroup.pfg.captcha:default
collective.documentviewer:default
products-initial =
Products.CMFPlone
archetypes.multifile
candidHelpCenter
LinguaPlone
collective.plonetruegallery
collective.tabr
Products.PloneFormGen
quintagroup.pfg.captcha
This would happen if the queues for plone.app.async have not been set up. plone.app.async & zc.async are (over)complicated and actually do require you to read the README ;)
You should have a look at the instructions provided with plone.app.async at their pypi page, in particular the buildout configuration.
Unless you include the necessary ZCML (for your "normal" as well as your "worker" instances), your queues will not be set up.
This looks like an issue with collective.documentviewer. I am the author and actually think I fixed this at some point. What version are you running?