I'm new to Bazel and trying to automate updates to the MapIt database and code.
I named the root folder of my Bazel project bazel_mapit; it contains the WORKSPACE file and the BUILD file for this external dependency, as follows.
WORKSPACE:
# In recent Bazel versions, new_git_repository must be loaded explicitly:
load("@bazel_tools//tools/build_defs/repo:git.bzl", "new_git_repository")

new_git_repository(
    name = "mapit_repo",
    remote = "https://github.com/mysociety/mapit",
    tag = "v2.0",
    build_file = "mapit_repo.BUILD",
    init_submodules = 1,
)
mapit_repo.BUILD:
package(default_visibility = ["//visibility:public"])

filegroup(
    name = "mapit_files",
    srcs = glob(
        [
            "**/*",
        ],
        exclude = [
            "**/LICENSE",
            "**/*.zip",
        ],
    ),
)
Now I want to generate the configuration file conf/general.yml inside the MapIt sources. The problem is that when I add the following code to the mapit_repo.BUILD file, I get errors saying Bazel can't find
'@bazel_mapit//:general_yml.bzl'
mapit_repo.BUILD (extension):
load("#bazel_mapit//:general_yml.bzl", "general_yml")
general_yml(
name = 'generate_general_yml',
bzl_mapit_db_user = 'foo',
)
filegroup (
name = 'mapit_general_yml',
srcs = ['conf/general.yml'],
data = ['conf/general.yml-example'],
)
How can I generate a config file in an external dependency?
Update:
This is the content of the working mapit_repo.BUILD file:
package(default_visibility = ["//visibility:public"])

filegroup(
    name = "mapit_files",
    srcs = glob(
        [
            "**/*",
        ],
        exclude = [
            "**/LICENSE",
            "**/*.zip",
        ],
    ),
)

load("@//conf:general_yml.bzl", "general_yml")

general_yml(
    name = 'generate_general_yml',
    bzl_mapit_db_user = 'foo',
)

filegroup(
    name = 'mapit_general_yml',
    srcs = ['conf/general.yml'],
    data = ['@//conf:general.yml.tmpl'],
)
I think all that needs to change is the load() in mapit_repo.BUILD. That build file is evaluated from within the external workspace, so to reference the .bzl file in the main workspace you can use @//:general_yml.bzl. (That is, the empty workspace name refers to the main workspace.)
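For completeness, here's a minimal sketch of what a general_yml.bzl in //conf could look like, assuming a simple sed substitution into the checked-in template (the placeholder name is made up):

# general_yml.bzl -- illustrative sketch, not the asker's actual macro
def general_yml(name, bzl_mapit_db_user):
    native.genrule(
        name = name,
        srcs = ["@//conf:general.yml.tmpl"],
        outs = ["conf/general.yml"],
        cmd = "sed 's/{{MAPIT_DB_USER}}/" + bzl_mapit_db_user + "/' $< > $@",
    )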
Related
I'm working on a bazel rule (using version 5.2.0) that uses SWIG (version 4.0.1) to make a python library from C++ code, adapted from a rule in the tensorflow library. The problem I've run into is that, depending on the contents of ctx.file.source.path, the swig invocation might produce a necessary .h file. If it does, the rule below works great. If it doesn't, I get:
ERROR: BUILD:31:11: output 'foo_swig_h.h' was not created
ERROR: BUILD:31:11: SWIGing foo.i. failed: not all outputs were created or valid
If the h_out stuff is removed from _py_swig_gen_impl, the rule below works great when swig doesn't produce the .h file. But, if swig does produce one, bazel seems to ignore it and it isn't available for native.cc_binary to compile, resulting in gcc failing with a 'no such file or directory' error on an #include <foo_swig_cc.h> line in foo_swig_cc.cc.
(The presence or absence of the .h file in the output is determined by whether the .i file at ctx.file.source.path uses SWIG's "directors" feature.)
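For illustration, directors are switched on in the .i file roughly like this (module and class names here are hypothetical):

// foo.i -- hypothetical interface file; the directors option is what makes
// SWIG emit the extra foo_swig_h.h header
%module(directors="1") foo
%feature("director") Callback;

%{
#include "callback.h"
%}
%include "callback.h"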
def _include_dirs(deps):
    return depset(transitive = [dep[CcInfo].compilation_context.includes for dep in deps]).to_list()

def _headers(deps):
    return depset(transitive = [dep[CcInfo].compilation_context.headers for dep in deps]).to_list()

# Bazel rules for building swig files.
def _py_swig_gen_impl(ctx):
    module_name = ctx.attr.module_name
    cc_out = ctx.actions.declare_file(module_name + "_swig_cc.cc")
    h_out = ctx.actions.declare_file(module_name + "_swig_h.h")
    py_out = ctx.actions.declare_file(module_name + ".py")

    args = ["-c++", "-python", "-py3"]
    args += ["-module", module_name]
    args += ["-I" + x for x in _include_dirs(ctx.attr.deps)]
    args += ["-I" + x.dirname for x in ctx.files.swig_includes]
    args += ["-o", cc_out.path]
    args += ["-outdir", py_out.dirname]
    args += ["-oh", h_out.path]
    args.append(ctx.file.source.path)

    outputs = [cc_out, h_out, py_out]
    ctx.actions.run(
        executable = "swig",
        arguments = args,
        mnemonic = "Swig",
        inputs = [ctx.file.source] + _headers(ctx.attr.deps) + ctx.files.swig_includes,
        outputs = outputs,
        progress_message = "SWIGing %{input}.",
    )
    return [DefaultInfo(files = depset(direct = [cc_out, py_out]))]

_py_swig_gen = rule(
    attrs = {
        "source": attr.label(
            mandatory = True,
            allow_single_file = True,
        ),
        "swig_includes": attr.label_list(
            allow_files = [".i"],
        ),
        "deps": attr.label_list(
            allow_files = True,
            providers = [CcInfo],
        ),
        "module_name": attr.string(mandatory = True),
    },
    implementation = _py_swig_gen_impl,
)

def py_wrap_cc(name, source, module_name = None, deps = [], copts = [], **kwargs):
    if module_name == None:
        module_name = name

    python_deps = [
        "@local_config_python//:python_headers",
        "@local_config_python//:python_lib",
    ]

    # First, invoke the _py_swig_gen rule, which runs swig. This outputs:
    # `module_name.cc`, `module_name.py`, and, sometimes, `module_name.h` files.
    swig_rule_name = "swig_gen_" + name
    _py_swig_gen(
        name = swig_rule_name,
        source = source,
        swig_includes = ["//third_party/swig_rules:swig_includes"],
        deps = deps + python_deps,
        module_name = module_name,
    )

    # Next, we need to compile the `module_name.cc` and `module_name.h` files
    # from the previous rule. The `module_name.py` file already generated
    # expects there to be a `_module_name.so` file, so we name the cc_binary
    # rule this way to make sure that's the resulting file name.
    cc_lib_name = "_" + module_name + ".so"
    native.cc_binary(
        name = cc_lib_name,
        srcs = [":" + swig_rule_name],
        linkopts = ["-dynamic", "-L/usr/local/lib/"],
        linkshared = True,
        deps = deps + python_deps,
    )

    # Finally, package everything up as a python library that can be depended
    # on. Note that this rule uses the user-given `name`.
    native.py_library(
        name = name,
        srcs = [":" + swig_rule_name],
        srcs_version = "PY3",
        data = [":" + cc_lib_name],
        imports = ["./"],
    )
My question, broadly, is how I might best handle this with a single rule. I've tried adding a ctx.actions.write before the ctx.actions.run, thinking that I could generate a dummy '.h' file that would be overwritten if needed. That gives me:
ERROR: BUILD:41:11: for foo_swig_h.h, previous action: action 'Writing file foo_swig_h.h', attempted action: action 'SWIGing foo.i.'
My next idea is to remove the h_out stuff and then try to capture the h file for the cc_binary rule with some kind of glob invocation.
I've seen two approaches: add an attribute to the rule indicating whether the .h output is expected, or write a wrapper script that creates it unconditionally.
Adding an attribute means something like "has_h": attr.bool(), and then using that in _py_swig_gen_impl to make the ctx.actions.declare_file(module_name + "_swig_h.h") call conditional, as sketched below.
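A rough sketch of that conditional, assuming a boolean has_h attribute (names are illustrative):

h_out = None
if ctx.attr.has_h:
    h_out = ctx.actions.declare_file(module_name + "_swig_h.h")
    args += ["-oh", h_out.path]

outputs = [cc_out, py_out] + ([h_out] if h_out != None else [])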
The wrapper script option means using something like this for the executable:
#!/bin/bash
set -e
touch the_path_of_the_header
exec swig "$@"
That will unconditionally create the output, and then swig will overwrite it if applicable. If it's not applicable, then passing around an empty header file in the Bazel rules should be harmless.
For posterity, this is what my _py_swig_gen_impl looks like after implementing @Brian's suggestion above:
def _py_swig_gen_impl(ctx):
    module_name = ctx.attr.module_name
    cc_out = ctx.actions.declare_file(module_name + "_swig_cc.cc")
    h_out = ctx.actions.declare_file(module_name + "_swig_h.h")
    py_out = ctx.actions.declare_file(module_name + ".py")

    include_dirs = _include_dirs(ctx.attr.deps)
    headers = _headers(ctx.attr.deps)

    args = ["-c++", "-python", "-py3"]
    args += ["-module", module_name]
    args += ["-I" + x for x in include_dirs]
    args += ["-I" + x.dirname for x in ctx.files.swig_includes]
    args += ["-o", cc_out.path]
    args += ["-outdir", py_out.dirname]
    args += ["-oh", h_out.path]
    args.append(ctx.file.source.path)

    outputs = [cc_out, h_out, py_out]

    # Depending on the contents of `ctx.file.source`, swig may or may not
    # output a .h file needed by subsequent rules. Bazel doesn't like optional
    # outputs, so instead of invoking swig directly we're going to make a
    # lightweight executable script that first `touch`es the .h file that may
    # get generated, and then execute that. This means we may be propagating
    # an empty .h file around as a "dependency" sometimes, but that's okay.
    swig_script_file = ctx.actions.declare_file("swig_exec.sh")
    ctx.actions.write(
        output = swig_script_file,
        is_executable = True,
        content = "#!/bin/bash\n\nset -e\ntouch " + h_out.path + "\nexec swig \"$@\"",
    )
    ctx.actions.run(
        executable = swig_script_file,
        arguments = args,
        mnemonic = "Swig",
        inputs = [ctx.file.source] + headers + ctx.files.swig_includes,
        outputs = outputs,
        progress_message = "SWIGing %{input}.",
    )
    return [
        DefaultInfo(files = depset(direct = outputs)),
    ]
The ctx.actions.write generates the suggested bash script:
#!/bin/bash

set -e
touch %{h_out.path}
exec swig "$@"
This guarantees that the expected h_out is always produced by the ctx.actions.run action, whether or not swig actually generates it.
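For reference, usage of the macro then stays the same whether or not the interface uses directors; a hypothetical BUILD snippet (target names are made up):

py_wrap_cc(
    name = "foo",
    source = "foo.i",
    deps = [":foo_cc_lib"],
)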
I have a monorepo that contains a set of Python AWS Lambdas, and I'm using Bazel to build and package them. I'm now trying to use Bazel to create a zip file that follows the expected AWS Lambda packaging conventions and that I can upload to Lambda. What's the best way to do this with Bazel?
Below are a few different things I've tried thus far:
Attempt 1: py_binary
BUILD.bazel
py_binary(
    name = "main_binary",
    srcs = glob(["*.py"]),
    main = "main.py",
    visibility = ["//appcode/api/transaction_details:__subpackages__"],
    deps = [
        requirement("Faker"),
    ],
)
Problem:
This generates the following:
main_binary (python executable)
main_binary.runfiles
main_binary.runfiles_manifest
Lambda expects the handler to be in the format lambda_function.lambda_handler. Since main_binary is an executable rather than a Python file, it doesn't expose the actual handler method, and the Lambda blows up because it can't find it. I tried updating the handler configuration to point at main_binary itself, but that also fails because Lambda expects the two-part module.function format (i.e. lambda_function.lambda_handler).
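For context, Lambda imports the module and calls the named function, so the handler has to be a plain module-level function, roughly:

# lambda_function.py -- minimal sketch of what Lambda's
# lambda_function.lambda_handler handler string resolves to
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "ok"}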
Attempt 2: py_library + pkg_zip
BUILD.bazel
py_library(
    name = "main",
    srcs = glob(["*.py"]),
    visibility = ["//appcode/api/transaction_details:__subpackages__"],
    deps = [
        requirement("Faker"),
    ],
)

pkg_zip(
    name = "main_zip",
    srcs = ["//appcode/api/transaction_details/src:main"],
)
Problem:
This generates a zip file with:
main.py
__init__.py
The zip file now includes the main.py but none of its runtime dependencies. Thus the lambda blows up because it can't find Faker.
Other Attempts:
I've also tried using the --build_python_zip flag as well as the @bazel_tools//tools/zip:zipper with a generic rule, but they both lead to similar outcomes as the two previous attempts.
We use @bazel_tools//tools/zip:zipper with a custom rule. We also pull serverless in using rules_nodejs and run it through bazel, which causes the package building to happen prior to running sls deploy.
We use pip_parse from rules_python. I'm not sure whether the _short_path function below will work with pip_install or other mechanisms.
File filtering is supported, although it's awkward. Ideally the zip generation would be handled by a separate binary (i.e., a Python script) which would allow filtering using regular expressions/globs/etc. Bazel doesn't support regular expressions in Starlark, so we use our own thing.
I've included an excerpt:
lambda.bzl
"""
Support for serverless deployments.
"""
def contains(pattern):
return "contains:" + pattern
def startswith(pattern):
return "startswith:" + pattern
def endswith(pattern):
return "endswith:" + pattern
def _is_ignored(path, patterns):
for p in patterns:
if p.startswith("contains:"):
if p[len("contains:"):] in path:
return True
elif p.startswith("startswith:"):
if path.startswith(p[len("startswith:"):]):
return True
elif p.startswith("endswith:"):
if path.endswith(p[len("endswith:"):]):
return True
else:
fail("Invalid pattern: " + p)
return False
def _short_path(file_):
# Remove prefixes for external and generated files.
# E.g.,
# ../py_deps_pypi__pydantic/pydantic/__init__.py -> pydantic/__init__.py
short_path = file_.short_path
if short_path.startswith("../"):
second_slash = short_path.index("/", 3)
short_path = short_path[second_slash + 1:]
return short_path
def _py_lambda_zip_impl(ctx):
deps = ctx.attr.target[DefaultInfo].default_runfiles.files
f = ctx.outputs.output
args = []
for dep in deps.to_list():
short_path = _short_path(dep)
# Skip ignored patterns
if _is_ignored(short_path, ctx.attr.ignore):
continue
args.append(short_path + "=" + dep.path)
ctx.actions.run(
outputs = [f],
inputs = deps,
executable = ctx.executable._zipper,
arguments = ["cC", f.path] + args,
progress_message = "Creating archive...",
mnemonic = "archiver",
)
out = depset(direct = [f])
return [
DefaultInfo(
files = out,
),
OutputGroupInfo(
all_files = out,
),
]
_py_lambda_zip = rule(
implementation = _py_lambda_zip_impl,
attrs = {
"target": attr.label(),
"ignore": attr.string_list(),
"_zipper": attr.label(
default = Label("#bazel_tools//tools/zip:zipper"),
cfg = "host",
executable = True,
),
"output": attr.output(),
},
executable = False,
test = False,
)
def py_lambda_zip(name, target, ignore, **kwargs):
_py_lambda_zip(
name = name,
target = target,
ignore = ignore,
output = name + ".zip",
**kwargs
)
BUILD.bazel
load("#npm_serverless//serverless:index.bzl", "serverless")
load(":lambda.bzl", "contains", "endswith", "py_lambda_zip", "startswith")
py_binary(
name = "my_lambda_app",
...
)
py_lambda_zip(
name = "lambda_archive",
ignore = [
contains("/__pycache__/"),
endswith(".pyc"),
endswith(".pyo"),
# Ignore boto since it's provided by Lambda.
startswith("boto3/"),
startswith("botocore/"),
# With the move to hermetic toolchains, the zip gets a lib/ directory containing the
# python runtime. We don't need that.
startswith("lib/"),
],
target = ":my_lambda_app",
# Only allow building on linux, since we don't want to upload a lambda zip file
# with e.g. macos compiled binaries.
target_compatible_with = [
"#platforms//os:linux",
],
)
# The sls command requires that serverless.yml be in its working directory, and that the yaml file
# NOT be a symlink. So this target builds a directory containing a copy of serverless.yml, and also
# symlinks the generated lambda_archive.zip in the same directory.
#
# It also generates a chdir.js script that we instruct node to execute to change to the proper working directory.
genrule(
name = "sls_files",
srcs = [
"lambda_archive.zip",
"serverless.yml",
],
outs = [
"sls_files/lambda_archive.zip",
"sls_files/serverless.yml",
"sls_files/chdir.js",
],
cmd = """
mkdir -p $(#D)/sls_files
cp $(location serverless.yml) $(#D)/sls_files/serverless.yml
cp -P $(location lambda_archive.zip) $(#D)/sls_files/lambda_archive.zip
echo "const fs = require('fs');" \
"const path = require('path');" \
"process.chdir(path.dirname(fs.realpathSync(__filename)));" > $(#D)/sls_files/chdir.js
""",
)
# Usage:
# bazel run //:sls -- deploy <more args>
serverless(
name = "sls",
args = ["""--node_options=--require=./$(location sls_files/chdir.js)"""],
data = [
"sls_files/chdir.js",
"sls_files/serverless.yml",
"sls_files/lambda_archive.zip",
],
)
serverless.yml
service: my-app

package:
  artifact: lambda_archive.zip

# ... other config ...
Below are the changes I made to the previous answer to generate the lambda zip. Thanks @jvolkman for the original suggestion.
project/BUILD.bazel: Added rule to generate requirements_lock.txt from project/requirements.txt
load("#rules_python//python:pip.bzl", "compile_pip_requirements")
compile_pip_requirements(
name = "requirements",
extra_args = ["--allow-unsafe"],
requirements_in = "requirements.txt",
requirements_txt = "requirements_lock.txt",
)
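compile_pip_requirements also defines an update target, so the lock file can be regenerated with something like (target name derived from the name attribute above):

bazel run //:requirements.update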
project/WORKSPACE.bazel: swap pip_install with pip_parse
workspace(name = "mdc-eligibility")
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_python",
sha256 = "9fcf91dbcc31fde6d1edb15f117246d912c33c36f44cf681976bd886538deba6",
strip_prefix = "rules_python-0.8.0",
url = "https://github.com/bazelbuild/rules_python/archive/refs/tags/0.8.0.tar.gz",
)
load("#rules_python//python:repositories.bzl", "python_register_toolchains")
python_register_toolchains(
name = "python3_9",
python_version = "3.9",
)
load("#rules_python//python:pip.bzl", "pip_parse")
load("#python3_9//:defs.bzl", "interpreter")
pip_parse(
name = "mndc-eligibility-deps",
requirements_lock = "//:requirements_lock.txt",
python_interpreter_target = interpreter,
quiet = False
)
load("#mndc-eligibility-deps//:requirements.bzl", "install_deps")
install_deps()
project/build_rules/lambda_packaging/lambda.bzl: Modified custom rule provided by @jvolkman to include source code in the resulting zip.
def contains(pattern):
    return "contains:" + pattern

def startswith(pattern):
    return "startswith:" + pattern

def endswith(pattern):
    return "endswith:" + pattern

def _is_ignored(path, patterns):
    for p in patterns:
        if p.startswith("contains:"):
            if p[len("contains:"):] in path:
                return True
        elif p.startswith("startswith:"):
            if path.startswith(p[len("startswith:"):]):
                return True
        elif p.startswith("endswith:"):
            if path.endswith(p[len("endswith:"):]):
                return True
        else:
            fail("Invalid pattern: " + p)
    return False

def _short_path(file_):
    # Remove prefixes for external and generated files.
    # E.g.,
    #   ../py_deps_pypi__pydantic/pydantic/__init__.py -> pydantic/__init__.py
    short_path = file_.short_path
    if short_path.startswith("../"):
        second_slash = short_path.index("/", 3)
        short_path = short_path[second_slash + 1:]
    return short_path

# steven chambers
def _py_lambda_zip_impl(ctx):
    deps = ctx.attr.target[DefaultInfo].default_runfiles.files
    f = ctx.outputs.output

    args = []
    for dep in deps.to_list():
        short_path = _short_path(dep)

        # Skip ignored patterns
        if _is_ignored(short_path, ctx.attr.ignore):
            continue

        args.append(short_path + "=" + dep.path)

    # MODIFICATION: Added source files to the map of files to zip
    source_files = ctx.attr.target[DefaultInfo].files
    for source_file in source_files.to_list():
        args.append(source_file.basename + "=" + source_file.path)

    ctx.actions.run(
        outputs = [f],
        inputs = deps,
        executable = ctx.executable._zipper,
        arguments = ["cC", f.path] + args,
        progress_message = "Creating archive...",
        mnemonic = "archiver",
    )

    out = depset(direct = [f])
    return [
        DefaultInfo(
            files = out,
        ),
        OutputGroupInfo(
            all_files = out,
        ),
    ]

_py_lambda_zip = rule(
    implementation = _py_lambda_zip_impl,
    attrs = {
        "target": attr.label(),
        "ignore": attr.string_list(),
        "_zipper": attr.label(
            default = Label("@bazel_tools//tools/zip:zipper"),
            cfg = "host",
            executable = True,
        ),
        "output": attr.output(),
    },
    executable = False,
    test = False,
)

def py_lambda_zip(name, target, ignore, **kwargs):
    _py_lambda_zip(
        name = name,
        target = target,
        ignore = ignore,
        output = name + ".zip",
        **kwargs
    )
project/appcode/api/transaction_details/src/BUILD.bazel: Used custom py_lambda_zip rule to zip up py_library
load("#mndc-eligibility-deps//:requirements.bzl", "requirement")
load("#python3_9//:defs.bzl", "interpreter")
load("//build_rules/lambda_packaging:lambda.bzl", "contains", "endswith", "py_lambda_zip", "startswith")
py_library(
name = "main",
srcs = glob(["*.py"]),
visibility = ["//appcode/api/transaction_details:__subpackages__"],
deps = [
requirement("Faker"),
],
)
py_lambda_zip(
name = "lambda_archive",
ignore = [
contains("/__pycache__/"),
endswith(".pyc"),
endswith(".pyo"),
# Ignore boto since it's provided by Lambda.
startswith("boto3/"),
startswith("botocore/"),
# With the move to hermetic toolchains, the zip gets a lib/ directory containing the
# python runtime. We don't need that.
startswith("lib/"),
],
target = ":main",
)
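The resulting archive lands under bazel-bin and can be uploaded like any other Lambda zip, e.g. with the AWS CLI (function name and exact output path here are illustrative):

aws lambda update-function-code \
    --function-name my-function \
    --zip-file fileb://bazel-bin/appcode/api/transaction_details/src/lambda_archive.zip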
I've already trained a UBM model and now I'm trying to implement speaker adaptation, but I get the following error:
Exception: show enroll/something.wav is not in the HDF5 file
I have two directories, "enroll" and "test", under the "feat" directory, containing the features (.h5) for training and test respectively, and my enroll_idmap is generated from the audio files (.wav) used only for training. My wav files and feat files are kept separately. I think I have a problem with the idmap: "enroll/something.wav" is the rightid of my enroll_idmap, but what does that "HDF5 file" refer to?
Could anyone tell me what this error means and how to fix it?
Here's the code of my enroll_idmap
def __init__(self):
    BASE_DIR = "./Database/sidekit_data"
    self.AUDIO_DIR = os.path.join(BASE_DIR, "audio")
    self.FEATURE_DIR = os.path.join(BASE_DIR, "feat")
    self.TASK_DIR = os.path.join(BASE_DIR, "task")

def create_idMap(self, group):
    # Make enrollment (IdMap) file list
    group_dir = os.path.join(self.AUDIO_DIR, group)  # enrollment data directory
    group_files = os.listdir(group_dir)
    group_models = [files.split('_')[0] for files in group_files]  # list of model IDs
    group_segments = [group + "/" + f for f in group_files]

    # Generate IdMap
    group_idmap = sidekit.IdMap()
    group_idmap.leftids = np.asarray(group_models)
    group_idmap.rightids = np.asarray(group_segments)
    group_idmap.start = np.empty(group_idmap.rightids.shape, '|O')
    group_idmap.stop = np.empty(group_idmap.rightids.shape, '|O')
    if group_idmap.validate():
        group_idmap.write(os.path.join(self.TASK_DIR, group + '_idmap.h5'))
    else:
        raise RuntimeError('Problems with creating idMap file')
And after that I got enroll_idmap and test_idmap with:
create_idMap("enroll")
create_idMap("test")
And here's the code of the speaker adaptation; the error above comes out during the execution of enroll_stat.accumulate_stat(…):
BASE_DIR = "./Database/sidekit_data"
enroll_idmap = sidekit.IdMap.read(os.path.join(BASE_DIR, "task", "enroll_idmap.h5"))
ubm = sidekit.Mixture()
model_name = "ubm_{}.h5".format(NUM_GUASSIANS)
ubm.read(os.path.join(BASE_DIR, "ubm", model_name))
server_eval = sidekit.FeaturesServer(feature_filename_structure="./Database/sidekit_data/feat/{}.h5",
...
...)
print("Compute the sufficient statistics")
enroll_stat.accumulate_stat(ubm=ubm,
feature_server=server_eval,
seg_indices=range(enroll_stat.segset.shape[0]),
num_thread=nbThread
)
This doesn't seem like it should be a big problem, but it has stopped me for a few days. Help please.
I finally got this problem fixed by changing the path of the training and test features, moving it outside the BASE_DIR:
server_eval = sidekit.FeaturesServer(feature_filename_structure="./enroll/{}.h5",
                                     ...)
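If it helps to see why the path matters: as far as I can tell, sidekit fills the {} in feature_filename_structure with each rightid ("show") from the idmap, roughly like this (an illustration, not sidekit's actual code):

# rough illustration of how the feature path is derived
feature_filename_structure = "./Database/sidekit_data/feat/{}.h5"
show = "enroll/something.wav"  # a rightid from the idmap
print(feature_filename_structure.format(show))
# -> ./Database/sidekit_data/feat/enroll/something.wav.h5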
I am using Windows 10 and running the code in Jupyter Notebook (in Chrome).
This is my code:
# imports this excerpt relies on (not shown in the original snippet)
import os
import fnmatch
import datetime
import numpy as np
from jdcal import gcal2jd
from pyspark.sql import SparkSession

if __name__ == '__main__':
    import itertools

    MOD03_path = r"C:\Users\saviosebastian\MYD03.A2008001.0000.006.2012066122450.hdf"
    MOD06_path = r"C:\Users\saviosebastian\MYD06_L2.A2008001.0000.006.2013341193524.hdf"

    satellite = 'Aqua'

    yr = [2008]
    mn = [1]  # np.arange(1,13)
    dy = [1]

    # latitude and longitude boundaries of level-3 grid
    lat_bnd = np.arange(-90, 91, 1)
    lon_bnd = np.arange(-180, 180, 1)

    nlat = 180
    nlon = 360

    TOT_pix = np.zeros(nlat * nlon)
    CLD_pix = np.zeros(nlat * nlon)

    ### To use Spark in Python
    spark = SparkSession\
        .builder\
        .appName("Aggregation")\
        .getOrCreate()

    filenames0 = [''] * 500
    i = 0
    for y, m, d in itertools.product(yr, mn, dy):
        #-------------find the MODIS products--------------#
        date = datetime.datetime(y, m, d)
        JD01, JD02 = gcal2jd(y, 1, 1)
        JD1, JD2 = gcal2jd(y, m, d)
        JD = np.int((JD2 + JD1) - (JD01 + JD02) + 1)
        granule_time = datetime.datetime(y, m, d, 0, 0)
        while granule_time <= datetime.datetime(y, m, d, 23, 55):  # 23,55
            print('granule time:', granule_time)
            MOD03_fp = 'MYD03.A{:04d}{:03d}.{:02d}{:02d}.006.?????????????.hdf'.format(y, JD, granule_time.hour, granule_time.minute)
            MOD06_fp = 'MYD06_L2.A{:04d}{:03d}.{:02d}{:02d}.006.?????????????.hdf'.format(y, JD, granule_time.hour, granule_time.minute)
            MOD03_fn, MOD06_fn = [], []
            for MOD06_flist in os.listdir(MOD06_path):
                if fnmatch.fnmatch(MOD06_flist, MOD06_fp):
                    MOD06_fn = MOD06_flist
            for MOD03_flist in os.listdir(MOD03_path):
                if fnmatch.fnmatch(MOD03_flist, MOD03_fp):
                    MOD03_fn = MOD03_flist
            if MOD03_fn and MOD06_fn:  # if both MOD06 and MOD03 products are in the directory
I am getting the following error:
Do you know any solution to this problem?
I can't give you a specific answer without knowledge of the directory system on your computer, but for now it's obvious that there is something wrong with the name of the directory that you are referencing. Use File Explorer to make sure that the directory actually exists, and also make sure that you haven't misspelled the name of the file, which could easily happen given the filename.
You are giving the full path along with the file name. The os.listdir(path) method in Python is used to get the list of all files and directories in the specified directory. If we don't specify any directory, then the list of files and directories in the current working directory is returned.
You can just write "C:/Users/saviosebastian" in path.
Same goes for os.chdir("C:/Users/saviosebastian").
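In other words, keep the directory and the file name separate; a small sketch (paths illustrative):

import os
import fnmatch

MOD03_dir = r"C:\Users\saviosebastian"      # directory only
MOD03_fp = "MYD03.A*.hdf"                   # filename pattern

for name in os.listdir(MOD03_dir):          # os.listdir() wants a directory
    if fnmatch.fnmatch(name, MOD03_fp):
        full_path = os.path.join(MOD03_dir, name)  # re-attach the directory
        print(full_path)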
I have made a python file and I want to convert it to an MSI file. Here is my code:
import sys  # needed for sys.platform below
from cx_Freeze import *

includefiles = []
excludes = []
packages = ["tkinter", "openpyxl"]

base = None
if sys.platform == "win32":
    base = "Win32GUI"

shortcut_table = []

company_name = 'Bills Programs'
product_name = 'BananaCell'

msi_data = {"Shortcut": shortcut_table}

bdist_msi_options = {
    'upgrade_code': '{Banana-rama-30403344939493}',
    'add_to_path': False,
    'initial_target_dir': r'[ProgramFilesFolder]\%s\%s' % (company_name, product_name),
}

setup(
    version = "0.1",
    description = "Lightweight excel comparing program",
    author = "Bill",
    name = "BananaCell",
    options = {'build_exe': {'include_files': includefiles}, "bdist_msi": bdist_msi_options},
    executables = [
        Executable(
            script = "comparing.py",
            base = base,
            shortcutName = "BananaCell",
            shortcutDir = "DesktopFolder",
            icon = 'icon.ico',
        )
    ],
)
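For reference, the MSI itself is produced by running the script with cx_Freeze's bdist_msi command:

python setup.py bdist_msi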
But when I run the resulting installer, the installation completes, yet I get the following error message
What am I doing wrong? I have full access to the directory. Any help would be greatly appreciated.