I have a project with some documentation that is generated in Visual Studio and parsed by the Python runtime, so that our users can do things like help(our.module) in the REPL and get meaningful output. We are switching to autotools for our build environment on Linux, and I need a way to ensure that the two necessary XML files are installed to $(libdir) by make.
My first attempt was to do something like this in Makefile.am
dist_lib_DATA = file1.xml file2.xml
However, this produced the following error:
[afalanga#afalanga4 BuildSystemWork]$ autoreconf
Makefile.am:5: error: 'libdir' is not a legitimate directory for 'DATA'
autoreconf: automake failed with exit status: 1
I was told there is a workaround mentioned somewhere in the autotools manuals, but I haven't found it yet. The only "workaround" I can think of at this point is to write a special rule in my Makefile.am that copies the files, something like this:
install_docs : file1.xml file2.xml
cp $^ $(libdir)
Does this look correct, or is there a better method of doing this? Alternatively, if Python has other places where it looks for documentation, could I install the files there instead? Is this possible? I'm not that familiar with the internals of Python's inline help.
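For what it's worth, the workaround I believe the automake manual has in mind is to define your own installation directory variable: automake only restricts the standard prefixes, so a user-defined ...dir variable pointing at $(libdir) is accepted by autoreconf. A minimal sketch (the name docxmldir is just a placeholder):
# Makefile.am -- a user-defined directory variable sidesteps the DATA/libdir check
docxmldir = $(libdir)
dist_docxml_DATA = file1.xml file2.xml
With this, make install and make uninstall handle the two XML files automatically, including DESTDIR staging, which a hand-written cp rule would not.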
I have been using the flake8 Python extension, which, when run, will tell me when a variable is not defined, when there is too much whitespace, etc. But flake8 will not produce an error if I call a nonexistent object from some package. For example, the following will not produce an error with flake8:
import numpy as np
x = np.aa_bb_cc()
np.aa_bb_cc() does not exist, so I would like a plugin that tells me so before I run my Python script. Is there a plugin that will produce an error for the above? This feature is built into Visual Studio Code, for example, but I would like to have the same feature in vim as well if possible.
Install pylint, then create a vim mapping to run it from within vim:
nnoremap <leader>l :!python3 -m pylint % <bar> grep no-member<cr>
Notes: I'm using <leader>l but it could be anything else. Also, <bar> grep no-member will only output the error you're looking for. Remove it to see other pylint warnings.
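If you'd rather not pipe through grep, pylint can do the filtering itself with its standard --disable/--enable options; a variant of the same mapping might look like this (same caveat: the <leader>l key is arbitrary):
nnoremap <leader>l :!python3 -m pylint --disable=all --enable=no-member %<cr>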
I am new to programming, so this may be harder for me to understand, but I have the following issue:
I have a script that imports "eurostag.dll", which, according to its manual, should work up to Python 3.6 (the manual is not up to date, so I assume it may also work with later versions). The issue is that I have Python 3.8, and when running the script I receive the following message:
"Failed to load EUROSTAG library (Could not find module 'D:\Eurostag\eustag_esg.dll' (or one of its dependencies). Try using the full path with constructor syntax.)"
I have tried moving the .dll to where the script is, but nothing changed. I also tried changing the directory with os.chdir, but the same message appears (with the same 'D:\Eurostag\eustag_esg.dll', so the directory was not changed).
Does anybody know if there is any workaround for this?
Thank you!
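For reference, the usual cause of this message on Python 3.8 is that Windows DLL dependencies are no longer resolved via PATH or the working directory (which is why os.chdir made no difference); os.add_dll_directory is the documented way to register a folder. A minimal sketch, assuming the EUROSTAG wrapper loads the DLL through ctypes (paths taken from the error message):
import os
import ctypes

# Python 3.8 on Windows no longer uses PATH or the working directory to
# resolve a DLL or its dependencies, so the folder must be registered explicitly.
os.add_dll_directory(r"D:\Eurostag")  # folder holding eustag_esg.dll and its dependencies

# Load with the full path, as the error message suggests:
eustag = ctypes.CDLL(r"D:\Eurostag\eustag_esg.dll")
If the EUROSTAG wrapper does the ctypes loading itself, the add_dll_directory call just needs to happen before the wrapper is imported.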
I am working on an application that involves route finding (a completely different subject), but for testing I need example mazes. A colleague suggested I use pydaedalus to generate large-scale mazes in the format I need. I am using the following command to try to install the module:
$ pip3.6 install pydaedalus
This returns the following error:
-Wno-error=format-security
In file included from daedalus/_maze.cpp:467:
In file included from daedalus/wrapper.h:8:
daedalus/src/util.h:31:10: fatal error: 'cstdint' file not found
#include <cstdint>
^
1 error generated.
error: command '/usr/bin/clang' failed with exit status 1
I have done some research and found nothing that addresses this. I have also done some (limited) C++ development using cstdint, and it has always worked.
I came across this question, but it appears to address a separate issue.
I am developing on OS X 10.10.5.
Any help that you can provide is much appreciated!
These compile errors come down to daedalus's requirement for the C++11 standard, which is sometimes a bit tricky to get working on Mac OS X. One idea is to check that your Xcode is completely up to date.
The page you linked also suggests trying to link against clang's standard library instead of the GCC standard library. I'm not sure whether this will work, or whether it will give you linking errors at build time or when you import daedalus into Python, but you could give it a shot anyway:
CFLAGS='-stdlib=libc++' pip3.6 install pydaedalus
Another idea would be to encourage pip to use the clang++ frontend, which your link also suggests might help. You should be able to set this with the environment variable CXX (or, just possibly, CC).
CXX=clang++ pip3.6 install pydaedalus
Try various combinations of those environment settings (e.g., CXX and CFLAGS), and hopefully something will work eventually.
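For example, one combination that simply folds both suggestions together (treat this as a sketch rather than a known-good recipe, since how pip's build backend picks up these variables can vary):
CC=clang CXX=clang++ CFLAGS='-stdlib=libc++' pip3.6 install pydaedalus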
I have tried editing my SConstruct to point at a different gcc compiler, but it always seems to use the ones at /usr/bin/gcc and /usr/bin/g++:
env = DefaultEnvironment()
env['CC'] = '/home/aaron/devel/bin/gcc'
env['CXX'] = '/home/aaron/devel/bin/g++'
What am I doing wrong? Also, is there a way to specify a different compiler on the command line using something like:
scons cxx=/home/aaron/devel/bin/g++
I've gone crazy trying to make this work. Thanks!
There is a suggestion in "Why doesn't SCons find my compiler/linker/etc.?" in the SCons wiki. For your case, that would be:
path = ['/path/to/other/compiler/bin', '/bin', '/usr/bin',]
env = Environment(ENV = {'PATH' : path})
i.e., make your own environment with exactly the content you want, such as the $PATH (other bits of useful advice about environments are close by in the same wiki page).
To add your own options to the scons command line, you should, per the docs, be able to use AddOption; see section 12.1.5 (but I have not tried this one myself).
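As a sketch combining both ideas, you could also use ARGUMENTS, SCons' built-in dictionary of key=value command-line arguments, which gives you exactly the scons cxx=... syntax from the question (the default paths below are just the ones you mentioned):
# SConstruct -- e.g. invoke as: scons cxx=/home/aaron/devel/bin/g++ cc=/home/aaron/devel/bin/gcc
import os

cxx = ARGUMENTS.get('cxx', '/home/aaron/devel/bin/g++')
cc = ARGUMENTS.get('cc', '/home/aaron/devel/bin/gcc')

# Build an environment whose PATH puts the chosen toolchain's directory first.
env = Environment(
    CC=cc,
    CXX=cxx,
    ENV={'PATH': os.pathsep.join([os.path.dirname(cxx), '/bin', '/usr/bin'])},
)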
A "dirty trick" is "just" to make a symbolic link to the new interpretor in the folder where you issue the scons command
I'm using mr.developer to track some packages on github. When I rerun my buildout, I get:
The package 'django-quoteme' is dirty.
Do you want to update it anyway? [yes/No/all] y
What is meant by "dirty" exactly?
From http://github.com/fschulze/mr.developer:
Dirty SVN

You get an error like::

ERROR: Can't switch package 'foo' from 'https://example.com/svn/foo/trunk/', because it's dirty.

If you have not modified the package files under src/foo, then you can check what's going on with status -v. One common cause is a *.egg-info folder, which gets generated every time you run buildout and shows up as an untracked item in svn status.

You should add .egg-info to your global Subversion ignores in ~/.subversion/config, like this::

global-ignores = *.o *.lo *.la *.al .libs *.so *.so.[0-9] *.a *.pyc *.pyo *.rej *~ #*# .#* .*.swp .DS_Store *.egg-info
So it looks like you should use status -v to see what they mean by "dirty" in your case.
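Since your packages come from github rather than Subversion, the equivalent check is git status in the checkout; the src/ path below is only the usual mr.developer default, so adjust it to your buildout's sources directory:
cd src/django-quoteme
git status   # "dirty" here just means uncommitted or untracked changes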
I don't know what it means specifically in this context, but in the computer science world, "dirty" usually means it has been modified. Maybe one of the files in the package has been edited, and by updating it you'll lose those changes, hence the warning.
http://en.wikipedia.org/wiki/Dirty_%28computer_science%29