I'm having trouble importing numpy into my script because of some path-name issues.
I'm running my script with Python 2.7 rather than the default 2.6 on my server because I need some of the updates in the collections module. I can't seem to import numpy like this:
from numpy.random import poisson
So I am trying to use the Python 2.7-specific links to numpy on my server, which are installed in:
/opt/lib/python2.7/numpy
But this period in the path is really making this difficult. I cannot change the path in any way.
I've found a similar problem here, but frankly the code just doesn't make enough sense to me for me to feel safe using it (plus several commenters seemed to suggest it was a bad idea). If someone has another suggestion, or can give me a good explanation of the code there, I would appreciate it.
Try setting PYTHONPATH to point to /opt/lib/python2.7, after which import numpy et cetera should pull the libraries from there:
$ PYTHONPATH=/opt/lib/python2.7 python27 my_script.py
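If you can't (or don't want to) set environment variables, the same effect is available from inside the script via sys.path. Here is a minimal sketch of the mechanism; it builds a throwaway package in a temp directory to stand in for /opt/lib/python2.7 containing numpy, since that path only exists on your server:

```python
import os
import sys
import tempfile

# Build a tiny stand-in package; in the question this role is played
# by /opt/lib/python2.7 containing the numpy directory.
libdir = tempfile.mkdtemp()
pkgdir = os.path.join(libdir, "mypkg")
os.makedirs(pkgdir)
with open(os.path.join(pkgdir, "__init__.py"), "w") as f:
    f.write("value = 42\n")

# Prepending the *parent* directory of the package has the same
# effect as running with PYTHONPATH=/opt/lib/python2.7.
sys.path.insert(0, libdir)

import mypkg
print(mypkg.value)  # -> 42
```

In your case the line would be sys.path.insert(0, "/opt/lib/python2.7") before the numpy import; the period in the directory name is irrelevant because it is never used as a module name.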
Hello all.
I recently started working with Jenkins, in an attempt to replace cron jobs with a Jenkins pipeline. I have only a little programming knowledge; what I know, I learned from questions on Stack Overflow. So if you need any more info, I would really appreciate it if you use plain English.
So, I installed the latest version of Jenkins and the suggested plugins, plus all the plugins I could find that looked useful for running Python.
Afterwards, I searched Stack Overflow and other websites to make this work, but all I could get running was
#!/usr/bin/env python
from __future__ import print_function
print('Hello World')
And it succeeded.
Currently, Jenkins is running on Ubuntu 16.04, and I am using anaconda3's python (~/anaconda3/bin/python).
When I tried to run slightly more complicated Python code (by that I mean import pandas), it gives me an import error.
What I have tried so far:
- Execute Python Script build step with import pandas: import error
- Execute Shell build step with import pandas added to the code that worked above: error
- Python Builder build step with import pandas: invalid interpreter error
- Pipeline job with sh "python /path_to_python_file/*.py": import error
All gave errors. Since 'hello world' works, I believe that using anaconda3's Python is not the issue. Also, it imported print_function just fine, so I want to know what I should do from here. Change a workspace setting? A working-directory setting? The code itself?
Thanks.
Since 'hello world' works, I believe that using anaconda3's python is not an issue.
Your assumption is wrong.
There are multiple ways of solving the issue, but they all come down to using the correct Python interpreter, one that has pandas installed. Usually on Ubuntu you'll have at least two interpreters: one for Python 2 and one for Python 3. You use them in the shell by calling either python path/to/myScript.py or python3 path/to/myScript.py. Here python and python3 are just labels that point to the actual executables, resolved through the environment variable PATH.
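You can see where those labels point on your own machine (the paths printed will differ per install):

```shell
# Which executable does the shell resolve the label "python3" to via PATH?
command -v python3

# Ask the interpreter itself where it lives
python3 -c 'import sys; print(sys.executable)'
```

If the second command prints a system path rather than something under anaconda3, that interpreter is not the one with pandas installed.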
By installing anaconda3 you added one more interpreter, with pandas and plenty of other packages preinstalled. If you want to use it, you need to tell your shell or Jenkins about it somehow. If import pandas gives you an error, then you're probably using a different interpreter or a different Python environment (but that is out of scope here).
Coming back to your script
Following this Stack Overflow answer, you'll see that all the line #!/usr/bin/env python does is make sure that you're using the first Python interpreter on your Ubuntu environment's path, which is almost certainly not the one you installed with anaconda3. Most likely it will be the default Python 2 distributed with Ubuntu. If you want to know exactly which interpreter is running your script, put this inside instead of 'Hello World':
#!/usr/bin/env python
import sys
print(sys.executable) # this line will give you the exact path to the interpreter
print(sys.version) # this one will give you the version
Ok, so what to do?
Well, run your script using the correct interpreter. Remove #!/usr/bin/env python from your file, and if you have a pipeline, add this to it:
sh "/home/yourname/anaconda3/bin/python /path_to_python_file/myFile.py"
It will most likely solve the issue. It also keeps the Python file itself portable: if you ever want to use it on a different machine, the interpreter path (with your username in it) lives in the pipeline, not hardcoded inside the script.
I am using f2py to wrap my PETSc-based fortran analysis code for use in OpenMDAO (as suggested in this post). Rather than use f2py directly, I'm instead using it to generate the relevant .c, .pyc, etc. files and then linking them myself using mpif90.
In a simple python environment, I can import my .so and run the code without any problems:
>>> import module_name
>>> module_name.execute()
expected code output...
However, when trying to do the same thing in an OpenMDAO component, I get the following error:
At line 72 of file driver.F90
Internal Error: list_formatted_write(): Bad type
This happens even when running in serial, and the error appears to occur at the first place in the fortran code where I use write(*,*). What could be different about running under OpenMDAO that might cause this issue? Might it have something to do with the need to pass a comm object, as mentioned in the answer to my original question? I am not doing that at the moment, as it was not clear to me from the relevant OpenMDAO example how that should be done in my case.
When I try to find specific information about the error I'm getting, search results almost always point to the mpif90 or gfortran libraries and possibly needing to recompile or update the libraries. However, that doesn't explain to me why my analysis would work perfectly well in a simple python code but not in OpenMDAO.
UPDATE: Per some others' suggestions, I've tried a few more things. Firstly, I get the error regardless of whether I'm running with mpiexec python <script> or merely python <script>. I do have the PETSc implementation set up, assuming that doesn't refer to anything beyond the if MPI block in this example.
In my standalone test, I am able to successfully import a handful of things, including
from mpi4py import MPI
from petsc4py import PETSc
from openmdao.core.system import System
from openmdao.core.component import Component
from openmdao.core.basic_impl import BasicImpl
from openmdao.core._checks import check_connections, _both_names
from openmdao.core.driver import Driver
from openmdao.core.mpi_wrap import MPI, under_mpirun, debug
from openmdao.components.indep_var_comp import IndepVarComp
from openmdao.solvers.ln_gauss_seidel import LinearGaussSeidel
from openmdao.units.units import get_conversion_tuple
from openmdao.util.string_util import get_common_ancestor, nearest_child, name_relative_to
from openmdao.util.options import OptionsDictionary
from openmdao.util.dict_util import _jac_to_flat_dict
There is not much rhyme or reason to what I tested; I just went down a few random rabbit holes (more direction would be fantastic). Here are some of the things that do result in an error if they are imported in the same script:
from openmdao.core.group import Group
from openmdao.core.parallel_group import ParallelGroup
from openmdao.core.parallel_fd_group import ParallelFDGroup
from openmdao.core.relevance import Relevance
from openmdao.solvers.scipy_gmres import ScipyGMRES
from openmdao.solvers.ln_direct import DirectSolver
So it doesn't seem that the MPI imports are a problem? However, not knowing the OpenMDAO code too well, I am having trouble seeing the common thread in the problematic imports.
UPDATE 2: I should add that I'm becoming particularly suspicious of the networkx package. If my script is simply
import networkx as nx
import module_name
module_name.execute()
then I get the error. If I import my module before networkx, however (i.e. swap the first two lines in the above block), I don't get the error. More strangely, if I also import PETSc:
from petsc4py import PETSc
import networkx as nx
import module_name
module_name.execute()
Then everything works...
UPDATE 3: I'm running OS X El Capitan 10.11.6. I genuinely don't remember how I installed the Python 2.7 I was using (I need it rather than 3.x at the moment); it was installed years ago and located in /usr/local/bin. However, I switched to an Anaconda installation, re-installed networkx, and still get the same error.
I've discovered that if I compile the f2py-wrapped code using gfortran (I assume this is what you guys do, yes?) rather than mpif90, I don't get the errors. Unfortunately, this causes the PETSc parts of my fortran code to yield some strange errors, probably because those .f90/.F90 files, according to the PETSc compilation rules, are compiled by mpif90 even if I force the final compile to use gfortran.
UPDATE 4: I was finally able to solve the Internal Error: list_formatted_write() issue. Using mpif90 --showme, I could see which flags mpif90 passes (since it's essentially just gfortran plus some flags). It turns out that omitting the flag -Wl,-flat_namespace got rid of those print-related errors.
Now I can import most things and run my code without a problem, with one important exception. If I have a petsc-based fortran module (pc_fort_mod), then also importing PETSc into the python environment, i.e.
from petsc4py import PETSc
import pc_fort_mod
pc_fort_mod.execute()
results in PETSc errors in the fortran analysis (invalid matrices, unsuccessful preallocation). This seems plausible to me, since both would appear to be attempting to use the same PETSc libraries. Any idea whether there is a way to do this so that the pc_fort_mod PETSc and the petsc4py PETSc don't clash? I guess a workaround may be to have two PETSc builds...
SOLVED: I'm told that the problem described in Update 4 ultimately should not be a problem--it should be possible to simultaneously use PETSc in python and fortran. I was ultimately able to resolve my error by using a self-compiled PETSc build rather than the Homebrew recipe.
I've never quite seen anything like this before, and we've used networkx with compiled fortran wrapped in f2py, running under MPI, many many times.
I suggest that you remove and re-install your networkx package.
Which Python are you using, and what OS are you running on? We've had very good luck with the Anaconda Python installation. You have to be a bit careful when installing PETSc, though; building from source and running the PETSc tests is the safest way.
I don't have a lot of programming experience, so please try to keep answers relatively noob-friendly! :) Basically, I have a Python...library, I guess? or module? that I need to run in Spyder through Anaconda. This seems to be a very obscure module called PySPM, used for analyzing scanning probe microscopy data, that I received from a colleague my lab collaborates with. When I try to run a piece of software which uses this module, it gives this error:
ImportError: No module named 'PySPM.io'
The code itself which triggers this reads as follows:
from os import path
from PySPM.io.Translators.Utils import uiGetFile
from PySPM.io.Translators.BEPSndfTranslator import BEPSndfTranslator
from PySPM.io.Translators.BEodfTranslator import BEodfTranslator
from PySPM.analysis.BESHOFitter import BESHOFitter
The first line that says from PySPM.io.Translators.Utils import uiGetFile is what's triggering the error. I'm really stuck scratching my head here. What's going on and how can I solve it?
Pycroscopy is the (thoroughly) reorganized version of PySPM.
@Martensite try:
$ pip install pycroscopy
I'm self-taught in the Python world, so some of the structural conventions are still a little hazy to me. However, I've been getting very close to what I want to accomplish, but just ran into a larger problem.
Basically, I have a directory structure like this, which will sit outside of the normal python installation (this is to be distributed to people who should not have to know what a python installation is, but will have the one that comes standard with ArcGIS):
top_directory/
ArcToolbox.tbx
scripts/
ArcGIStool.py (script for the tool in the .tbx)
pythonmod/
__init__.py
general.py
xlrd/ (copied from my own python installation)
xlwt/ (copied from my own python installation)
xlutils/ (copied from my own python installation)
So, I like this directory structure, because all of the ArcGIStool.py scripts call functions within the pythonmod package (like those within general.py), and all of the general.py functions can call xlrd and xlwt functions with simple "import xlrd" statements. This means that if the user desired, he/she could just move the pythonmod folder to the python site-packages folder, and everything would run fine, even if xlrd/xlwt/xlutils are already installed.
THE PROBLEM:
Everything is great, until I try to use xlutils in general.py. Specifically, I need to "from xlutils.copy import copy". However, this sets off a cascade of import errors. One is that xlutils/copy.py uses "from xlutils.filter import process,XLRDReader,XLWTWriter". I solved this by modifying xlutils/copy.py like this:
try:
    from xlutils.filter import process, XLRDReader, XLWTWriter
except ImportError:
    from filter import process, XLRDReader, XLWTWriter
I thought this would work fine for other situations, but there are modules in the xlutils package that need to import xlrd. I tried following this advice, but when I use
try:
    import xlrd
except ImportError:
    import os, sys, imp
    path = os.path.dirname(os.path.dirname(sys.argv[0]))
    xlrd = imp.load_source("pythonmod.xlrd", os.path.join(path, "xlrd", "__init__.py"))
I get a new import error: in xlrd/__init__.py, the info module is imported (from xlrd/info.py), BUT when I use the above code, I get an error saying that the name "info" is not defined.
This leads me to believe that I don't really know what is going on, because I thought that when the __init__.py file was imported it would run just like normal and look within its containing folder for info.py. This does not seem to be the case, unfortunately.
Thanks for your interest, and any help would be greatly appreciated.
p.s. I don't want to have to modify the path variables, as I have no idea who will be using this toolset, and permissions are likely to be an issue, etc.
I realized I was using imp.load_source incorrectly. The correct syntax for what I wanted to do should have been:
imp.load_source("xlrd",os.path.join(path,"xlrd","__init__.py"))
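As an aside, the imp module is deprecated on modern Python (it was the right tool for the Python 2 that ships with ArcGIS). If this code ever moves to Python 3, the importlib equivalent looks like the sketch below; it loads a throwaway module file created on the fly, standing in for xlrd/__init__.py:

```python
import importlib.util
import os
import tempfile

# Stand-in for the real xlrd/__init__.py; substitute the actual path in practice.
moddir = tempfile.mkdtemp()
modfile = os.path.join(moddir, "fake_xlrd.py")
with open(modfile, "w") as f:
    f.write("answer = 'loaded'\n")

# importlib equivalent of imp.load_source("xlrd", modfile)
spec = importlib.util.spec_from_file_location("xlrd", modfile)
xlrd = importlib.util.module_from_spec(spec)
spec.loader.exec_module(xlrd)
print(xlrd.answer)  # -> loaded
```

The first argument plays the same role as the name passed to imp.load_source: it becomes the module's __name__, independent of the file's location.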
In the end though, I ended up rewriting my code to not need xlutils at all, because I continued to have import errors that were causing many more problems than were worth dealing with.
I am trying to run a program whose specification says Python 2.6. I am running it with Python 2.6.6, so it should work, but I found that the import fails (see this question), for example with this line:
from rnaspace.dao.storage_configuration_reader import storage_configuration_reader
Is this due to a version change (which I doubt) or to something in the environment of the original server? A solution is given in the question cited, but is there another way to solve this problem that doesn't involve changing every file with this kind of import?
Your import statement assumes Python knows where the rnaspace package is. Maybe you need to add the path to the rnaspace package to your module search path:
import sys
pathToRnaspace = "/path/to/the/rnaspace/package"
sys.path.append(pathToRnaspace)
from rnaspace.core.putative_rna import putative_rna