I'm trying to build a Python script that is supposed to feed into another MATLAB program. The script uses (among other things) numpy and pandas.
Here's the MATLAB code I use when I try to load the script:
path='C:\XXXXXX\Local\Continuum\anaconda3\python.exe';
pyversion(path)
algo=py.importlib.import_module('Algo_Pres');
When I try to load the script into MATLAB, I get an import error that seems to originate from Python.
I understand the error as: pandas is missing a numpy dependency.
And yet when I go back to Python and run the script directly, it works smoothly...
Where do you think the problem comes from?
PS: I checked my library using conda list in the Anaconda Prompt.
For some reason numpy is listed under the anaconda channel, whereas everything else is listed without any channel. Do you think that could be related?
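In case it helps diagnose this: here is the small check script I could run with the same python.exe that I pass to pyversion, to see which interpreter actually runs and where numpy and pandas are loaded from (check_env.py is just a name I made up):
# check_env.py -- run with the same python.exe passed to pyversion
import sys
print(sys.executable)  # interpreter actually running the script
print(sys.version)     # its version
import numpy
import pandas
print(numpy.__version__, numpy.__file__)    # where numpy is loaded from
print(pandas.__version__, pandas.__file__)  # where pandas is loaded from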
I have Python code that I want to run in MATLAB. It has an `import numpy` statement in it. The code runs without a problem in the terminal, but when I use MATLAB's system function it gives me the import error below.
Import error: No module named numpy
I'm using Python 3.9 with MATLAB R2020b.
My question is: why does the system function give a different result than the terminal itself?
I've tried adding the Python folder to MATLABPATH, but nothing changed.
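For reference, a tiny throwaway script like the one below (which_python.py is just a placeholder name), run once from the terminal and once via MATLAB's system function, should show whether the two pick up different interpreters or a different PATH:
# which_python.py -- compare the output from the terminal and from MATLAB's system()
import os
import sys
print(sys.executable)              # interpreter actually used
print(os.environ.get('PATH', ''))  # PATH seen by that process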
Any suggestions?
Thanks in advance.
I am trying to run a script in Python 3.7 that makes use of DeepPavlov and tensorflow==2.4.1.
It seems that there is an incompatibility, because:
DeepPavlov can work only with numpy==1.18
tensorflow==2.4.1 needs at least numpy==1.19.2
If I downgrade TensorFlow, some of the instructions in the script fail; in particular, I cannot import "transpose_shape".
I may try to change the script, and use different functions.
However, I know for sure that this script has been successfully used by the person who gave it to me.
Is there a way around this apparent incompatibility?
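For reference, this is how I confirm which versions are actually installed in the environment running the script (nothing DeepPavlov-specific, just the standard version attributes):
# quick version check in the environment that runs the script
import numpy
import tensorflow as tf
print('numpy:', numpy.__version__)     # DeepPavlov reportedly needs 1.18
print('tensorflow:', tf.__version__)   # 2.4.1 requires numpy >= 1.19.2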
Hi all.
I recently started working with Jenkins, in an attempt to replace cron jobs with Jenkins pipelines. I have only a little knowledge of programming jargon; what I've learned comes from questions on Stack Overflow. So, if you need any more info, I would really appreciate it if you use plain English.
So, I installed the latest version of Jenkins with the suggested plugins, plus all the plugins that looked useful for running Python.
Afterwards, I searched Stack Overflow and other websites to make this work, but all I could get working was
#!/usr/bin/env python
from __future__ import print_function
print('Hello World')
And it succeeded.
Currently, Jenkins is running on Ubuntu 16.04, and I am using anaconda3's python (~/anaconda3/bin/python).
When I try to run a bit more complicated Python code (by that I mean import pandas), it gives me an import error.
What I have tried so far is
execute python script build: import pandas - import error
execute shell build: import pandas (import pandas added to the code that worked above)
python builder build: import pandas - invalid interpreter error
pipeline job: sh python /path_to_python_file/*.py - import error
All gave errors. Since 'hello world' works, I believe that using anaconda3's Python is not an issue. Also, it imported print_function just fine, so I want to know what I should do from here. Change a workspace setting? A working directory setting? Code changes?
Thanks.
Since 'hello world' works, I believe that using anaconda3's python is not an issue.
Your assumption is wrong.
There are multiple ways of solving the issue, but they all come down to using the correct Python interpreter, the one that has pandas installed. On Ubuntu you'll usually have at least two interpreters, one for Python 2 and one for Python 3, and you'll use them in the shell by calling either python path/to/myScript.py or python3 path/to/myScript.py. Here python and python3 are just labels that point to the correct executables via the PATH environment variable.
By installing anaconda3 you are adding one more interpreter, with pandas and plenty of other packages preinstalled. If you want to use it, you somehow need to tell your shell or Jenkins about it. If import pandas gives you an error, then you're probably using a different interpreter or a different Python environment (but that is out of scope here).
Coming back to your script
Following this Stack Overflow answer, you'll see that all the line #!/usr/bin/env python does is make sure you're using the first Python interpreter on your Ubuntu environment's PATH, which almost certainly isn't the one you installed with anaconda3. Most likely it will be the default Python 2 distributed with Ubuntu. If you want to check exactly which interpreter is running your script, instead of 'Hello World' put this inside:
#!/usr/bin/env python
import sys
print(sys.executable) # this line will give you the exact path to the interpreter
print(sys.version) # this one will give you the version
Ok, so what to do?
Well, run your script using the correct interpreter. Remove #!/usr/bin/env python from your file and, if you have a pipeline, add this there:
sh "/home/yourname/anaconda3/bin/python /path_to_python_file/myFile.py"
This will most likely solve the issue. It's also quite flexible, in the sense that if you ever want to use this Python file on a different machine, you won't have your username hardcoded inside it.
I'm working with R in Python using rpy2 on Windows 7.
I need to open some rasters as a RasterLayer using the function raster() from the raster package. I managed to install the package, but not to use its functions.
I install the packages that I need (rgdal, sp, raster, lidR, io) using:
utils.install_packages(StrVector(names_to_install))
names_to_install is a list of the packages that are still not installed. This works fine.
I know how to call the "basic" functions, like sum, and that works:
import rpy2.robjects as robjects
function_sum = robjects.r['sum']
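For example, calling it on a small vector behaves as expected (just to illustrate the kind of call that works for me):
# calling the looked-up base-R function on a small vector
result = function_sum(robjects.IntVector([1, 2, 3]))
print(result[0])  # 6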
But the same doesn't seem to work with the raster function from the raster package:
function_raster = robjects.r['raster']
since I get the error:
LookupError: 'raster' not found
I also tried the following:
raster_package = importr('raster')
with the intention of then being able to run the following and load my raster file:
raster_package.raster(my_raster_file)
but the importr('raster') line causes Python to crash and I get the error:
Process finished with exit code -1073741819 (0xC0000005)
This doesn't happen with other loaded packages like rgdal, but with the raster package and with the lidR package I get the error.
I looked up this error; it seems to be an access violation, but I don't know what I can do about it or why it only happens with certain packages.
I expect to be able to call the raster function from the package raster.
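For reference, the complete minimal sequence I'm trying looks roughly like this (my_raster_file is just a placeholder for my actual file path):
from rpy2.robjects.packages import importr
raster_package = importr('raster')  # this is the line that crashes Python on Windows 7
my_raster_file = 'path/to/my_raster.tif'  # placeholder for my actual file
raster_layer = raster_package.raster(my_raster_file)  # never reached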
Edit
I tried it on a computer with Windows 10 and the error no longer appears when running:
raster_package = importr('raster')
It would still be nice to know what the problem is with Windows 7 and whether there is any solution.
rpy2 does not currently have Windows support. This is not a final situation; most of what is likely needed is contributions to finalize this: https://github.com/rpy2/rpy2/blob/master/rpy2/rinterface_lib/embedded_mswin.py
I am using f2py to wrap my PETSc-based Fortran analysis code for use in OpenMDAO (as suggested in this post). Rather than use f2py directly, I'm instead using it to generate the relevant .c, .pyf, etc. files and then linking them myself using mpif90.
In a simple python environment, I can import my .so and run the code without any problems:
>>> import module_name
>>> module_name.execute()
expected code output...
However, when trying to do the same thing in an OpenMDAO component, I get the following error:
At line 72 of file driver.F90
Internal Error: list_formatted_write(): Bad type
This happens even when running in serial, and the error appears at the first place in the Fortran code where I use write(*,*). What could be different about running under OpenMDAO that might cause this issue? Might it have something to do with the need to pass a comm object, as mentioned in the answer to my original question? I am not doing that at the moment, as it was not clear to me from the relevant OpenMDAO example how that should be done in my case.
When I try to find specific information about the error I'm getting, search results almost always point to the mpif90 or gfortran libraries and possibly needing to recompile or update them. However, that doesn't explain why my analysis would work perfectly well in a simple Python script but not in OpenMDAO.
UPDATE: Per some others' suggestions, I've tried a few more things. Firstly, I get the error regardless of whether I'm running with mpiexec python <script> or merely python <script>. I do have the PETSc implementation set up, assuming that doesn't refer to anything beyond the if MPI block in this example.
In my standalone test, I am able to successfully import a handful of things, including
from mpi4py import MPI
from petsc4py import PETSc
from openmdao.core.system import System
from openmdao.core.component import Component
from openmdao.core.basic_impl import BasicImpl
from openmdao.core._checks import check_connections, _both_names
from openmdao.core.driver import Driver
from openmdao.core.mpi_wrap import MPI, under_mpirun, debug
from openmdao.components.indep_var_comp import IndepVarComp
from openmdao.solvers.ln_gauss_seidel import LinearGaussSeidel
from openmdao.units.units import get_conversion_tuple
from openmdao.util.string_util import get_common_ancestor, nearest_child, name_relative_to
from openmdao.util.options import OptionsDictionary
from openmdao.util.dict_util import _jac_to_flat_dict
There's not too much rhyme or reason to what I tested; I just went down a few random rabbit holes (more direction would be fantastic). Here are some of the things that do result in an error if they are imported in the same script:
from openmdao.core.group import Group
from openmdao.core.parallel_group import ParallelGroup
from openmdao.core.parallel_fd_group import ParallelFDGroup
from openmdao.core.relevance import Relevance
from openmdao.solvers.scipy_gmres import ScipyGMRES
from openmdao.solvers.ln_direct import DirectSolver
So it doesn't seem that the MPI imports are a problem? However, not knowing the OpenMDAO code too well, I am having trouble seeing the common thread in the problematic imports.
UPDATE 2: I should add that I'm becoming particularly suspicious of the networkx package. If my script is simply
import networkx as nx
import module_name
module_name.execute()
then I get the error. If I import my module before networkx, however (i.e. switch lines 1 and 2 in the above block), I don't get the error. More strangely, if I also import PETSc:
from petsc4py import PETSc
import networkx as nx
import module_name
module_name.execute()
Then everything works...
UPDATE 3: I'm running OS X El Capitan 10.11.6. I genuinely don't remember how I installed the Python 2.7 I was using (I need to use this rather than 3.x at the moment); it was installed years ago and was located in /usr/local/bin. However, I switched to an Anaconda installation, re-installed networkx, and still get the same error.
I've discovered that if I compile the f2py-wrapped stuff using gfortran (I assume this is what you guys do, yes?) rather than mpif90, I don't get the errors. Unfortunately, this causes the PETSc stuff in my Fortran code to yield some strange errors, probably because those .f90/.F90 files, according to the PETSc compilation rules, are compiled by mpif90 even if I force the final compile to use gfortran.
UPDATE 4: I was finally able to solve the Internal Error: list_formatted_write() issue. By using mpif90 --showme I could see what flags mpif90 is using (since it's essentially just gfortran plus some flags). It turns out that omitting the flag -Wl,-flat_namespace got rid of those print-related errors.
Now I can import most things and run my code without a problem, with one important exception. If I have a PETSc-based Fortran module (pc_fort_mod), then also importing PETSc into the Python environment, i.e.
from petsc4py import PETSc
import pc_fort_mod
pc_fort_mod.execute()
results in PETSc errors in the Fortran analysis (invalid matrices, unsuccessful preallocation). This seems reasonable to me, since both would appear to be attempting to use the same PETSc libraries. Any idea if there is a way to do this so that the pc_fort_mod PETSc and the petsc4py PETSc don't clash? I guess a workaround may be to have two PETSc builds...
SOLVED: I'm told that the problem described in Update 4 ultimately should not be a problem; it should be possible to use PETSc from Python and Fortran simultaneously. I was ultimately able to resolve my error by using a self-compiled PETSc build rather than the Homebrew recipe.
I've never quite seen anything like this before, and we've used networkx with compiled Fortran wrapped in f2py, running under MPI, many times.
I suggest that you remove and re-install your networkx package.
Which Python are you using and what OS are you running on? We've had very good luck running the Anaconda Python installation. You have to be a bit careful when installing PETSc, though. Building from source and running the PETSc tests is the safest way.