I call the R script from the Python script using subprocess.
The R script imports the keras library.
When the R script calls model2 <- keras_model_sequential(), I get the following error:
1 DLModeling(TrainX, TrainY)
2 keras_model_sequential()
3 keras$models
4 `$.python.builtin.module`(keras, "models")
5 py_resolve_module_proxy(x)
6 on_load()
7 check_implementation_version()
8 tf_version()
9 tf_config()
10 reticulate::py_has_attr(tf, "version")
11 py_resolve_module_proxy(x)
12 on_load()
13 emit("Loaded Tensorflow version ", tf$version$VERSION)
14 .makeMessage(..., domain = domain, appendLF = appendLF)
15 lapply(list(...), as.character)
16 tf$version
17 `$.python.builtin.module`(tf, "version")
18 `$.python.builtin.object`(x, name)
19 py_get_attr_or_item(x, name, TRUE)
20 py_get_attr(x, name)
21 py_get_attr_impl(x, name, silent)
22 stop(list("AttributeError: module 'tensorflow' has no attribute 'version'\n
How can I solve the "module 'tensorflow' has no attribute 'version'" error?
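For context, here is a minimal sketch of how the Python side might invoke the R script via subprocess; the script name "train_model.R" is hypothetical and stands in for the actual file, and capture_output assumes Python 3.7+:
import subprocess

# Hypothetical call; "train_model.R" stands in for the actual R script name
result = subprocess.run(
    ["Rscript", "train_model.R"],
    capture_output=True,  # requires Python 3.7+
    text=True,
)
print(result.stdout)
print(result.stderr)
One thing worth checking is whether this subprocess sees the same environment (e.g. PATH, or RETICULATE_PYTHON if you set it) as an interactive R session, since reticulate picks its Python and TensorFlow installation from the environment.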
Trying the following code in a Jupyter notebook (pandas and pyarrow are installed via pip):
import pandas as pd
parquet_file = r'C:\Users\Future\Desktop\userdata1.parquet'
df = pd.read_parquet(parquet_file, engine='auto')
print(df.head())
When I run the code in the Jupyter notebook, the kernel appears to have died. I restarted the kernel and tried again, but I got the same error.
I even tried putting the code in a .py file and running it from the terminal, but I didn't get any output.
The engine is 'auto', and I also tried the 'pyarrow' engine.
The link to the parquet file: https://github.com/Teradata/kylo/blob/master/samples/sample-data/parquet/userdata1.parquet
I have installed Python 3.8.6, pandas 1.1.4, and pyarrow 2.0.0, and when trying to run the code I encountered the following error:
** On entry to DGEBAL parameter number 3 had an illegal value
** On entry to DGEHRD parameter number 2 had an illegal value
** On entry to DORGHR DORGQR parameter number 2 had an illegal value
** On entry to DHSEQR parameter number 4 had an illegal value
Traceback (most recent call last):
File "demo.py", line 1, in <module>
import pandas as pd
File "C:\Users\Future\AppData\Local\Programs\Python\Python38\lib\site-packages\pandas\__init__.py", line 11, in <module>
__import__(dependency)
File "C:\Users\Future\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__init__.py", line 305, in <module>
_win_os_check()
File "C:\Users\Future\AppData\Local\Programs\Python\Python38\lib\site-packages\numpy\__init__.py", line 302, in _win_os_check
raise RuntimeError(msg.format(__file__)) from None
RuntimeError: The current Numpy installation ('C:\\Users\\Future\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\numpy\\__init__.py') fails to pass a sanity check due to a bug in the windows runtime. See this issue for more information: https://tinyurl.com/y3dm3h86
Running
import pandas as pd
parquet_file = r'userdata1.parquet'
df = pd.read_parquet(parquet_file, engine='auto')
print(df.head())
returns
registration_dttm id first_name last_name email \
0 2016-02-03 07:55:29 1 Amanda Jordan ajordan0#com.com
1 2016-02-03 17:04:03 2 Albert Freeman afreeman1#is.gd
2 2016-02-03 01:09:31 3 Evelyn Morgan emorgan2#altervista.org
3 2016-02-03 00:36:21 4 Denise Riley driley3#gmpg.org
4 2016-02-03 05:05:31 5 Carlos Burns cburns4#miitbeian.gov.cn
gender ip_address cc country birthdate \
0 Female 1.197.201.2 6759521864920116 Indonesia 3/8/1971
1 Male 218.111.175.34 Canada 1/16/1968
2 Female 7.161.136.94 6767119071901597 Russia 2/1/1960
3 Female 140.35.109.83 3576031598965625 China 4/8/1997
4 169.113.235.40 5602256255204850 South Africa
salary title comments
0 49756.53 Internal Auditor 1E+02
1 150280.17 Accountant IV
2 144972.51 Structural Engineer
3 90263.05 Senior Cost Accountant
4 NaN
using pyarrow 2.0.0 on Python 3.8.6 and pandas 1.1.4, with df.shape giving (1000, 13).
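For reference, here is a quick way to confirm which interpreter and library versions are in use, so they can be compared against a known-working setup; it uses only the standard __version__ attributes:
import sys
import numpy as np
import pandas as pd
import pyarrow

# Print interpreter and library versions to compare against a working environment
print(sys.version)
print("numpy", np.__version__)
print("pandas", pd.__version__)
print("pyarrow", pyarrow.__version__)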
I have been using Jupyter Notebook in Anaconda for my research work for a few months. For data preprocessing I import pandas every time. But a couple of days ago I suddenly started getting an ImportError, which I had never faced before.
import pandas as pd
from pandas import DataFrame
The error I am getting is as follows:
ImportError Traceback (most recent call last)
<ipython-input-5-7dd3504c366f> in <module>
----> 1 import pandas as pd
C:\ProgramData\Anaconda3\lib\site-packages\pandas\__init__.py in <module>
53 import pandas.core.config_init
54
---> 55 from pandas.core.api import (
56 # dtype
57 Int8Dtype,
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\api.py in <module>
13
14 from pandas.core.algorithms import factorize, unique, value_counts
---> 15 from pandas.core.arrays import Categorical
16 from pandas.core.arrays.boolean import BooleanDtype
17 from pandas.core.arrays.integer import (
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\__init__.py in <module>
5 try_cast_to_ea,
6 )
----> 7 from pandas.core.arrays.boolean import BooleanArray
8 from pandas.core.arrays.categorical import Categorical
9 from pandas.core.arrays.datetimes import DatetimeArray
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\arrays\boolean.py in <module>
26 from pandas.core.dtypes.missing import isna, notna
27
---> 28 from pandas.core import nanops, ops
29 from pandas.core.indexers import check_array_indexer
30
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\nanops.py in <module>
33 from pandas.core.dtypes.missing import isna, na_value_for_dtype, notna
34
---> 35 bn = import_optional_dependency("bottleneck", raise_on_missing=False, on_version="warn")
36 _BOTTLENECK_INSTALLED = bn is not None
37 _USE_BOTTLENECK = False
C:\ProgramData\Anaconda3\lib\site-packages\pandas\compat\_optional.py in import_optional_dependency(name, extra, raise_on_missing, on_version)
96 minimum_version = VERSIONS.get(name)
97 if minimum_version:
---> 98 version = _get_version(module)
99 if distutils.version.LooseVersion(version) < minimum_version:
100 assert on_version in {"warn", "raise", "ignore"}
C:\ProgramData\Anaconda3\lib\site-packages\pandas\compat\_optional.py in _get_version(module)
41
42 if version is None:
---> 43 raise ImportError(f"Can't determine version for {module.__name__}")
44 return version
45
ImportError: Can't determine version for bottleneck
I have never imported bottleneck for my work. There are other users who work on this same device, but I am not sure whether an update or change made by another user could cause this error. In any case, how can I get rid of it?
Edit:
When I run conda list bottleneck, it opens a text file named conda-script.py
with the following content:
import sys
# Before any more imports, leave cwd out of sys.path for internal 'conda shell.*' commands.
# see https://github.com/conda/conda/issues/6549
if len(sys.argv) > 1 and sys.argv[1].startswith('shell.') and sys.path and sys.path[0] == '':
    # The standard first entry in sys.path is an empty string,
    # and os.path.abspath('') expands to os.getcwd().
    del sys.path[0]

if __name__ == '__main__':
    from conda.cli import main
    sys.exit(main())
I encountered this issue. Here's what worked for me.
Update pandas:
conda update pandas
Remove and reinstall bottleneck:
conda remove bottleneck
conda install bottleneck
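After reinstalling, a quick sanity check (assuming bottleneck went into the active environment) is to confirm that a version string is now reported:
import bottleneck

# If this prints a version string, pandas' optional-dependency check should pass again
print(bottleneck.__version__)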
I encountered this after doing a conda update pandas, after which conda list bottleneck showed nothing, so I simply did conda install bottleneck.
(Not enough reputation to simply upvote nav's answer.)
Basically, I'm unable to get the value of a cell from an Excel file with openpyxl.
import openpyxl
book = openpyxl.load_workbook('Inputs.xlsx')
sheet = book.active
print(sheet['A2'])
The column 'A' has these values:
Velocidad
3
5
7
9
11
13
15
17
19
21
23
25
27
Instead of getting '3', I get Cell 'Hoja1'.A2, where Hoja1 is the name of the sheet.
Thank you!
sheet['A2'] is an object, namely an instance of the class Cell (docs).
A cell object has a value property, so instead of
print(sheet['A2'])
use
print(sheet['A2'].value)
Here's another usage example from the docs: https://openpyxl.readthedocs.io/en/stable/usage.html
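As a further sketch, assuming openpyxl 2.6 or newer, you can also read a whole column of values (rather than Cell objects) with values_only=True:
import openpyxl

book = openpyxl.load_workbook('Inputs.xlsx')
sheet = book.active

# values_only=True yields the stored values instead of Cell objects
for (value,) in sheet.iter_rows(min_col=1, max_col=1, values_only=True):
    print(value)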
I'm trying to create a Series in Pandas from a list of dates presented as strings, thus:
['2016-08-09',
'2015-08-03',
'2017-08-15',
'2017-12-14',
...
but when I apply pd.Series to it, the result displayed in the Jupyter notebook is:
0 [[[2016-08-09]]]
1 [[[2015-08-03]]]
2 [[[2017-08-15]]]
3 [[[2017-12-14]]]
...
Is there a simple way to fix it? The data came from an XML feed parsed using lxml.objectify.
I don't normally get these problems when reading from CSV, and I'm just curious what I might be doing wrong.
UPDATE:
The code to grab the data and an example site:
import lxml.objectify
import pandas as pd
def parse_sitemap(url):
    root = lxml.objectify.parse(url)
    rooted = root.getroot()
    output_1 = [child.getchildren()[0] for child in rooted.getchildren()]
    output_0 = [child.getchildren()[1] for child in rooted.getchildren()]
    return output_1
results = parse_sitemap("sitemap.xml")
pd.Series(results)
If you print out type(results[0]), you'll understand that it's not a string you're getting:
print(type(results[0]))
Output:
lxml.objectify.StringElement
This is not a string, and pandas doesn't seem to be playing nice with it. But the fix is easy. Just convert to string using pd.Series.astype:
s = pd.Series(results).astype(str)
print(s)
0 2017-08-09T11:20:38Z
1 2017-08-09T11:10:55Z
2 2017-08-09T15:36:20Z
3 2017-08-09T16:36:59Z
4 2017-08-02T09:56:50Z
5 2017-08-02T19:33:31Z
6 2017-08-03T07:32:24Z
7 2017-08-03T07:35:35Z
8 2017-08-03T07:54:12Z
9 2017-07-31T16:38:34Z
10 2017-07-31T15:42:24Z
11 2017-07-31T15:44:56Z
12 2017-07-31T15:23:25Z
13 2017-08-01T08:30:27Z
14 2017-08-01T11:01:57Z
15 2017-08-03T13:52:39Z
16 2017-08-03T14:29:55Z
17 2017-08-03T13:39:24Z
18 2017-08-03T13:39:00Z
19 2017-08-03T15:30:58Z
20 2017-08-06T11:29:24Z
21 2017-08-03T10:19:43Z
22 2017-08-14T18:42:49Z
23 2017-08-15T15:42:04Z
24 2017-08-17T08:58:19Z
25 2017-08-18T13:37:52Z
26 2017-08-18T13:38:14Z
27 2017-08-18T13:45:42Z
28 2017-08-03T09:56:42Z
29 2017-08-01T11:01:22Z
dtype: object
I think all you need to do is:
pd.Series(dates)
but there's not enough info in the question to say for sure.
Additionally, if you want to use datetime64 objects, you can do:
pd.Series(pd.to_datetime(dates))
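For example, with a small hand-made list in the same format as the question (if the items are lxml StringElements rather than plain strings, convert with astype(str) first, as in the other answer):
import pandas as pd

# Hypothetical sample in the same format as the question
dates = ['2016-08-09', '2015-08-03', '2017-08-15', '2017-12-14']

s = pd.Series(pd.to_datetime(dates))
print(s.dtype)  # datetime64[ns]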
I am trying to use GraphLab Create with Enthought Canopy IPython, but I'm getting an ImportError that seems to be related to Unicode. The line is:
ImportError: /home/aaron/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/graphlab/cython/cy_ipc.so: undefined symbol: PyUnicodeUCS4_DecodeUTF8
and this is preceded by:
In [1]: import graphlab
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-4b66ad388e97> in <module>()
----> 1 import graphlab
/home/aaron/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/graphlab/__init__.py in <module>()
5 """
6
----> 7 import graphlab.connect.aws as aws
8
9 import graphlab.deploy
/home/aaron/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/graphlab/connect/aws/__init__.py in <module>()
3 This module defines classes and global functions for interacting with Amazon Web Services.
4 """
----> 5 from _ec2 import get_credentials, launch_EC2, list_instances, set_credentials, status, terminate_EC2
/home/aaron/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/graphlab/connect/aws/_ec2.py in <module>()
15
16 import graphlab.product_key
---> 17 import graphlab.connect.server as glserver
18 import graphlab.connect.main as glconnect
19 from graphlab.connect.main import __catch_and_log__
/home/aaron/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/graphlab/connect/server.py in <module>()
4 """
5
----> 6 from graphlab.cython.cy_ipc import PyCommClient as Client
7 from graphlab.cython.cy_ipc import get_public_secret_key_pair
8 from graphlab_util.config import DEFAULT_CONFIG as default_local_conf
The GraphLab forum (http://forum.graphlab.com/discussion/84/importerror-undefined-symbol-pyunicodeucs4-decodeutf8) suggests that this is due to Enthought Python being compiled with 2-byte-wide Unicode chars. Is there a way to get Enthought to use 4-byte chars, since I can't recompile?
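As a quick diagnostic (for Python 2, which is what Canopy ships here), you can check whether an interpreter is a narrow or wide Unicode build by looking at sys.maxunicode:
import sys

# On Python 2: 65535 means a narrow (UCS-2) build, 1114111 means a wide (UCS-4) build
print(sys.maxunicode)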