I ran this simple code on Google Colab:
###cell 1 : `!pip install syft`
###cell 2 : `import syft as sy`
and I got this error:
ModuleNotFoundError: No module named 'syft_proto.messaging.v1.protocol_pb2'
Here is the full error message:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-3-9aeadc8ee442> in <module>()
----> 1 import syft as sy
7 frames
/usr/local/lib/python3.6/dist-packages/syft/__init__.py in <module>()
41
42 # Import grids
---> 43 from syft.grid.private_grid import PrivateGridNetwork
44 from syft.grid.public_grid import PublicGridNetwork
45
/usr/local/lib/python3.6/dist-packages/syft/grid/private_grid.py in <module>()
9 # Syft imports
10 from syft.grid.abstract_grid import AbstractGrid
---> 11 from syft.workers.node_client import NodeClient
12 from syft.messaging.plan.plan import Plan
13 from syft.frameworks.torch.tensors.interpreters.additive_shared import AdditiveSharingTensor
/usr/local/lib/python3.6/dist-packages/syft/workers/node_client.py in <module>()
5
6 # Syft imports
----> 7 from syft.serde import serialize
8 from syft.messaging.plan import Plan
9 from syft.codes import REQUEST_MSG, RESPONSE_MSG
/usr/local/lib/python3.6/dist-packages/syft/serde/__init__.py in <module>()
----> 1 from syft.serde.serde import *
/usr/local/lib/python3.6/dist-packages/syft/serde/serde.py in <module>()
10 from syft.workers.abstract import AbstractWorker
11
---> 12 from syft.serde import msgpack
13
14 ## SECTION: High Level Public Functions (these are the ones you use)
/usr/local/lib/python3.6/dist-packages/syft/serde/msgpack/__init__.py in <module>()
----> 1 from syft.serde.msgpack import serde
2 from syft.serde.msgpack import native_serde
3 from syft.serde.msgpack import torch_serde
4 from syft.serde.msgpack import proto
5
/usr/local/lib/python3.6/dist-packages/syft/serde/msgpack/serde.py in <module>()
57 from syft.messaging.plan import Plan
58 from syft.messaging.plan.state import State
---> 59 from syft.messaging.protocol import Protocol
60 from syft.messaging.message import Message
61 from syft.messaging.message import Operation
/usr/local/lib/python3.6/dist-packages/syft/messaging/protocol.py in <module>()
11 from syft.workers.abstract import AbstractWorker
12 from syft.workers.base import BaseWorker
---> 13 from syft_proto.messaging.v1.protocol_pb2 import Protocol as ProtocolPB
14
15
ModuleNotFoundError: No module named 'syft_proto.messaging.v1.protocol_pb2'
I hope that you can help me. Thank you.
Thank you for the answers. It works when I downgrade these two packages; it is a temporary problem on the PySyft developers' side.
!pip install syft=="0.2.2a1"
!pip install syft_proto=="0.1.1a1.post17"
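A quick way to confirm the pinned versions actually took effect (a small sketch using pkg_resources, which ships with pip; restart the Colab runtime before re-importing syft so the old modules are not still cached):
import pkg_resources
# "syft-proto" is the PyPI distribution name behind the syft_proto module.
print(pkg_resources.get_distribution("syft").version)        # expected: 0.2.2a1
print(pkg_resources.get_distribution("syft-proto").version)  # expected: 0.1.1a1.post17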
It looks like the module you are trying to use is either deprecated or has a new version. See here: the file is not active.
Try searching for a newer version of this protocol, or a similar protocol that can cater to your needs.
Edit:
It may also be that you are using an older version of Syft, so I recommend upgrading your pip and Syft. Follow the instructions here.
You can upgrade both to their latest versions, which are kept compatible:
!pip install --upgrade syft
!pip install --upgrade syft_proto
I am trying to do a regular import in Google Colab.
This import worked up until now.
If I try:
import plotly.express as px
or
import pingouin as pg
I get an error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-19-86e89bd44552> in <module>()
----> 1 import plotly.express as px
9 frames
/usr/local/lib/python3.7/dist-packages/plotly/express/__init__.py in <module>()
13 )
14
---> 15 from ._imshow import imshow
16 from ._chart_types import ( # noqa: F401
17 scatter,
/usr/local/lib/python3.7/dist-packages/plotly/express/_imshow.py in <module>()
9
10 try:
---> 11 import xarray
12
13 xarray_imported = True
/usr/local/lib/python3.7/dist-packages/xarray/__init__.py in <module>()
1 import pkg_resources
2
----> 3 from . import testing, tutorial, ufuncs
4 from .backends.api import (
5 load_dataarray,
/usr/local/lib/python3.7/dist-packages/xarray/tutorial.py in <module>()
11 import numpy as np
12
---> 13 from .backends.api import open_dataset as _open_dataset
14 from .backends.rasterio_ import open_rasterio as _open_rasterio
15 from .core.dataarray import DataArray
/usr/local/lib/python3.7/dist-packages/xarray/backends/__init__.py in <module>()
4 formats. They should not be used directly, but rather through Dataset objects.
5
----> 6 from .cfgrib_ import CfGribDataStore
7 from .common import AbstractDataStore, BackendArray, BackendEntrypoint
8 from .file_manager import CachingFileManager, DummyFileManager, FileManager
/usr/local/lib/python3.7/dist-packages/xarray/backends/cfgrib_.py in <module>()
14 _normalize_path,
15 )
---> 16 from .locks import SerializableLock, ensure_lock
17 from .store import StoreBackendEntrypoint
18
/usr/local/lib/python3.7/dist-packages/xarray/backends/locks.py in <module>()
11
12 try:
---> 13 from dask.distributed import Lock as DistributedLock
14 except ImportError:
15 DistributedLock = None
/usr/local/lib/python3.7/dist-packages/dask/distributed.py in <module>()
1 # flake8: noqa
2 try:
----> 3 from distributed import *
4 except ImportError:
5 msg = (
/usr/local/lib/python3.7/dist-packages/distributed/__init__.py in <module>()
1 from __future__ import print_function, division, absolute_import
2
----> 3 from . import config
4 from dask.config import config
5 from .actor import Actor, ActorFuture
/usr/local/lib/python3.7/dist-packages/distributed/config.py in <module>()
18
19 with open(fn) as f:
---> 20 defaults = yaml.load(f)
21
22 dask.config.update_defaults(defaults)
TypeError: load() missing 1 required positional argument: 'Loader'
I think it might be a problem with Google Colab or some basic utility package that has been updated, but I cannot find a way to solve it.
The load() function now requires the Loader argument.
If your YAML file contains just simple YAML (str, int, lists), try using yaml.safe_load() instead of yaml.load().
If you need FullLoader, you can use yaml.full_load().
Starting from pyyaml>=5.4, there are no known critical vulnerabilities (see the pyyaml status).
source: https://stackoverflow.com/a/1774043/13755823
yaml.safe_load() should always be preferred, in order to avoid introducing the possibility of arbitrary code execution, unless you explicitly need the arbitrary object serialization/deserialization that yaml.load() provides.
More about yaml.load(input) here.
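A minimal sketch of the two safe alternatives (the file name config.yml is just an example):
import yaml

# Safe for plain YAML (strings, numbers, lists, dicts); never constructs arbitrary Python objects.
with open("config.yml") as ymlfile:
    config = yaml.safe_load(ymlfile)

# Equivalent to yaml.load(ymlfile, Loader=yaml.FullLoader): resolves all standard tags,
# but still refuses arbitrary Python object construction.
with open("config.yml") as ymlfile:
    config = yaml.full_load(ymlfile)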
Found the problem.
I was installing pandas_profiling, and this package updated pyyaml to version 6.0, which is not compatible with the current way Google Colab imports packages.
So reverting to pyyaml version 5.4.1 solved the problem.
For more information, check the available versions of pyyaml here.
See this issue and the formal answers on GitHub.
To revert to pyyaml version 5.4.1 in your code, add the following line at the end of your package installations:
!pip install pyyaml==5.4.1
It is important to put it at the end of the installations, since some of them will change the pyyaml version.
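After the install, it may help to verify that the revert took effect (in Colab you typically need to restart the runtime first, since the previously imported module stays cached):
import yaml
print(yaml.__version__)  # should print 5.4.1 after the runtime restart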
This worked for me:
config = yaml.load(ymlfile, Loader=yaml.Loader)
The Python "TypeError: load() missing 1 required positional argument: 'Loader'" occurs when we use the yaml.load() method without specifying the Loader keyword argument.
To solve the error, use the yaml.full_load() method instead, or explicitly set the Loader keyword argument.
config = yaml.full_load(ymlfile)
or
config = yaml.load(ymlfile, Loader=yaml.FullLoader)
I'm trying to set up GluonCV in a Jupyter notebook in a virtual environment. For some reason, whenever I try to import GluonCV I get this error:
ImportError Traceback (most recent call last)
<ipython-input-2-9a2bc396118f> in <module>
----> 1 import gluoncv
~\anaconda3\envs\mxnet\lib\site-packages\gluoncv\__init__.py in <module>
10 _require_mxnet_version('1.4.0', '2.0.0')
11
---> 12 from . import data
13 from . import model_zoo
14 from . import nn
~\anaconda3\envs\mxnet\lib\site-packages\gluoncv\data\__init__.py in <module>
29 from .sampler import SplitSampler, ShuffleSplitSampler
30 from .otb.tracking import OTBTracking
---> 31 from .kitti.kitti_dataset import KITTIRAWDataset, KITTIOdomDataset
32
33 datasets = {
~\anaconda3\envs\mxnet\lib\site-packages\gluoncv\data\kitti\__init__.py in <module>
1 # pylint: disable=missing-module-docstring
----> 2 from .kitti_dataset import *
3 from .kitti_utils import *
~\anaconda3\envs\mxnet\lib\site-packages\gluoncv\data\kitti\kitti_dataset.py in <module>
19
20 from ...utils.filesystem import try_import_skimage
---> 21 from .kitti_utils import generate_depth_map
22 from .mono_dataset import MonoDataset
23
~\anaconda3\envs\mxnet\lib\site-packages\gluoncv\data\kitti\kitti_utils.py in <module>
10
11 import mxnet as mx
---> 12 from mxnet.util import is_np_array
13
14
ImportError: cannot import name 'is_np_array'
I've tried using the same files that work on Google Colaboratory, but I still get that error. I've tried reinstalling gluon and the related packages in every way I could think of, and I have no idea what's going on. I really need this to work.
I resolved this error by installing compatible versions of mxnet and gluoncv. In my case I had installed the native build of mxnet together with gluoncv, and that resolved the error.
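If it helps, here is a small check of what is actually installed before picking a pairing (a sketch; gluoncv itself cannot be imported yet, so its version is read from the package metadata instead):
import mxnet
import pkg_resources

# mxnet imports fine (the failure happens inside gluoncv), so __version__ is usable.
print("mxnet  :", mxnet.__version__)
# Query the metadata rather than importing gluoncv, since the import is what breaks.
print("gluoncv:", pkg_resources.get_distribution("gluoncv").version)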
My code:
!pip install stldecompose
from stldecompose import decompose
Error message:
ImportError Traceback (most recent call last)
in
2 # Install the library via PIP
3 get_ipython().system('pip install stldecompose')
----> 4 from stldecompose import decompose, forecast
~/opt/anaconda3/lib/python3.7/site-packages/stldecompose/__init__.py in <module>
----> 1 from .stl import decompose, forecast
~/opt/anaconda3/lib/python3.7/site-packages/stldecompose/stl.py in <module>
3 from pandas.core.nanops import nanmean as pd_nanmean
4 from statsmodels.tsa.seasonal import DecomposeResult
----> 5 from statsmodels.tsa.filters._utils import _maybe_get_pandas_wrapper_freq
6 import statsmodels.api as sm
7
ImportError: cannot import name '_maybe_get_pandas_wrapper_freq' from 'statsmodels.tsa.filters._utils' (/Users/georgeng/opt/anaconda3/lib/python3.7/site-packages/statsmodels/tsa/filters/_utils.py)
You have two ways to go about this. The root cause is that the statsmodels.tsa.filters._utils module was removed in statsmodels==0.11.0, which is what breaks stldecompose.
Either use statsmodels.tsa.seasonal.STL, which gives similar functionality; see its documentation (and the sketch after these options):
https://www.statsmodels.org/stable/generated/statsmodels.tsa.seasonal.STL.html#statsmodels.tsa.seasonal.STL
Or downgrade with pip install statsmodels==0.10.2
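If you go the STL route, here is a minimal sketch of the replacement (the monthly series and period=12 are illustrative):
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Illustrative monthly series: trend + yearly seasonality + noise.
idx = pd.date_range("2015-01-01", periods=60, freq="M")
y = pd.Series(np.arange(60) + 10 * np.sin(np.arange(60) * 2 * np.pi / 12) + np.random.randn(60), index=idx)

# Roughly what stldecompose.decompose(y) used to give you.
res = STL(y, period=12).fit()
trend, seasonal, resid = res.trend, res.seasonal, res.resid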
ImportError Traceback (most recent call last)
<ipython-input-1-76a01d9c502b> in <module>
----> 1 import spacy
~\Anaconda3\envs\nlp_course\lib\site-packages\spacy\__init__.py in <module>
8 from thinc.neural.util import prefer_gpu, require_gpu
9
---> 10 from .cli.info import info as cli_info
11 from .glossary import explain
12 from .about import __version__
~\Anaconda3\envs\nlp_course\lib\site-packages\spacy\cli\__init__.py in <module>
----> 1 from .download import download
2 from .info import info
3 from .link import link
4 from .package import package
5 from .profile import profile
~\Anaconda3\envs\nlp_course\lib\site-packages\spacy\cli\download.py in <module>
9
10 from ._messages import Messages
---> 11 from .link import link
12 from ..util import prints, get_package_path
13 from .. import about
~\Anaconda3\envs\nlp_course\lib\site-packages\spacy\cli\link.py in <module>
7 from ._messages import Messages
8 from ..compat import symlink_to, path2str
----> 9 from ..util import prints
10 from .. import util
11
~\Anaconda3\envs\nlp_course\lib\site-packages\spacy\util.py in <module>
25 # Import these directly from Thinc, so that we're sure we always have the
26 # same version.
---> 27 from thinc.neural._classes.model import msgpack
28 from thinc.neural._classes.model import msgpack_numpy
29
ImportError: cannot import name 'msgpack' from 'thinc.neural._classes.model' (C:\Users\salwa\Anaconda3\envs\nlp_course\lib\site-packages\thinc\neural\_classes\model.py)
The problem is with thinc, a dependency of spaCy, as you can see here: ImportError: cannot import name 'msgpack' from 'thinc.neural._classes.model'
Follow the suggestion of Ines (a core developer of spaCy), which you can find here:
It looks like you might have ended up with conflicting installations
and dependencies – for example, the latest version of spaCy, but an
older version of its dependency, Thinc. In cases like this, it often
helps to just start out with a clean environment and reinstall from
scratch.
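A minimal sketch of the clean-environment route (the environment name and Python version are just examples):
conda create -n nlp_course_clean python=3.7 -y
conda activate nlp_course_clean
pip install -U spacy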
I just installed Python 3.5 on Windows 10 and was trying to run the startup example from the IDE I chose (Rodeo). The example gives an error when trying to import ggplot, specifically at this call:
from ggplot import ggplot, aes, geom_bar
which gives me:
ImportErrorTraceback (most recent call last)
in ()
----> 1 from ggplot import ggplot, aes, geom_bar
C:\Anaconda3\lib\site-packages\ggplot\__init__.py in ()
17
18
---> 19 from .geoms import geom_area, geom_blank, geom_boxplot, geom_line, geom_point, geom_jitter, geom_histogram, geom_density, geom_hline, geom_vline, geom_bar, geom_abline, geom_tile, geom_rect, geom_bin2d, geom_step, geom_text, geom_path, geom_ribbon, geom_now_its_art, geom_violin, geom_errorbar, geom_polygon
20 from .stats import stat_smooth, stat_density
21
C:\Anaconda3\lib\site-packages\ggplot\geoms\__init__.py in ()
----> 1 from .geom_abline import geom_abline
2 from .geom_area import geom_area
3 from .geom_bar import geom_bar
4 from .geom_bin2d import geom_bin2d
5 from .geom_blank import geom_blank
C:\Anaconda3\lib\site-packages\ggplot\geoms\geom_abline.py in ()
----> 1 from .geom import geom
2
3 class geom_abline(geom):
4 """
5 Line specified by slope and intercept
C:\Anaconda3\lib\site-packages\ggplot\geoms\geom.py in ()
1 from __future__ import (absolute_import, division, print_function,
2 unicode_literals)
----> 3 from ..ggplot import ggplot
4 from ..aes import aes
5
C:\Anaconda3\lib\site-packages\ggplot\ggplot.py in ()
19 from . import discretemappers
20 from .utils import format_ticks
---> 21 import StringIO
22 import urllib
23 import base64
ImportError: No module named 'StringIO'
So StringIO cannot be imported. I read here that StringIO doesn't exist in that form anymore, but the fixes over there did not help me out. Any tips? What might be relevant (although I cannot judge that) is that I'm unable to update SciPy or ggplot via pip install ggplot --upgrade, but I thought/read somewhere that this happens because I do not have a built-in compiler on my Windows machine. Many thanks in advance!
Well, I found a solution myself. pip install ggplot --upgrade failed for me, but
conda install -c conda-forge ggplot did the trick.
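For background on the original error: Python 3 removed the top-level StringIO module, and the class now lives in the io module, which is why that old ggplot release fails under Python 3.5. A minimal illustration:
# Python 2 had `import StringIO`; in Python 3 the class moved into io.
from io import StringIO

buf = StringIO()
buf.write("hello ggplot")
print(buf.getvalue())  # -> hello ggplot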