I will try to be as much help as I can, but this is certainly a bit out of my depth.
I am trying to run the metagenomics package 'DeepVirFinder' on my FASTA file 'my_seqs.fa' in the terminal on my Mac. I have followed the instructions in the GitHub repository (https://github.com/jessieren/DeepVirFinder) and created a conda environment with all the necessary packages.
I have entered the following into my terminal:
python dvf.py -i ~/Documents/PairwiseANI/my_seqs.fna -o ~/Documents/DeepVirFinder/ -l 1000 -c 2
This produces the following error output:
Using Theano backend.
1. Loading Models.
model directory /data2/joshcole/DeepVirFinder/models
Traceback (most recent call last):
File "dvf.py", line 131, in <module>
modDict[contigLengthk] = load_model(os.path.join(modDir, modName))
File "/home/ggb_joshcole/miniconda3/envs/dvf/lib/python3.6/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/home/ggb_joshcole/miniconda3/envs/dvf/lib/python3.6/site-packages/keras/engine/saving.py", line 224, in _deserialize_model
model_config = json.loads(model_config.decode('utf-8'))
AttributeError: 'str' object has no attribute 'decode'
According to the GitHub repository, a successful run (using the example file names) should return the following:
Using Theano backend.
1. Loading Models.
model directory /auto/cmb-panasas2/renj/software/DeepVirFinder/models
2. Encoding and Predicting Sequences.
processing line 1
processing line 1389
3. Done. Thank you for using DeepVirFinder.
output in ./test/crAssphage.fa_gt300bp_dvfpred.txt
Any help with fixing this error would be greatly appreciated. I have tried downloading and experimenting with potential conda fixes, but it does not appear to be a problem with any dependencies, and Python is fully up to date.
Thank you for reading
Apologies - I found out it was an incompatibility between h5py and TensorFlow. I had to downgrade h5py to 2.10.0.
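In case it is useful to anyone hitting the same traceback, here is a minimal sketch of that downgrade inside the conda environment (assuming the environment is named dvf, as the site-packages path in the traceback suggests):
conda activate dvf
pip install h5py==2.10.0
# or, staying within conda: conda install h5py=2.10.0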
For context, I am running an Apple Silicon Mac and have used the Rosetta terminal plus Miniconda to create a virtual environment running Python 3.7.
Here is the code I am trying to run.
from pixellib.instance import instance_segmentation
segment_image = instance_segmentation()
segment_image.load_model("mask_rcnn_coco.h5")
And the error is below. I think it may be due to issues with GPU access, but I cannot be sure. I have been working on it for a few days.
If using Keras pass *_constraint arguments to layers.
Traceback (most recent call last):
File "/Users/USERNAME/PycharmProjects/test/main.py", line 16, in <module>
segment_image.load_model("mask_rcnn_coco.h5")
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/pixellib/instance/__init__.py", line 65, in load_model
self.model.load_weights(model_path, by_name= True)
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/pixellib/instance/mask_rcnn.py", line 2110, in load_weights
hdf5_format.load_weights_from_hdf5_group_by_name(f, layers)
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 718, in load_weights_from_hdf5_group_by_name
original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'
I encountered a similar issue when using tf.keras.models.load_weights(). I downgraded h5py from 2.10 to 2.8.0 with TensorFlow 2.0.0 and then it worked, so maybe you can give that a try.
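For reference, a rough sketch of that downgrade plus a quick check of the installed version (treat the exact pin as an assumption; the right version depends on your TensorFlow build):
pip install h5py==2.8.0
# confirm which version ended up installed
python -c "import h5py; print(h5py.__version__)"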
I am using TensorFlow 1.7 with Python 3.6.5 on a Mac with High Sierra.
I have trained my first MNIST model, so I basically have
a graph.pbtxt file with the CNN graph structure
some model.ckpt-21000 files (.meta, .index .data)
I tried to freeze the graph using the command-line freeze_graph command in bash:
freeze_graph \
--input_graph=/…/graph.pbtxt \
--input_checkpoint=/…/model.ckpt-21000 \
--input_binary=false \
--output_graph=/…/frozen_mnist.pb \
--output_node_names=softmax_tensor
But I got this error:
Traceback (most recent call last):
File "/usr/local/bin/freeze_graph", line 11, in <module>
sys.exit(main())
TypeError: main() missing 1 required positional argument: 'unused_args'
I am not really sure what I am missing there.
I am quite sure I am using the correct syntax.
I have found a workaround to freeze my graph.
I am posting it here so if anyone encounters the same issue, they can use this.
Instead of
freeze_graph \
--input_graph=/…/graph.pbtxt \
--input_checkpoint=/…/model.ckpt-21000 \
--input_binary=false \
--output_graph=/…/frozen_mnist.pb \
--output_node_names=softmax_tensor
Use
python3 -m tensorflow.python.tools.freeze_graph \
--input_graph=/…/graph.pbtxt \
--input_checkpoint=/…/model.ckpt-21000 \
--input_binary=false \
--output_graph=/…/frozen_mnist.pb \
--output_node_names=softmax_tensor
So basically instead of the command freeze_graph I just used python3 -m tensorflow.python.tools.freeze_graph.
Still, I would really like to understand why the command-line version did not work for me :(
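One guess (not something I have verified) is that the freeze_graph script on your PATH is an outdated console-script wrapper that calls main() with no arguments, while the installed module expects the unused_args parameter, as the traceback shows. You could inspect the wrapper to see which entry point it actually calls:
# show which wrapper the shell resolves, then print its first lines
which freeze_graph
head -n 20 "$(which freeze_graph)"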
I am trying to train a pretrained "faster_rcnn_resnet101_kitti" model with the TensorFlow Object Detection API.
But every time I try to run
python3 train.py --logtostderr --train_dir='/training/' --pipeline_config_path='/training/faster_rcnn_resnet101_kitti.config'
I receive the following error
Traceback (most recent call last):
File "train.py", line 167, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist- packages/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "train.py", line 163, in main
worker_job_name, is_chief, FLAGS.train_dir)
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/trainer.py", line 211, in train
detection_model = create_model_fn()
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/builders/model_builder.py", line 96, in build
add_summaries)
File "/usr/local/lib/python3.5/dist-packages/object_detection-0.1-py3.5.egg/object_detection/builders/model_builder.py", line 272, in _build_faster_rcnn_model
frcnn_config.inplace_batchnorm_update)
AttributeError: 'FasterRcnn' object has no attribute 'inplace_batchnorm_update'
I had this error too, and for me it was because I had not re-compiled my .proto files after pulling the latest updates from the TF models repository.
To recompile (on Linux):
# From tensorflow/models/research/ folder
protoc object_detection/protos/*.proto --python_out=.
I assume that the failing code tries to read the attribute/field inplace_batchnorm_update from the Faster R-CNN config, which presumably does not exist in the older generated files. I hope this helps you too.
My versions are tensorflow-gpu 1.7.0, with the TF models repository at commit 77d3bbefeb33e89bfa1eee707151e5d794d1222b ("Merge pull request #3888 from hsm207/patch-3 Fix typo").
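If you want to confirm that the recompile actually picked up the new field (my own suggestion, not part of the original fix), you can grep the regenerated _pb2 files:
# From tensorflow/models/research/ folder, after running protoc
grep -l "inplace_batchnorm_update" object_detection/protos/*_pb2.py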
Recompiling on Windows
I know from my own experience that, compared to Windows, compiling many files like this is an easy one-liner on Linux. For Windows, here is something to make the process less cumbersome:
In this issue, davemers0160 has shared a script for compiling on Windows.
Just save the following as a .bat file:
@echo off
setlocal
echo Searching for new .proto files...
for %%F in (object_detection\protos\*.proto) do (
echo %%F
protoc %%F --python_out=.
)
echo Complete!
Run that file from the same folder mentioned above. As the question was about Linux, I have just added this at the bottom in case a Windows user comes along and reads this too.
I had the same error after I updated the models repository.
I re-compiled the .proto files, but the error persisted.
According to the log:
File "/home/duane/anaconda3/lib/python3.6/site-packages/object_detection-0.1-py3.6.egg/object_detection/builders/model_builder.py", line 164, in _build_ssd_model
inplace_batchnorm_update=ssd_config.inplace_batchnorm_update)
I think it may have been caused by the installed object_detection-0.1-py3.6.egg being too old, so I re-installed it from models/research/setup.py:
# From /models/research/
python setup.py build
python setup.py install
Then there was no error.
NOTE: I did re-compile the .proto files before re-installing with setup.py.
For more details, see #3968.
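If you want to double-check that the re-installed package exposes the field, a quick (hypothetical) check from the command line is to list the fields of the generated message, for example the FasterRcnn config from the original error:
python -c "from object_detection.protos import faster_rcnn_pb2; print('inplace_batchnorm_update' in [f.name for f in faster_rcnn_pb2.FasterRcnn.DESCRIPTOR.fields])"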
Hope this can help you.
When I run my .py program, it works the way I intended. If I build an executable using PyInstaller on a Linux box, it builds and executes without issue. I have scoured the PyInstaller docs, GitHub issues, etc., and none of the posted fixes helped.
I am still very new to Python and feel like it might be a simple fix and that I might be overthinking the issue.
Why can I not build a functional .exe on a Windows-based system using PyInstaller?
Windows 10 system
Pyinstaller version 3.2
Python version 3.5.2
This is a GUI program using appJar which is also up to date.
The file does build, but it errors with "Could not execute script".
EDIT
Not sure if it is best to edit inline like this, but...
After studying the output and making adjustments, the issue seems to be appJar.py. For some reason it is missing assets; I am looking into it. The trouble is that I am still not used to reading this kind of output and am not sure where to start.
C:\Users\_User_>C:\temp\fileCreatorGUI\fileCreatorGUI.exe
Traceback (most recent call last):
File "F:\Users\_User_\python_working\fileCreatorGUI.py", line 73, in <module>
app = gui()
File "C:\Users\_User_\AppData\Local\Programs\Python\Python35\lib\site-packages\appJar\appjar.py", line 509, in __init__
self.topLevel.wm_iconbitmap(self.appJarIcon)
File "C:\Users\_User_\AppData\Local\Programs\Python\Python35\lib\tkinter\__init__.py", line 1716, in wm_iconbitmap
return self.tk.call('wm', 'iconbitmap', self._w, bitmap)
_tkinter.TclError: bitmap "C:\temp\fileCreatorGUI\appJar\resources\icons\favicon.ico" not defined
Failed to execute script fileCreatorGUI
Edit 2
See the answer below, but I was barking up the wrong tree on this one.
The PyInstaller output chokes on the .dlls:
api-ms-win-core-console-l1-1-0.dll
api-ms-win-core-datetime-l1-1-0.dll
(There are like ~40 of these)
I added those .dlls to the Python path and declared them in the binaries section of the .spec file.
Here is a truncated log:
2414 WARNING: Can not get binary dependencies for file: C:\Windows\system32\api-ms-win-crt-stdio-l1-1-0.dll
Traceback (most recent call last):
File "C:\Users\_USER_NAME\AppData\Local\Programs\Python\Python35-32\lib\site-packages\PyInstaller\depend\bindepend.py", line 695, in getImports
return _getImports_pe(pth)
File "C:\Users\_USER_NAME\AppData\Local\Programs\Python\Python35-32\lib\site-packages\PyInstaller\depend\bindepend.py", line 122, in _getImports_pe
dll, _ = sym.forwarder.split('.')
TypeError: a bytes-like object is required, not 'str'
2423 WARNING: Can not get binary dependencies for file: C:\Windows\system32\api-ms-win-crt-heap-l1-1-0.dll
I tried the fix listed here:
https://github.com/pyinstaller/pyinstaller/pull/1981
but it did not seem to make a difference.
Someone recommended the sys.path.insert() route, but it did not make a difference either.
I also tried this in a VM with a clean install of Windows 7; no change. My next step is to try using Wine on Debian, but I don't really want to go that route. Any help would be appreciated. Thank you.
It turns out this was an appJar/packaging issue: PyInstaller was not looking in the correct directory for the assets. Per the appJar developer, I commented out two lines of code in appJar.py, lines 508-509:
if self.platform == self.WINDOWS:
    self.topLevel.wm_iconbitmap(self.appJarIcon)
More on the specifics here: https://github.com/jarvisteach/appJar/issues/84
I can probably fix this properly by using the --paths argument with PyInstaller, but for the moment the issue is fully resolved.
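For completeness, here is a rough sketch of what that alternative might look like (the paths are hypothetical and should be adjusted to your install; --add-data may not be available on PyInstaller 3.2, in which case the equivalent is the datas list in the .spec file):
REM hypothetical invocation; adjust the site-packages path to your own environment
pyinstaller ^
    --paths "C:\Users\_User_\AppData\Local\Programs\Python\Python35\lib\site-packages" ^
    --add-data "C:\Users\_User_\AppData\Local\Programs\Python\Python35\lib\site-packages\appJar\resources;appJar\resources" ^
    fileCreatorGUI.py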
When I run the fully_connected_feed.py code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/fully_connected_feed.py
I get an error:
Traceback (most recent call last):
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 277, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 222, in main
run_training()
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 120, in run_training
data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py", line 211, in read_data_sets
SOURCE_URL + TRAIN_IMAGES)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py", line 142, in maybe_download
gfile.Copy(temp_file_name, filepath)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 316, in copy
compat.as_bytes(oldpath), compat.as_bytes(newpath), overwrite, status)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.OutOfRangeError: Read fewer bytes than requested
How do I resolve this issue?
After doing the following, I was able to run the script without errors. The key to getting it to work for me was that the installed TensorFlow version has to match the tutorial code; otherwise there were exceptions (although at first I got a different exception than you).
After installing TensorFlow, check the version. The details of this step may differ if you installed it with pip or some other method:
$ conda list tensorflow
# packages in environment at /Users/agr/miniconda3/envs/tensorflow:
#
tensorflow 0.11.0 py35_0 conda-forge
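If you installed TensorFlow with pip or some other method instead, a generic way to find the version to match (just a sanity check, not specific to conda) is:
$ python -c "import tensorflow as tf; print(tf.__version__)"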
Clone the git repo
$ git clone https://github.com/tensorflow/tensorflow.git
Inspect the available tags and check out the release matching your install:
$ cd tensorflow
$ git tag -l -n1
...
$ git checkout v0.11.0
Run the script!
$ cd examples/tutorials/mnist/
$ python fully_connected_feed.py
The key point is to run the script from here, not from the version linked in your original question.
TL;DR
Something else is altering your files as you create them. Find the process and stop it.
Research
I've just run the demo with Windows 10, Python 3.5, and TensorFlow 0.12.0 with no errors. It is therefore something about your environment.
Looking at the actual line of the error, the code is failing to read the required number of bytes from the open file. Going further up the stack, you can see that CopyFile is actually trying to read all the bytes of the file into a string in this function. It starts by finding the current file size and then tries to read that many bytes.
The problem is that the file size at the start of this process doesn't match the size by the end of the copy. In other words, something else has altered your file.
What next?
Your best bet is to try to find out what else is accessing your file. I suggest you use the techniques explained here to see what else has the file open as you are running the copy.
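As one concrete option (my suggestion, not from the linked page), the Sysinternals handle utility can list which processes have a matching file open while the copy is running:
REM requires the Sysinternals Handle tool; lists open handles whose path contains "mnist"
handle.exe mnist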
I encountered the same problem on Windows 2012 Server.
As suggested in the previous post, I downloaded and launched Process Monitor, then set the filter "Path contains mnist". The datasets were downloaded and unpacked correctly while running the code from both Spyder and Jupyter.
I suspect that there is a race condition in the library code, i.e. missing synchronization between the downloading and unpacking operations. Because Process Monitor introduced additional delays, the datasets were successfully downloaded before the next operation started, so the racy behavior was not observed.