im2txt & TensorFlow 1.4.1 - python

Has anyone here succeeded in running im2txt with TensorFlow 1.4.1?
I'm using this model (https://drive.google.com/file/d/0B_qCJ40uBfjEWVItOTdyNUFOMzg/view), and loading the checkpoint fails with:
2018-01-04 00:46:59.268582: W tensorflow/core/framework/op_kernel.cc:1192] Not found: Key lstm/basic_lstm_cell/kernel not found in checkpoint
I then tried the following script to convert the model. The script generated checkpoint, .meta, .data, and .index files.
OLD_CHECKPOINT_FILE = "/tmp/my_checkpoint/model.ckpt-3000000"
NEW_CHECKPOINT_FILE = "/tmp/my_converted_checkpoint/model.ckpt-3000000"
import tensorflow as tf
vars_to_rename = {
"lstm/BasicLSTMCell/Linear/Matrix": "lstm/basic_lstm_cell/weights",
"lstm/BasicLSTMCell/Linear/Bias": "lstm/basic_lstm_cell/biases",
}
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
if old_name in vars_to_rename:
new_name = vars_to_rename[old_name]
else:
new_name = old_name
new_checkpoint_vars[new_name] = tf.Variable(reader.get_tensor(old_name))
init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)
with tf.Session() as sess:
sess.run(init)
print("save checkpoint")
saver.save(sess, NEW_CHECKPOINT_FILE)
Could anyone tell me how I can use those files to run im2txt with TensorFlow 1.4.1? (I could run im2txt successfully with TensorFlow 0.12.1.)
Env
python 3.5.2
Mac OS X version 10.12.6
TensorFlow 1.4.1
Thanks for your help.

I got the same error with that checkpoint file using TF 1.4.1 and Python 3.5 on macOS 10.13.
Reason: the downloaded checkpoint file was generated with an old version of TensorFlow (Python 2), and the word_counts.txt file format is also a problem.
The fixes below are adapted from answers at https://github.com/KranthiGV/Pretrained-Show-and-Tell-model.
Changes:
1. Generate a checkpoint file that can be loaded by TF 1.4.1:
OLD_CHECKPOINT_FILE = "model.ckpt-1000000"
NEW_CHECKPOINT_FILE = "model2.ckpt-1000000"
import tensorflow as tf
vars_to_rename = {
"lstm/basic_lstm_cell/weights": "lstm/basic_lstm_cell/kernel",
"lstm/basic_lstm_cell/biases": "lstm/basic_lstm_cell/bias",
}
new_checkpoint_vars = {}
reader = tf.train.NewCheckpointReader(OLD_CHECKPOINT_FILE)
for old_name in reader.get_variable_to_shape_map():
if old_name in vars_to_rename:
new_name = vars_to_rename[old_name]
else:
new_name = old_name
new_checkpoint_vars[new_name] =
tf.Variable(reader.get_tensor(old_name))`
init = tf.global_variables_initializer()
saver = tf.train.Saver(new_checkpoint_vars)
with tf.Session() as sess:
sess.run(init)
saver.save(sess, NEW_CHECKPOINT_FILE)
2. Fix the Python 3 file-reading problem in im2txt/run_inference.py by opening the file in binary mode:
with tf.gfile.GFile(filename, "rb") as f:
3. The word_counts.txt downloaded from that link needs to be replaced with the one from
https://github.com/siavash9000/im2txt_demo/tree/master/im2txt_pretrained
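For what it's worth, here is a minimal sketch of how the binary-mode read can be combined with explicit decoding when loading the vocabulary file (the helper name and the UTF-8 assumption are mine, not im2txt's actual code):

import tensorflow as tf

def load_word_counts(filename):
    # Open in binary mode so the read works under Python 3,
    # then decode each line explicitly.
    with tf.gfile.GFile(filename, "rb") as f:
        lines = [line.decode("utf-8").strip() for line in f]
    # Each non-empty line is expected to look like: "<word> <count>"
    return [line.split() for line in lines if line]

# Hypothetical usage:
# vocab = load_word_counts("word_counts.txt")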

Chunfang's solution works for me, but I wanted to share another approach.
In recent versions of TensorFlow, Google provides an "official" checkpoint_convert.py utility to convert old RNN checkpoints:
python checkpoint_convert.py [--write_v1_checkpoint] \
'/path/to/old_checkpoint' '/path/to/new_checkpoint'
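Either way, you can sanity-check the converted checkpoint before pointing im2txt at it by listing its variable names with the same NewCheckpointReader API used in the conversion scripts above; a quick sketch (the path is a placeholder):

import tensorflow as tf

NEW_CHECKPOINT_FILE = "/path/to/new_checkpoint"  # placeholder path

reader = tf.train.NewCheckpointReader(NEW_CHECKPOINT_FILE)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)
# After conversion you should see lstm/basic_lstm_cell/kernel and
# lstm/basic_lstm_cell/bias instead of the old variable names.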

Related

How to upgrade a TF1 GAN notebook on Colab to TF2? It doesn't work because Colab doesn't support TF1 anymore

I was trying to run this notebook on Colab,
https://colab.research.google.com/github/https-deeplearning-ai/GANs-Public/blob/master/C1W1_(Colab)_Inputs_to_a_pre_trained_GAN.ipynb,
but first I got this:
ValueError: Tensorflow 1 is unsupported in Colab.
Then I upgraded it using this script:
import tensorflow as tf
!tf_upgrade_v2 \
--intree stylegan/ \
--inplace
and I commented out these lines:
%tensorflow_version 1.x
tflib.init_tf()
but then I got this error, which I couldn't solve:
AttributeError: Can't get attribute 'Network' on <module 'dnnlib.tflib.network' from '/content/stylegan/dnnlib/tflib/network.py'>
Can somebody help?
# Clone the official StyleGAN repository from GitHub
!git clone https://github.com/NVlabs/stylegan.git

%tensorflow_version 1.x

import os
import pickle
import numpy as np
import PIL.Image
import stylegan
from stylegan import config
from stylegan.dnnlib import tflib
from tensorflow.python.util import module_wrapper
module_wrapper._PER_MODULE_WARNING_LIMIT = 0

# Initialize TensorFlow
tflib.init_tf()

# Go into that cloned directory
path = 'stylegan/'
if "stylegan" not in os.getcwd():
    os.chdir(path)

# Load pre-trained network
# url = 'https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ'  # Downloads the pickled model file: karras2019stylegan-ffhq-1024x1024.pkl
url = 'https://bitbucket.org/ezelikman/gans/downloads/karras2019stylegan-ffhq-1024x1024.pkl'
with stylegan.dnnlib.util.open_url(url, cache_dir=config.cache_dir) as f:
    print(f)
    _G, _D, Gs = pickle.load(f)

# Gs.print_layers()  # Print network details
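If the pickle loads, you can confirm the generator actually works by sampling an image, along the lines of pretrained_example.py in the StyleGAN repo; this is a sketch based on that script (the truncation and noise settings are its defaults, not anything required here):

# Sample one face from the loaded generator.
rnd = np.random.RandomState(5)
latents = rnd.randn(1, Gs.input_shape[1])  # latent vector, size 512 for FFHQ
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
images = Gs.run(latents, None, truncation_psi=0.7, randomize_noise=True, output_transform=fmt)
PIL.Image.fromarray(images[0], 'RGB').save('example.png')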

Error converting tensorflow .pb model to .mlmodel

I am trying to convert a TensorFlow graph (.pb file) into a .mlmodel:
import tfcoreml
coreml_model = tfcoreml.convert(
    tf_model_path='optimized_model.pb',
    mlmodel_path='FaceImages.mlmodel',
    output_feature_names=['final_result'],
    input_name_shape_dict={'ResizeBilinear': {'images': None, 'size': {None, None}}},
    minimum_ios_deployment_target='13')
but I am getting following error:
/usr/local/lib/python3.6/dist-packages/coremltools/converters/nnssa/frontend/tensorflow/graphdef_to_ssa.py in load_tf_graph(graph_file)
     21     with tf.io.gfile.GFile(graph_file, "rb") as f:
     22         graph_def = tf.compat.v1.GraphDef()
---> 23         graph_def.ParseFromString(f.read())
     24
     25     # Then, we import the graph_def into a new Graph and returns it
DecodeError: Error parsing message
Could anybody help with this pls?
Here is the colab project where I have attached the tensorflow model and the associated code for conversion
https://colab.research.google.com/drive/1S7nf7pnX15UuswFZaTih5pHhfDFwG5Xa
Have you checked that the version of TensorFlow you are using is compatible with those libraries? This is just a guess, but you could try running
!pip install tensorflow --upgrade
at the top of the notebook to see if it resolves the issue.
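Since DecodeError comes from protobuf failing to parse the file, it is also worth checking that optimized_model.pb is a valid serialized GraphDef before involving tfcoreml at all; a minimal sketch using the same calls as the traceback:

import tensorflow as tf

# If this raises the same DecodeError, the .pb file itself is corrupt
# (e.g. an incomplete download or not a frozen GraphDef).
with tf.io.gfile.GFile('optimized_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
print('Parsed %d nodes' % len(graph_def.node))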

TensorBoard error - [WinError 2] The system cannot find the file specified

I am trying to run TensorBoard with the following code:
%load_ext tensorboard.notebook

import tensorflow as tf

x = tf.constant([100, 200, 300], name='x')
y = tf.constant([1, 2, 3], name='y')

sum_x = tf.reduce_sum(x, name="sum_x")
prod_y = tf.reduce_prod(y, name="prod_y")
final_div = tf.div(sum_x, prod_y, name='final_div')
final_mean = tf.reduce_mean([sum_x, prod_y], name='final_mean')

sess = tf.Session()

print("x: ", sess.run(x))
print("y: ", sess.run(y))
print("sum(x): ", sess.run(sum_x))
print("prod(y): ", sess.run(prod_y))
print("sum(x)/prod(y): ", sess.run(final_div))
print("mean(sum(x), prod(y)): ", sess.run(final_mean))

writer = tf.summary.FileWriter('janani_ex_2', sess.graph)
writer.close()
sess.close()

%tensorboard --logdir = 'janani_ex_2'
This displays "Launching TensorBoard" and then runs into a [WinError 2] The system cannot find the file specified error.
What am I doing wrong?
On Windows, you need to change this line
%tensorboard --logdir = 'janani_ex_2'
to
%tensorboard --logdir '.\janani_ex_2'
I'm assuming that the folder janani_ex_2 contains your logs and is located inside the folder where your Jupyter notebook is.
If you are not using a Jupyter notebook, run this from the folder that contains the TensorBoard logs:
tensorboard --logdir ./
It will look in the current folder for the event files. To get to the folder, type cd and then paste the path to the folder.

Issue with converting tensorflow model to Intel Movidius graph

Hello, I ran into a problem when trying to use the Intel Movidius Neural Compute Stick with TensorFlow. I have a Keras model, which I convert to a TensorFlow model. When I convert that to a Movidius graph, I get this error:
Traceback (most recent call last):
  File "/usr/local/bin/mvNCCompile", line 118, in <module>
    create_graph(args.network, args.inputnode, args.outputnode, args.outfile, args.nshaves, args.inputsize, args.weights)
  File "/usr/local/bin/mvNCCompile", line 104, in create_graph
    net = parse_tensor(args, myriad_config)
  File "/usr/local/bin/ncsdk/Controllers/TensorFlowParser.py", line 290, in parse_tensor
    if have_first_input(strip_tensor_id(node.outputs[0].name)):
IndexError: list index out of range
Here is my code:
from keras.models import model_from_json
from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import nn
import os
weights_file = "weights.h5"
sess = K.get_session()
K.set_learning_phase(0)
model = nn.alexnet_model() # get keras model
model.load_weights(weights_file)
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model") # convert keras to tensorflow model
tf_model_path = "./TF_Model/tf_model"
fw = tf.summary.FileWriter('logs', sess.graph)
fw.close()
os.system('mvNCCompile TF_Model/tf_model.meta -in=conv2d_1_input -on=activation_7/Softmax') # get Movidius graph
Python version: 2.7
OS: Ubuntu 16.04
Tensorflow version: 1.12
As far as I know, the NCSDK compiler does not resolve every part of a normal TensorFlow network, so you have to modify the network and re-save it in an NCS-friendly way in order to successfully make a Movidius graph.
For more information about how to modify a TensorFlow network, have a look at the official guidance.
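One commonly suggested change for Keras models, sketched below, is to fix the learning phase before the model is built, so the saved graph contains only inference ops (no dropout or batch-norm training paths). The module and file names are taken from the question; the reordering is the key change, and this is a sketch, not a verified fix:

from keras import backend as K
import tensorflow as tf
import nn  # the question's model-definition module

# Set the learning phase BEFORE constructing the model, so that
# only inference ops end up in the graph that gets saved.
K.set_learning_phase(0)

model = nn.alexnet_model()
model.load_weights("weights.h5")

sess = K.get_session()
saver = tf.train.Saver()
saver.save(sess, "./TF_Model/tf_model")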
Hope it helps.

Tensorflow Read from HDFS mac : java.lang.NoSuchFieldError: LOG

I am trying to read from an external Hadoop cluster from TensorFlow on my Mac. I have built TF with Hadoop support from source, and also built Hadoop with native library support on my Mac. I am getting the following error:
hdfsBuilderConnect(forceNewInstance=0, nn=192.168.60.53:9000, port=0, kerbTicketCachePath=(NULL), userName=(NULL)) error:
java.lang.NoSuchFieldError: LOG
at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:62)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.<init>(ProtobufRpcEngine.java:145)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.<init>(ProtobufRpcEngine.java:133)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.<init>(ProtobufRpcEngine.java:119)
at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:102)
at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:579)
at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:418)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:314)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:162)
at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:159)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:159)
2018-10-05 16:01:21.867554: W tensorflow/core/kernels/queue_base.cc:277] _0_input_producer: Skipping cancelled enqueue attempt with queue not closed
Traceback (most recent call last):
This is my code:
import tensorflow as tf

def create_file_reader_ops(filename_queue):
    reader = tf.TextLineReader(skip_header_lines=1)
    _, csv_row = reader.read(filename_queue)
    record_defaults = [[""], [""], [0], [0]]
    country, code, gold, silver = tf.decode_csv(csv_row, record_defaults=record_defaults)
    features = tf.stack([gold, silver])
    return features, country

filename_queue = tf.train.string_input_producer([
    "hdfs://192.168.60.53:9000/iris_data_multiclass.csv",
])
example, country = create_file_reader_ops(filename_queue)

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    while True:
        try:
            example_data, country_name = sess.run([example, country])
            print(example_data, country_name)
        except tf.errors.OutOfRangeError:
            break
I have built Hadoop from source on my Mac.
$ hadoop version
Hadoop 2.7.3
Subversion https://github.com/apache/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by himaprasoon on 2018-10-04T11:09Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /Users/himaprasoon/git/hadoop/hadoop-dist/target/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
hadoop checknative output
$ hadoop checknative
18/10/05 16:15:05 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library libbz2.dylib
18/10/05 16:15:05 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /Users/himaprasoon/git/hadoop/hadoop-dist/target/hadoop-2.7.3/lib/native/libhadoop.dylib
zlib: true /usr/lib/libz.1.dylib
snappy: true /usr/local/lib/libsnappy.1.dylib
lz4: true revision:99
bzip2: true /usr/lib/libbz2.1.0.dylib
openssl: true /usr/local/lib/libcrypto.dylib
tf version : 1.10.1
Any ideas what I might be doing wrong?
Here are my environment variables:
HADOOP_HOME=/Users/himaprasoon/git/hadoop/hadoop-dist/target/hadoop-2.7.3/
HADOOP_MAPRED_HOME=$HADOOP_HOME
HADOOP_COMMON_HOME=$HADOOP_HOME
HADOOP_HDFS_HOME=$HADOOP_HOME
YARN_HOME=$HADOOP_HOME
HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
HADOOP_INSTALL=$HADOOP_HOME
OPENSSL_ROOT_DIR="/usr/local/opt/openssl"
LDFLAGS="-L${OPENSSL_ROOT_DIR}/lib"
CPPFLAGS="-I${OPENSSL_ROOT_DIR}/include"
PKG_CONFIG_PATH="${OPENSSL_ROOT_DIR}/lib/pkgconfig"
OPENSSL_INCLUDE_DIR="${OPENSSL_ROOT_DIR}/include"
PATH="/usr/local/opt/protobuf#2.5/bin:$PATH
HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HADOOP_HOME}/lib/native
JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:${HADOOP_HOME}/lib/native
This is how I am running my program:
CLASSPATH=$($HADOOP_HDFS_HOME/bin/hdfs classpath --glob) python3.6 myfile.py
References used to build TF and Hadoop:
Hadoop native libraries not found on OS/X
https://medium.com/@s.matthew.english/build-hadoop-from-source-on-macos-a3fb2b958b6c
Can Tensorflow read from HDFS on Mac?
https://gist.github.com/zedar/f631ace0759c1d512573
Have you read this post?
Tensorflow Enqueue operation was cancelled
It seems there is a workaround for the same error message there:
The problem happens at the very last stage, when Python tries to kill threads.
To do this properly you should create a train.Coordinator and pass it to your
queue_runner (no need to pass sess, as the default session will be used):
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    # do your things
    coord.request_stop()
    coord.join(threads)
The last two lines should be added after your while loop to make sure all threads are properly shut down.
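Applied to the code in the question, the session block would look roughly like this (same graph as before, with the coordinator shut down after the loop):

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    while True:
        try:
            example_data, country_name = sess.run([example, country])
            print(example_data, country_name)
        except tf.errors.OutOfRangeError:
            break
    # Stop the queue runners cleanly so Python does not error out
    # while killing threads at interpreter exit.
    coord.request_stop()
    coord.join(threads)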
