Sorry if this happens to be trivial, as I am new to this stuff. I set up Theano to use my GPU for computations on Ubuntu Trusty Tahr. I have an AMD Radeon HD 7670M GPU. When I try to run the test script to check that Theano works with the GPU, I get the following error:
Mapped name None to device opencl0:0: Turks
Traceback (most recent call last):
File "test.py", line 11, in <module>
f = function([], T.exp(x))
File "/home/sachu/git/Theano/theano/compile/function.py", line 322, in function
output_keys=output_keys)
File "/home/sachu/git/Theano/theano/compile/pfunc.py", line 480, in pfunc
output_keys=output_keys)
File "/home/sachu/git/Theano/theano/compile/function_module.py", line 1784, in orig_function
defaults)
File "/home/sachu/git/Theano/theano/compile/function_module.py", line 1648, in create
input_storage=input_storage_lists, storage_map=storage_map)
File "/home/sachu/git/Theano/theano/gof/link.py", line 699, in make_thunk
storage_map=storage_map)[:3]
File "/home/sachu/git/Theano/theano/gof/vm.py", line 1042, in make_all
no_recycling))
File "/home/sachu/git/Theano/theano/gof/op.py", line 975, in make_thunk
no_recycling)
File "/home/sachu/git/Theano/theano/gof/op.py", line 875, in make_c_thunk
output_storage=node_output_storage)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1189, in make_thunk
keep_lock=keep_lock)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1130, in __compile__
keep_lock=keep_lock)
File "/home/sachu/git/Theano/theano/gof/cc.py", line 1602, in cthunk_factory
*(in_storage + out_storage + orphd))
RuntimeError: ('The following error happened while compiling the node', GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float64, (False,))>), '\n', 'Could not initialize elemwise support')
The script I ran was the one available on the website: http://deeplearning.net/software/theano/tutorial/using_gpu.html
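For reference, the test script from that page looks roughly like this (reconstructed from the tutorial, so minor details may differ; line 11 is the f = function(...) call from the traceback):
from theano import function, config, shared
import theano.tensor as T
import numpy
import time

vlen = 10 * 30 * 768  # 10 x number of cores x number of threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))  # line 11, where the compilation fails
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))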
Is something wrong with the config? I believe all dependencies are set up properly, but I could have made a mistake; then again, I would probably see something other than a RuntimeError. I searched a lot on GitHub for info related to this, but found nothing. Same result after searching Stack Overflow, hence I am posting here. Any help is appreciated.
Thanks
Additional info: Python 3.4, Theano bleeding-edge version. libgpuarray, clBLAS and OpenBLAS are all built from the git master branch. 64-bit architecture.
Theano support for OpenCL is just not ready yet, and getting it working does not seem to be a priority for the development team (see this issue). So either you will need some patience, or an NVIDIA GPU on which you can run CUDA.
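In the meantime, a minimal check of which backend Theano actually picked (theano.config.device is a standard config attribute) lets you confirm the OpenCL mapping and fall back to the CPU by setting device=cpu in .theanorc or THEANO_FLAGS:
import theano
# Prints 'opencl0:0' for your current setup; with device=cpu Theano
# runs everything on the CPU, which at least unblocks the tutorial.
print(theano.config.device)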
I recently updated my pipelines on GCP Dataflow from version 2.27 to version 2.34.
Pipelines using the WriteToDatastore connector failed with the following error:
Error message from worker: Traceback (most recent call last):
File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 571, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1369, in apache_beam.runners.common._OutputProcessor.process_outputs
File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 83, in process max_ops_budget = self._calc_max_ops_budget(self._first_instant, instant)
File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 74, in _calc_max_ops_budget max_ops_budget = int(self._BASE_BUDGET / self._num_workers * (1.5**growth)) OverflowError: (34, 'Numerical result out of range')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/batchworker.py", line 651, in do_work work_executor.execute()
File "/usr/local/lib/python3.7/site-packages/dataflow_worker/executor.py", line 213, in execute op.start()
File "dataflow_worker/shuffle_operations.py", line 63, in "dataflow_worker/shuffle_operations.py", line 261, in
...contd error message "apache_beam/runners/worker/operations.py", line 714, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam/runners/common.py", line 1235, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 1316, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam/runners/common.py", line 1233, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam/runners/common.py", line 571, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam/runners/common.py", line 1369, in apache_beam.runners.common._OutputProcessor.process_outputs
File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 83, in process max_ops_budget = self._calc_max_ops_budget(self._first_instant, instant)
File "/usr/local/lib/python3.7/site-packages/apache_beam/io/gcp/datastore/v1new/rampup_throttling_fn.py", line 74, in _calc_max_ops_budget max_ops_budget = int(self._BASE_BUDGET / self._num_workers * (1.5**growth))
RuntimeError: OverflowError: (34, 'Numerical result out of range') [while running 'Write to Data-store/Enforce throttling during ramp-up']
The jobs worked fine until now.
I checked the apache-beam Python SDK updates: release 2.32 added ramp-up throttling to the DatastoreIO connector ([BEAM-12272] Python - Backport FirestoreIO connector's ramp-up to DatastoreIO connector - ASF JIRA).
This introduced two new parameters for the connector,
throttle_rampup and hint_num_workers, as described in the apache_beam.io.gcp.datastore.v1new.datastoreio module documentation (Apache Beam documentation).
I have not made any changes to the parameter values.
I need help understanding what the parameters mean, particularly hint_num_workers, and why the job is failing with the default values.
However, when I set throttle_rampup=False, the job runs fine.
If I want to follow best practices and use throttle_rampup=True, how do I make the job run successfully?
Thanks in advance.
This is a known issue in rampup_throttling_fn.py, caused by the data type of the max_ops_budget variable, which overflows. You can see the issue report on the Beam GitHub. A fix was already merged to master, so updating to a newer version should solve the issue (or downgrading to a version where the ramp-up feature is not present).
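For what it's worth, the overflow is easy to reproduce in isolation. In the budget formula, growth increases with elapsed ramp-up time, and once 1.5**growth exceeds the double range Python raises exactly this error (the numbers below are made up for illustration, not Beam's actual constants):
# Illustrative only: a float power that exceeds the double range does
# not return inf; it raises OverflowError (errno 34, ERANGE).
growth = 2000
max_ops_budget = int(500 / 25 * (1.5 ** growth))
# OverflowError: (34, 'Numerical result out of range')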
About the meaning of the parameters, there isn't much beyond the documentation's description:
throttle_rampup – Whether to enforce a gradual ramp-up.
That is, you have a number of workers that may increase with the load, and this increase can be gradual or abrupt.
hint_num_workers – A hint for the expected number of workers, used to estimate appropriate limits during ramp-up throttling.
That is, given the expected final number of workers, the function can calculate how many workers may be created in each time slot so that the ramp-up is gradual rather than abrupt.
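As a sketch of where the two parameters plug in (argument names per the datastoreio docs; the project ID, the input, and the hint value are placeholders):
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1new.datastoreio import WriteToDatastore

with beam.Pipeline() as p:
    entities = p | beam.Create([...])  # your v1new Entity objects here
    entities | 'Write to Data-store' >> WriteToDatastore(
        project='my-gcp-project',  # placeholder
        throttle_rampup=True,      # keep the recommended gradual ramp-up
        hint_num_workers=50)       # expected final worker count (placeholder)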
I have an FMU which was created in GT-Suite, and I am trying to work with it in Python.
I have followed the JModelica tutorials:
from pyfmi import load_fmu
model = load_fmu('myFMU.fmu')
res = model.simulate(final_time=10)
My FMU gets loaded, but when I try to run the model.simulate step, it throws an error:
Traceback (most recent call last):
File "<ipython-input-3-4812da4bb52b>", line 1, in <module>
res = model.simulate(final_time=10)
File "src\pyfmi\fmi.pyx", line 6981, in pyfmi.fmi.FMUModelCS2.simulate
File "src\pyfmi\fmi.pyx", line 304, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "src\pyfmi\fmi.pyx", line 298, in pyfmi.fmi.ModelBase._exec_simulate_algorithm
File "C:\Users\chinn\Anaconda3\envs\test_env\lib\site-packages\pyfmi\fmi_algorithm_drivers.py", line 761, in __init__
self.model.setup_experiment(start_time=start_time, stop_time_defined=self.options["stop_time_defined"], stop_time=final_time)
File "src\pyfmi\fmi.pyx", line 4292, in pyfmi.fmi.FMUModelBase2.setup_experiment
FMUException: Failed to setup the experiment.
I have tried running it in multiple environments on my PC but get the same error. I googled a lot but couldn't find anything. Can someone help me resolve this issue?
The FMU was probably not exported with the correct license setting.
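One way to confirm that (a sketch; log_level and print_log are standard PyFMI, but what ends up in the log depends on the GT-Suite runtime) is to reload the FMU with maximum FMI logging and inspect the messages, since license failures are usually spelled out there:
from pyfmi import load_fmu

# Reload with the most verbose FMI log level, then dump the log after
# the failure; the exporting tool's own error (e.g. a license check)
# should appear among the messages.
model = load_fmu('myFMU.fmu', log_level=7)
try:
    model.simulate(final_time=10)
except Exception:
    model.print_log()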
I am trying to export trained TensorFlow models to C++ using freeze_graph.py. I am exporting the ssd_mobilenet_v1_coco_2017_11_17 model using the following command:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=frozen_inference_graph.pb \
--input_checkpoint=model.ckpt \
--output_graph=/tmp/frozen_graph.pb --output_node_names=softmax
The terminal said that the build was successful but showed the following error:
Traceback (most recent call last):
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 350, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 249, in main
FLAGS.saved_model_tags)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 227, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 171, in _parse_input_graph_proto
text_format.Merge(f.read(), input_graph_def)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 525, in Merge
descriptor_pool=descriptor_pool)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 579, in MergeLines
return parser.MergeLines(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 612, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 627, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 671, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 1144, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : Expected identifier or number, got `.
On running that command again, I got this message:
WARNING: /home/my_username/tensorflow/tensorflow/core/BUILD:1814:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /home/my_username/tensorflow/tensorflow/tensorflow.bzl:1138:30
WARNING: /home/my_username/tensorflow/tensorflow/core/BUILD:1814:1: in includes attribute of cc_library rule //tensorflow/core:framework_headers_lib: '../../external/nsync/public' resolves to 'external/nsync/public' not below the relative path of its package 'tensorflow/core'. This will be an error in the future. Since this rule was created by the macro 'cc_header_only_library', the error might have been caused by the macro implementation in /home/my_username/tensorflow/tensorflow/tensorflow.bzl:1138:30
WARNING: /home/my_username/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:exporter': No longer supported. Switch to SavedModel immediately.
WARNING: /home/my_username/tensorflow/tensorflow/contrib/learn/BUILD:15:1: in py_library rule //tensorflow/contrib/learn:learn: target '//tensorflow/contrib/learn:learn' depends on deprecated target '//tensorflow/contrib/session_bundle:gc': No longer supported. Switch to SavedModel immediately.
INFO: Analysed target //tensorflow/python/tools:freeze_graph (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/python/tools:freeze_graph up-to-date:
bazel-bin/tensorflow/python/tools/freeze_graph
INFO: Elapsed time: 0.419s, Critical Path: 0.00s
INFO: Build completed successfully, 1 total action
Traceback (most recent call last):
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 350, in <module>
app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 124, in run
_sys.exit(main(argv))
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 249, in main
FLAGS.saved_model_tags)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 227, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "/home/my_username/tensorflow/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 171, in _parse_input_graph_proto
text_format.Merge(f.read(), input_graph_def)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 525, in Merge
descriptor_pool=descriptor_pool)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 579, in MergeLines
return parser.MergeLines(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 612, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 627, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 671, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "/home/my_username/.cache/bazel/_bazel_my_username/3572bc2aff1de1dd37356cf341944e54/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 1144, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : Expected identifier or number, got `.
I am exporting ssd_mobilenet_v1_coco_2017_11_17 just for practice; I intend to export my own trained models and test the output with this program.
I have built TensorFlow 1.5 using Bazel v0.11.1. I validated the installation using the following code snippet provided on the TensorFlow website:
# Python
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
I also ran the object detection IPython example notebook and it worked.
I am using Ubuntu 17.10.1 on a laptop with an Intel Core i5-8250U CPU, 8 GB RAM, a 1 TB HDD and an NVIDIA MX150 (2 GB) GPU. Please help. How do I export a trained model to C++?
In order to export object detection models, I use the export_inference_graph.py script in research/object_detection. Here's an example of running it:
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/model.config \
--trained_checkpoint_prefix path/to/model.ckpt-CHECKPOINTNUMBER \
--output_directory path/to/frozen_inference_graph
Then I use the created frozen_inference_graph.pb with C++ code that is essentially the same as label_image, with small modifications to run a detection model rather than a classification one.
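Incidentally, the ParseError in your output is consistent with freeze_graph reading the binary .pb as text (its input_binary flag defaults to false). Before moving to C++, you can sanity-check the exported graph from Python with the TF 1.x API (the path is a placeholder):
import tensorflow as tf

# Parse the frozen graph as *binary* protobuf and list a few op names
# to confirm the export worked.
graph_def = tf.GraphDef()
with tf.gfile.GFile('path/to/frozen_inference_graph/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
print([op.name for op in graph.get_operations()][:10])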
I have installed the Theano framework and enabled CUDA on my machine. However, when I run "import theano" in my Python console, I get the following message:
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not available)
Now that "CuDNN not available", I downloaded cuDnn from Nvidia website. I also updated 'path' in environment, and added 'optimizer_including=cudnn' in '.theanorc.txt' config file.
Then, I tried again, but failed, with:
>>> import theano
Using gpu device 0: GeForce GTX 950 (CNMeM is disabled, CuDNN not available)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Anaconda2\lib\site-packages\theano\__init__.py", line 111, in <module>
theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1()
File "C:\Anaconda2\lib\site-packages\theano\sandbox\cuda\tests\test_driver.py", line 31, in test_nvidia_driver1
profile=False)
File "C:\Anaconda2\lib\site-packages\theano\compile\function.py", line 320, in function
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\pfunc.py", line 479, in pfunc
output_keys=output_keys)
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1776, in orig_function
output_keys=output_keys).create(
File "C:\Anaconda2\lib\site-packages\theano\compile\function_module.py", line 1456, in __init__
optimizer_profile = optimizer(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 101, in __call__
return self.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 230, in apply
sub_prof = optimizer.optimize(fgraph)
File "C:\Anaconda2\lib\site-packages\theano\gof\opt.py", line 89, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "C:\Anaconda2\lib\site-packages\theano\sandbox\cuda\dnn.py", line 2508, in apply
dnn_available.msg)
AssertionError: cuDNN optimization was enabled, but Theano was not able to use it. We got this error:
Theano can not compile with cuDNN. We got this error:
>>>
Can anyone help me? Thanks.
There should be a way to do it by setting only the Path environment variable, but I could never get that to work. The only thing that worked for me was to manually copy the cuDNN files into the appropriate folders of your CUDA installation.
For example, if your CUDA installation is in C:\CUDA\v7.0 and you extracted cuDNN to C:\CuDNN, you would copy as follows:
The contents of C:\CuDNN\lib\x64\ would be copied to C:\CUDA\v7.0\lib\x64\
The contents of C:\CuDNN\include\ would be copied to C:\CUDA\v7.0\include\
The contents of C:\CuDNN\bin\ would be copied to C:\CUDA\v7.0\bin\
After that it should work.
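After copying, you can check from Python that Theano now finds cuDNN; dnn_available and its msg attribute are the same hooks the traceback above references:
from theano.sandbox.cuda import dnn

print(dnn.dnn_available())    # should be True once the files are in place
print(dnn.dnn_available.msg)  # explains the failure if it is still False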
In addition to all the stuff you did, I updated the following content of .theanorc.txt in my home folder, and it worked after that:
[lib]
#cnmem=1.0
cudnn=1.0
I just pip installed Theano and tried to run theano.test(). It produced a very long log of errors, and I copied the first part below. I also tried a couple of other examples. I have seen
"local_dot_to_dot22"
and
"ValueError: invalid token "Files\Enthought\Canopy\App\appdata\canopy1.5.2.2785.win-x86_64\Scripts" in ldflags_str: "-LC:\Program Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt"
several times.
I'm using Python 2.7 (Canopy), SciPy 0.15.1-2 and NumPy 1.9.2-1. I am very new to Theano. I'd appreciate it if you could point me in the right direction. Thanks!
EEEEE
ERROR (theano.gof.opt): Optimization failure due to: local_dot_to_dot22
ERROR:theano.gof.opt:Optimization failure due to: local_dot_to_dot22
ERROR (theano.gof.opt): TRACEBACK:
ERROR:theano.gof.opt:TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "c:\theano\theano\gof\opt.py", line 1737, in process_node
replacements = lopt.transform(node)
File "c:\theano\theano\tensor\blas.py", line 1776, in local_dot_to_dot22
return [_dot22(x.dimshuffle('x', 0), y).dimshuffle(1)]
File "c:\theano\theano\gof\op.py", line 647, in __call__
no_recycling=[])
File "c:\theano\theano\gof\op.py", line 918, in make_thunk
no_recycling)
File "c:\theano\theano\gof\op.py", line 836, in make_c_thunk
output_storage=node_output_storage)
File "c:\theano\theano\gof\cc.py", line 1175, in make_thunk
keep_lock=keep_lock)
File "c:\theano\theano\gof\cc.py", line 1113, in __compile__
keep_lock=keep_lock)
File "c:\theano\theano\gof\cc.py", line 1541, in cthunk_factory
key = self.cmodule_key()
File "c:\theano\theano\gof\cc.py", line 1257, in cmodule_key
compile_args=self.compile_args(),
File "c:\theano\theano\gof\cc.py", line 936, in compile_args
ret += x.c_compile_args()
File "c:\theano\theano\tensor\blas.py", line 652, in c_compile_args
return ldflags(libs=False, flags=True)
File "c:\theano\theano\tensor\blas.py", line 537, in ldflags
include_dir=include_dir)
File "c:\theano\theano\gof\utils.py", line 182, in rval
val = f(*args, **kwargs)
File "c:\theano\theano\tensor\blas.py", line 597, in _ldflags
% (t, ldflags_str))
ValueError: invalid token "Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts" in ldflags_str: "-LC:\Program Files\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt"
The problem here is caused by having spaces in your path: Canopy is installed in C:\Program Files\Enthought\Canopy, but the Theano scripts don't handle the space between "Program" and "Files" well. Try uninstalling Canopy and reinstalling it in a directory with no space in the path.
You should also follow the other instructions for installing Theano on Windows. Unfortunately it's not as simple as just pip install theano.
In case you don't want to reinstall things (for instance, heavy programs that affect the Windows registry and so on), you can try symbolic links.
A symbolic link creates something similar to a shortcut to a folder, but it is seen as an actual folder by other applications.
So, you can do something like this:
Run cmd as administrator
Use this command: mklink /D "C:\LinkToProgramFiles" "C:\Program Files"
Then you start using "C:\LinkToProgramFiles" in your ldflags variable.
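For example, your .theanorc would then carry a space-free BLAS path (a sketch based on the path from the error above):
[blas]
ldflags = -LC:\LinkToProgramFiles\Enthought\Canopy\App\appdata\canopy-1.5.2.2785.win-x86_64\Scripts -lmk2_core -lmk2_intel_thread -lmk2_rt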