The version of Python is 3.6.
I tried to execute my code, but there are still some errors, as below:
Traceback (most recent call last):
  File "C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py", line 55, in <module>
    , binary=True)
  File "E:\Program Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py", line 1282, in load_word2vec_format
    raise DeprecationWarning("Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.")
DeprecationWarning: Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.
How do I fix the code? Or is the path to the data wrong?
This is just a warning, not a fatal error. Your code likely still works.
"Deprecation" means a function's use has been marked by the authors as no longer encouraged.
The function typically still works, but may not for much longer – becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message.
Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead.
Did you try using that, instead of whatever line of code (not shown in your question) that you were trying before seeing the warning?
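For example, the call would look like the sketch below. The API name is confirmed by the warning message itself; the file path is just a placeholder, so substitute your own vectors file:

from gensim.models import KeyedVectors

# Load word-vectors stored in the word2vec binary format
word_vectors = KeyedVectors.load_word2vec_format('vectors.bin', binary=True)
print(word_vectors.most_similar('king'))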
This is my first post here. I'm building a Python window application with PyQt5 that implements interactions with the OpenAI completions endpoint. So far, any code that I've written myself has performed fine, and I was reaching the point where I wanted to start implementing long-term memory for conversational interactions. I started by just running my own chain of prompts for categorizing and writing topical subjects and summaries to text files, but I decided it best to explore open-source options to see how the programming community is managing things. This led me to LangChain, which seems to have some popular support behind it and already implements many of the features I intend to build.
However, I have not had even the tiniest bit of success with it yet. Even the simplest examples don't work: regardless of the context I implement them in (within a class, outside a class, in an asynchronous loop, printing to the console, printing to the text browsers within the main window, whatever), I always get the same error message.
The simplest possible example:
import os
from langchain.llms import OpenAI
from local import constants  # For the API key

os.environ["OPENAI_API_KEY"] = constants.OPENAI_API_KEY

davinci = OpenAI(model_name='text-davinci-003', verbose=True, temperature=0.6)

text = "Write me a story about a guy who is frustrated with Python."
print("Prompt: " + text)
print(davinci(text))
It capably instantiates the wrapper and prints the prompt to the console, but the moment any call goes through the wrapper's functions to receive generated text, it hits this AttributeError.
Here is the traceback:
Traceback (most recent call last):
  File "D:\Dropbox\Pycharm Projects\workspace\main.py", line 16, in <module>
    print(davinci(text))
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\base.py", line 255, in __call__
    return self.generate([prompt], stop=stop).generations[0][0].text
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\base.py", line 128, in generate
    raise e
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\base.py", line 125, in generate
    output = self._generate(prompts, stop=stop)
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\openai.py", line 259, in _generate
    response = self.completion_with_retry(prompt=_prompts, **params)
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\openai.py", line 200, in completion_with_retry
    retry_decorator = self._create_retry_decorator()
  File "D:\Dropbox\Pycharm Projects\workspace\venv\lib\site-packages\langchain\llms\openai.py", line 189, in _create_retry_decorator
    retry_if_exception_type(openai.error.Timeout)
AttributeError: module 'openai.error' has no attribute 'Timeout'
I don't expect that there is a fault in the LangChain library, because it seems like nobody else has experienced this problem. I imagine I may have some dependency issue? Or, since I notice that others using the LangChain library are doing so in a notebook development environment, is my lack of familiarity in that regard making me overlook some fundamental expectation of the library's use?
Any advice is welcome! Thanks!
What I tried: I initially just replaced my own function for managing calls to the completion endpoint with one that issued the calls through LangChain's llm wrapper. I expected it to work as easily as my own code had, but I received that error. I then stripped everything apart layer by layer, attempting to instantiate the wrapper at every scope of the program; then I attempted to make the calls in an asynchronous function through a loop that awaited completion, and no matter what, I always get that same error message.
I think it might be a mismatch between your currently installed versions of Python, OpenAI, and/or LangChain. The traceback shows LangChain referring to openai.error.Timeout, an exception class that was only added to the openai package in a later release, so your installed openai is probably older than the one your LangChain version expects. Maybe try upgrading to newer, mutually compatible versions of the openai and langchain packages. I'm new to Python and these things, but hopefully I could help.
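As a quick sanity check, something like the sketch below prints the installed versions and tests whether the exception class LangChain wants is actually present (pkg_resources ships with setuptools; the exact versions you should target depend on your LangChain release):

import openai
import pkg_resources

for pkg in ("openai", "langchain"):
    print(pkg, pkg_resources.get_distribution(pkg).version)

# False here means the installed openai release predates the
# openai.error.Timeout class that this LangChain version references
print(hasattr(getattr(openai, "error", None), "Timeout"))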
I encountered a strange error, shown below, when compiling my Theano function. I am using Theano 0.7. I hope a quick workaround is available. The function dump is here.
<<!! BUG IN FGRAPH.REPLACE OR A LISTENER !!>> <type 'exceptions.AssertionError'> local_shape_to_shape_i
ERROR (theano.gof.opt): Optimization failure due to: local_shape_to_shape_i
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/theano/gof/opt.py", line 1527, in process_node
fgraph.replace_all_validate(repl_pairs, reason=lopt)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/toolbox.py", line 259, in replace_all_validate
fgraph.replace(r, new_r, reason=reason, verbose=False)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 502, in replace
self.change_input(node, i, new_r, reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 442, in change_input
self.__import_r__([new_r], reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 257, in __import_r__
self.__import__(apply_node, reason=reason)
File "/usr/local/lib/python2.7/dist-packages/theano/gof/fg.py", line 365, in __import__
assert node not in self.apply_nodes
AssertionError
This error message appears when a bug in a Theano optimization causes an invalid graph modification.
If you ever see "Optimization failure due to: <something>", try the following:
Search the internet, and the theano-users mailing list in particular, for the message including the specific <something> (in this case, "local_shape_to_shape_i"). You may find a message indicating that the bug has already been identified. If it's been reported to the Theano developers, it may already have been fixed, though you may need to update to the bleeding-edge version of Theano directly from GitHub (i.e. pip install --upgrade may not be sufficient; see the command sketched below).
Even if you can't find any mention online, try updating to the bleeding-edge version if that's possible for you. The bug may already have been fixed there.
If the latest bleeding-edge version still exhibits the bug, report it on the theano-users mailing list.
Ignore it. Optimization failures do not cause invalid computations; the only side effect (at least in theory) is that the computation may not be as efficient as it might otherwise be.
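For reference, installing the bleeding-edge version at the time was typically done with a pip command along these lines (the exact URL or protocol may differ; check the Theano installation docs):

pip install --upgrade --no-deps git+https://github.com/Theano/Theano.git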
I am trying to simulate a repeat-until loop in Theano:
def method_a(arguments):
...
return result, theano.scan.until(t.eq(a,b))
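(For reference, a complete minimal loop of this shape, here a trivial counter, looks roughly like the sketch below, using the documented theano.scan_module.until spelling; the counting task and all names are made up for illustration.)

import theano
import theano.tensor as T

def step(acc):
    new_acc = acc + 1
    # Stop scanning once the accumulator reaches 10
    return new_acc, theano.scan_module.until(T.eq(new_acc, 10))

values, updates = theano.scan(step,
                              outputs_info=T.constant(0, dtype='int64'),
                              n_steps=100)
f = theano.function([], values, updates=updates)
print(f()[-1])  # 10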
I encountered the following strange behaviour. Let b be a constant. Whenever a is a constant, everything works fine. However, when a is a scalar, I get an error related to optimisation:
ERROR (theano.gof.opt): SeqOptimizer apply <theano.gof.opt.EquilibriumOptimizer object at 0x110d0d8d0>
ERROR (theano.gof.opt): Traceback:
ERROR (theano.gof.opt): Traceback (most recent call last):
File "[...]/lib/python2.7/site-packages/theano/gof/opt.py", line 196, in apply
sub_prof = optimizer.optimize(fgraph)
File "[...]/python2.7/site-packages/theano/gof/opt.py", line 82, in optimize
ret = self.apply(fgraph, *args, **kwargs)
File "[...]/python2.7/site-packages/theano/gof/opt.py", line 1665, in apply
gopt.apply(fgraph)
File "[...]/python2.7/site-packages/theano/scan_module/scan_opt.py", line 1300, in apply
if self.belongs_to_set(nd, subset):
File "[...]/python2.7/site-packages/theano/scan_module/scan_opt.py", line 1286, in belongs_to_set
rep.op.inputs)
File "[...]/python2.7/site-packages/theano/scan_module/scan_utils.py", line 452, in equal_computations
dx.type.dtype == dy.type.dtype and
AttributeError: 'NoneTypeT' object has no attribute 'dtype'
I'd appreciate it if someone could help me understand the error. I'm assuming the AttributeError doesn't refer to a or b, because I can print their dtype and see that they do have one. Other than that, I can't make any sense of it.
[Edit] This is not a fatal error. The code runs normally and the process finishes with exit code 0. It looks like Theano is trying to optimise the graph and fails to do so, which doesn't really impact the program.
The traceback indicates that in the function equal_computations() we didn't cover all cases when doing some comparisons.
I have a PR with a fix for it here:
https://github.com/Theano/Theano/pull/1928
thanks for the report.
Your [edit] section tells me that you cut some of the error message. If this appears during optimization as a warning, it means an optimization was just skipped. It is possible that the optimization simply doesn't apply, but it could be that, with the fix, the optimization now applies. If that is the case, there could be some speed-up from the fix.
Basically, I want to read the CPU temperature with Python. Please explain in layman's terms, as I have never done this on Windows before, nor have I had to work with WMI.
This is what I have at the moment:
import wmi

# Query the ACPI thermal zone via WMI (note the raw string for the namespace path)
w = wmi.WMI(namespace=r"root\wmi")
temperature_info = w.MSAcpi_ThermalZoneTemperature()[0]
print temperature_info.CurrentTemperature
(I got this code from this thread: Accessing CPU temperature in python)
However, on running the script, I get this error:
Traceback (most recent call last):
  File "C:\Users\Ryan\Desktop\SerialSystemMonitor", line 4, in <module>
    temperature_info = w.MSAcpi_ThermalZoneTemperature()[0]
  File "C:\Python27\lib\site-packages\wmi.py", line 819, in query
    handle_com_error ()
  File "C:\Python27\lib\site-packages\wmi.py", line 241, in handle_com_error
    raise klass (com_error=err)
x_wmi: <x_wmi: Unexpected COM Error (-2147217396, 'OLE error 0x8004100c', None, None)>
What can I do to get this to work?
According to the MSDN page on WMI Error Constants, the error you have received is:
WBEM_E_NOT_SUPPORTED
2147749900 (0x8004100C)
Feature or operation is not supported.
Presumably, then, your CPU does not provide temperature information through WMI. If your CPU doesn't expose this information, you're probably out of luck, at least as far as a straightforward solution in Python goes.
I assume you've tried the other option given in the answer you linked, using Win32_TemperatureProbe(); if you haven't, try it.
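A minimal attempt looks like the sketch below. On many machines this WMI class returns no instances or empty readings, so treat it as a long shot:

import wmi

w = wmi.WMI(namespace=r"root\cimv2")
# Win32_TemperatureProbe reports CurrentReading in tenths of degrees Kelvin,
# when the hardware populates it at all
for probe in w.Win32_TemperatureProbe():
    print probe.CurrentReading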
Just execute it as Admin. It worked for me.
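If running as Administrator does make the query succeed, note that MSAcpi_ThermalZoneTemperature reports CurrentTemperature in tenths of a Kelvin, so convert it before displaying:

# CurrentTemperature is in tenths of Kelvin; convert to degrees Celsius
raw = temperature_info.CurrentTemperature
celsius = raw / 10.0 - 273.15
print celsius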
I'm not sure whether the Grail browser is a good choice nowadays; however, I want to try it, because I have some problems with graphics running on Firefox-Fermi. The following is what I get after trying grail-0.6 (tgz):
# python grail.py
Traceback (most recent call last):
  File "grail.py", line 43, in ?
    from Tkinter import *
After installing "tkinter" adequately, I run "grail.py" again, and I get
# python grail.py
/root/grail-0.6/grailbase/app.py:6: DeprecationWarning: the regex module is deprecated; please use the re module
  import regex
/usr/lib/python2.4/regsub.py:15: DeprecationWarning: the regsub module is deprecated; please use re.sub()
  DeprecationWarning)
Traceback (most recent call last):
  File "grail.py", line 499, in ?
    main()
  File "grail.py", line 108, in main
    app = Application(prefs=prefs, display=display)
  File "grail.py", line 248, in __init__
    self.stylesheet = Stylesheet.Stylesheet(self.prefs)
  File "/root/grail-0.6/Stylesheet.py", line 21, in __init__
    self.load()
  File "/root/grail-0.6/Stylesheet.py", line 45, in load
    massaged.append((g, c), v % fparms_dict)
TypeError: append() takes exactly one argument (2 given)
but now I'm not able to understand the message at all. Could you advise me about this problem?
Wow - that's a blast from the past! My advice is to give up: Grail hasn't been touched in more than a dozen years. It's dead.
The error message you're getting stems from a change made way back in Python 1.6 (released 5 September 2000). Here's the message from the release notes:
The append() method for lists can no longer be invoked with more
than one argument. This used to append a single tuple made out of
all arguments, but was undocumented. To append a tuple, use
e.g. l.append((a, b, c)).
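For illustration, the difference in any Python from 1.6 on:

lst = []
lst.append((1, 2))   # OK: appends a single 2-tuple
lst.append(1, 2)     # TypeError: append() takes exactly one argument (2 given)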
So, you can:
Give up. Recommended ;-)
Install an ancient version of Python; or,
Change that line to
massaged.append(((g, c), v % fparms_dict))
and see what breaks next ;-)
About the next problem
Python 0.9.1 is extremely old, from early 1991. The language changed in many, many ways before 1.0 was released.
According to the old Grail home page, Grail 0.6:
requires Python 1.5 or newer, and Tcl/Tk 8.0 or newer.
So find Python 1.5 if you're determined to pursue this ;-) Note that the .append() semantics were changed in version 1.6, so the original .append() code that hurt you at first should still work OK in 1.5.