random: not enough state (1 bytes); ignored - python

Not sure what the above error means. I just installed ghmm on my Mac and get this error every time I do an import ghmm. I do not get this message from my ghmm install on my Linux machine, and other than that all functions appear to work fine.
I'm wondering if anyone has seen this before and if there's anything I can do to get rid of it. The only thing I did differently between the two installs was that autogen.sh was referring to "libtoolize", which doesn't exist on my Mac, so I changed it to its replacement "glibtoolize", which allowed it to compile and install fine.
Any suggestions on what this error actually means (and hopefully how I can solve it) would be great.
(I couldn't find the answer on Google, but the problem does not appear to be specific to ghmm.)

I'm willing to be corrected on this, but at a guess I'd say this has nothing to do directly with ghmm or your compile tools. I think the error message you're seeing comes from the BSD random number functions that OS X uses (they are documented here).
Assuming that ghmm is causing the warning (and not Python itself), it might be possible to configure the build process to use plain old rand or some other PRNG. Alternatively, maybe you can find the right place to add a call to initstate() (see the doc link above) to provide the state information it wants.
This bit from the man page probably points to your problem:
If initstate() is called with less than 8 bytes of state information, or if setstate() detects that the state information has been garbled, error messages are printed on the standard error output.

eaj is correct that initstate needs at least 8 bytes of state information. The best way to arrange this for ghmm is to pass either the --enable-gsl or the --with-rng=bsd option to ./configure. --with-rng=bsd makes the type "ghmm_rng_state_t" 8 bytes instead of 1. See rng.h in the ghmm directory.
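As an aside, the warning itself is easy to reproduce outside ghmm by calling the BSD initstate() from libc directly via ctypes. This is only an illustration of the size requirement on OS X's libc, not a fix for the ghmm build:
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.initstate.argtypes = [ctypes.c_uint, ctypes.c_char_p, ctypes.c_size_t]
libc.initstate.restype = ctypes.c_void_p

too_small = ctypes.create_string_buffer(1)
libc.initstate(1, too_small, 1)   # prints: random: not enough state (1 bytes); ignored

big_enough = ctypes.create_string_buffer(8)
libc.initstate(1, big_enough, 8)  # 8 bytes or more: no warning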

The ghmm web site says this about "libtoolize":
Mac OS X: 10.6 ships with a broken libtool which breaks the installation (and it also ships with Python 2.5, so you need an update for that). James Howard posted a solution on the mailing list: [Ghmm-list] Compiling in OS X 10.6
http://sourceforge.net/mailarchive/message.php?msg_id=25874107
HTH


Can't add rules to iptables, nothing gets committed

According to the documentation on rules, doing the following should add a simple rule to the iptables list of rules:
import iptc

# Build a rule that accepts UDP traffic from 127.0.0.1, tagged with a comment
rule = iptc.Rule()
rule.src = "127.0.0.1"
rule.protocol = "udp"
rule.target = rule.create_target("ACCEPT")
match = rule.create_match("comment")
match.comment = "this is a test comment"

# Insert the rule at the top of the INPUT chain of the filter table
chain = iptc.Chain(iptc.Table(iptc.Table.FILTER), "INPUT")
chain.insert_rule(rule)
However, running this example results in absolutely zero new rules.
I'm verifying this by doing:
iptables -L --line-number
Before I submit a bug issue, I'd like to know if anyone else has encountered this and if so, how you worked around it.
I'm running everything as root just to be on the safe side. I also tried verifying the rules by running another example from the same section of the documentation:
import iptc

table = iptc.Table(iptc.Table.FILTER)
for chain in table.chains:
    print("=======================")
    print("Chain ", chain.name)
    for rule in chain.rules:
        print("Rule", "proto:", rule.protocol, "src:", rule.src, "dst:",
              rule.dst, "in:", rule.in_interface, "out:", rule.out_interface)
        print("Matches:")
        for match in rule.matches:
            print(match.name)
        print("Target:", rule.target.name)
print("=======================")
(modified slightly to work with Python3).
This was to make sure there wasn't an issue with the auto-commit; however, the results were the same.
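(For reference, this is roughly what an explicit-commit version would look like; Table.autocommit, commit() and refresh() are the python-iptables names as far as I understand them, so treat this as a sketch rather than tested code:)
import iptc

table = iptc.Table(iptc.Table.FILTER)
table.autocommit = False          # batch changes instead of committing per call

chain = iptc.Chain(table, "INPUT")
rule = iptc.Rule()
rule.src = "127.0.0.1"
rule.protocol = "udp"
rule.target = rule.create_target("ACCEPT")
chain.insert_rule(rule)

table.commit()                    # push the batched changes to the kernel
table.refresh()                   # re-read the table so later reads see the rule
table.autocommit = True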
I will also point out that it did work for a short while, for roughly 3 additions to iptables. It might also work to do a systemctl restart iptables, but I'd like, if possible, to figure out why this is going wrong before resorting to the classic old "Windows trick" of rebooting things. (There is nothing in journald/systemd mentioning iptables either.)
Seeing as @larsks couldn't reproduce the issue, I dug a little further.
It appears that a system update had been performed (classic mistake, I apologize).
This causes the loaded kernel version to differ from the iptables kernel modules installed on disk. There are fixes in place that work around this for the iptables command itself, so you can still add rules that way.
However, using the python-iptables library does not work.
What the actual difference is is beyond me; I dug around a bit but couldn't locate where this would cause an issue.
Rebooting the machine is, in this instance, the only way I know of to solve the issue, unfortunately. That way the loaded kernel module and the installed tools match the version they're working against.
(Another solution would be to keep the old iptables command and libraries, i.e. back them up and point the libraries at the backed-up versions until a reboot can be made.)

Debugger times out at "Collecting data..."

I am debugging a Python (3.5) program with PyCharm (PyCharm Community Edition 2016.2.2 ; Build #PC-162.1812.1, built on August 16, 2016 ; JRE: 1.8.0_76-release-b216 x86 ; JVM: OpenJDK Server VM by JetBrains s.r.o) on Windows 10.
The problem: when stopped at some breakpoints, the Debugger window is stuck at "Collecting data", which eventually times out (with Unable to display frame variables).
The data to be displayed is neither special nor particularly large. It is somehow available to PyCharm, since a conditional breakpoint on some values of said data works fine (the program breaks) -- it looks like only the process of gathering it for display (as opposed to operational purposes) fails.
When I step into a function around the place where I have my breakpoint, its data is displayed correctly. When I go up the stack (to the calling function, the one I stepped down from and where I initially wanted the breakpoint), I am stuck with the "Collecting data" timeout again.
There have been numerous issues raised about the same problem since at least 2005. Some were fixed, some not. The fixes were usually updates to the latest version (which I have).
Is there a general direction I can go to in order to fix or work around this family of problems?
EDIT: a year later the problem is still there and there is still no reaction from the devs/support after the bug was raised.
EDIT April 2018: It looks like the problem is solved in the 2018.1 version; the following code, which used to hang when setting a breakpoint on the print line, now works (I can see the variables):
import threading

def worker():
    a = 3
    print('hello')

threading.Thread(target=worker).start()
I had the same issue with PyCharm 2018.2 when working on a complex Flask project with SocketIO.
When I put a breakpoint in the code and pressed the debug button, it stopped at the breakpoint, but the variables didn't load; it was just infinitely collecting data. I enabled Gevent compatibility and it resolved the issue. The setting is under Settings > Build, Execution, Deployment > Python Debugger ("Gevent compatible").
In case you landed here because you are using PyTorch (or any other deep learning library) and are trying to debug in PyCharm (torch 1.3.1, PyCharm 2019.2 in my case) but it's super slow:
Enable "Gevent compatible" in the Python Debugger settings, as linkliu mayuyu pointed out. The problem might be caused by debugging large deep learning models (a BERT transformer in my case), but I'm not entirely sure about this.
I'm adding this answer as it's the end of 2019 and this doesn't seem to be fixed yet. Furthermore, I think this affects many engineers using deep learning, so I hope my answer formatting triggers their Stack Overflow algorithm :-)
Note (June 2020):
While enabling Gevent compatible allows you to debug PyTorch models, it will prevent you from debugging your Flask application in PyCharm! My breakpoints stopped working, and it took me a while to figure out that this flag was the reason. So make sure to enable it only on a per-project basis.
I also had this issue when I was working on code using sympy and the Python module 'Lea', aiming to calculate probability distributions.
The action that resolved the timeout issue for me was to change the 'Variables Loading Policy' in the debug settings from the default 'Asynchronously' to 'Synchronously'.
I think that this is caused by some classes having a default __str__() method that is too verbose. PyCharm calls this method to display the local variables when it hits a breakpoint, and it gets stuck while loading the string.
A trick I use to overcome this is to manually edit the class that is causing the error and substitute its __str__() method with something less verbose.
As an example, it happens for the PyTorch _TensorBase class (and all tensor classes extending it), and can be solved by editing the PyTorch source torch/tensor.py, changing the __str__() method to:
def __str__(self):
    # All strings are unicode in Python 3, while we have to encode unicode
    # strings in Python 2. If we can't, let Python decide the best
    # characters to replace unicode characters with.
    return str() + ' Use .numpy() to print'
    # if sys.version_info > (3,):
    #     return _tensor_str._str(self)
    # else:
    #     if hasattr(sys.stdout, 'encoding'):
    #         return _tensor_str._str(self).encode(
    #             sys.stdout.encoding or 'UTF-8', 'replace')
    #     else:
    #         return _tensor_str._str(self).encode('UTF-8', 'replace')
Far from optimal, but it comes in handy.
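(A less invasive variant of the same idea - my own suggestion, not part of the original answer - is to monkey-patch the tensor string methods from your debug entry point instead of editing the installed PyTorch sources:)
import torch

def _short_str(self):
    # Short, cheap summary so the debugger doesn't hang rendering huge tensors
    return 'Tensor(shape={}, dtype={}) - use .numpy() to print'.format(
        tuple(self.shape), self.dtype)

# Replace the verbose representations only for this debug session
torch.Tensor.__str__ = _short_str
torch.Tensor.__repr__ = _short_str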
UPDATE: The error seems solved in the last PyCharm version (2018.1), at least for the case that was affecting me.
I met the same problem when trying to run some deep learning scripts written in PyTorch (PyCharm 2019.3).
I finally figured out that the problem was that I had set num_workers in DataLoader to a large value (in my case 20).
So, in debug mode, I would suggest setting num_workers to 1.
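(A minimal sketch of that suggestion; the toy dataset, batch size and worker count below are placeholders, not from the original code:)
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for the real one
dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))

# num_workers=20 spawns many loader processes, which the debugger struggles with;
# for debugging, keep data loading in a single worker (or the main process with 0)
loader = DataLoader(dataset, batch_size=16, num_workers=1)

for batch_x, batch_y in loader:
    pass  # set a breakpoint here; the variables should now load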
For me, the solution was removing manual watches every time before starting to debug. If there were any existing manual watches in the "variables" window, it would remain stuck at "Collecting data...".
Using Odoo or Other Large Python Server
None of the above solutions worked for me, despite trying them all.
It normally works but occasionally gives this annoying Collecting data... or sometimes Timed Out....
The solution is to restart PyCharm and set as few breakpoints as possible. After that it starts to work again.
I don't know why it does that (maybe too many breakpoints), but it worked.

Excel RTD server in Python not updating data

I've got excelRTDserver.py up and running in Excel 2010 (32-bit) by changing the EXCEL_TLB_MINOR value to 7. I can see the server in the add-ins list, and if I enter =RTD("Python.RTD.TimeServer","","seconds","5") into a cell, I get the current time. But it never updates. If I change the "5" to another number, I get an update, but after that initial change it never changes again.
How do I get it to update? I found someone else with a similar problem here, but no solution.
UPDATE: I've got a little further - there is an exception raised within ServerStart when casting the PyIDispatch callback object to an IRTDUpdateEvent callback object. Using this method to capture the error message, I get "Cannot create a file when that file already exists.". If I follow the suggestion here and use win32com.client.CastTo(CallbackObject, 'IRTDUpdateEvent'), I get "This COM object can not automate the makepy process - please run makepy manually for this object", but I have already run makepy for the Microsoft Excel 12.0 Object Library (1.6).
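(For clarity, the failing cast looks roughly like this inside the server's ServerStart; CallbackObject is the PyIDispatch pointer Excel passes in, and the class and method body here are only a sketch of my attempt, not the full excelRTDServer.py code:)
import win32com.client

class TimeServer:  # stands in for the RTD server class in excelRTDServer.py
    def ServerStart(self, CallbackObject):
        # Excel hands us a raw IDispatch pointer; wrapping it as IRTDUpdateEvent
        # is where the exception is raised.
        self.update_event = win32com.client.CastTo(CallbackObject,
                                                   'IRTDUpdateEvent')
        return 1  # non-zero tells Excel the server started successfully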
Any help would be greatly appreciated.
To work around this problem I've created a new project on github for pythoncom excel types:
https://github.com/pyxll/exceltypes
This includes a slightly modified version of excelRTDServer.py that uses the new type PyIRTDUpdateEvent instead of the win32com makepy wrapper, and so it now works in Excel 2010 (look for the comments 'EXCELTYPES_MODIFICATION' in exceltypes/demos/excelRTDServer.py).
To build the project you will need Visual Studio installed (it won't build with gcc), and you can build it using the setup.py included in the project as follows:
python setup.py install
If you're using Anaconda, for example, and need to force it to use Visual Studio instead of gcc, use the "--compiler=msvc" option.
If you want to use Visual Studio 2012 instead of the default 2010 add the following lines to setup.py:
from distutils import msvc9compiler
msvc9compiler.VERSION = 11
I think you may be out of luck.
According to the author of excelRTDServer.py in a recent python-win32 thread:
The message that this is in response to describes your exact problem, and it's recent, so maybe you already got this info directly, but in case you didn't...
I fear that things with IRTDUpdateEvent have changed with recent versions
of excel (since Excel 2007? I guess that's not so 'recent' anymore...).
While hunting around for news of interface changes, I came across this
thread in a java forum:
http://www.nevaobject.com/phpbb3/viewtopic.php?t=516
The part that worries me is this comment:
"Apparently in Excel 12 (Excel 2007) the RTD callback object that
implements dual IRTDUpdateEvent interface throws exception (generic COM
exception 0x80020009) when is called via IDispatch. If you use v-table
binding the call to UpdateNotify succeeds. I don't really know whether it
is a bug in Excel 12 or a feature."
So far I haven't been able to confirm this against the MSDN information...
But if this is true, it does explain the problem being seen. Many older
examples on the web, and pywin32+makepy treat this interface as IDispatch,
and wrap it accordingly.
I don't think we can fix this with pywin32 as it is right now. My
understanding is that it relies on IDispatch support. May need to look at
comtypes (http://starship.python.net/crew/theller/comtypes/) to wrap the
(new?) IRTDUpdateEvent objects, or maybe a C extension. :(
Python:
I get "This COM object can not automate the makepy process - please run makepy manually for this object", but I have already run makepy for Microsoft Excel 12.0 Object Library (1.6).
Yesterday at work, after spending a while reading your question, I forgot that this is Python and not Java :)). Well, the only thing I can think of now is that it seems you need the PIA (Primary Interop Assemblies) for Office 2010.
Edit, later: if you still have problems after trying what I told you, please comment rather than downvote, because this issue is uncommon.
JAVA:
This happens because the option to generate v-tables is missing.
You need to modify the ServerStart method and also the IRTDServer interface and the IRTDServer_Impl class, so that CallbackObject is a COM IUnknown. Then you need to generate the IRTDServer_Skel class by running the IBuilder.
Now you can generate a new Java wrapper for IRTDUpdateEvent to request v-table binding:
That error message is sometimes raised when you call it inside something like a for loop. Here is a hacky workaround: import time and use sleep() in your loop.
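(A rough sketch of that workaround; the loop body is a placeholder:)
import time

for i in range(10):
    # ... do whatever triggers the RTD update here (placeholder) ...
    time.sleep(1)  # pause each iteration so Excel has time to pick up the update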
The IRTDUpdateEvent problem (throwing an exception) as described here should be fixed in the latest Office 365 version.
"Apparently in Excel 12 (Excel 2007) the RTD callback object that implements dual IRTDUpdateEvent interface throws exception (generic COM exception 0x80020009) when is called via IDispatch. If you use v-table binding the call to UpdateNotify succeeds. I don't really know whether it is a bug in Excel 12 or a feature."
Therefore excelRTDserver.py should work fine with the latest version of Office. In other words, =RTD("Python.RTD.TimeServer","","seconds","5") should continuously get updated as expected.

Using cscope to browse Python code with VIM?

Has anyone managed to successfully use cscope with Python code? I have Vim 7.2 and the latest version of cscope installed; however, it doesn't get my code's tags correctly (always off by a couple of lines). I tried the pycscope script, but its output isn't supported by the modern version of cscope.
Any ideas? Or an alternative for browsing Python code with Vim? (I'm specifically interested in the extra features cscope offers beyond the simple tags of ctags.)
EDIT: I'm going to run through the process step by step:
Preparing the sources:
Exuberant ctags has an option: -x
Alternatively, ctags can generate a cross reference file which lists,
in human readable form, information about the various source objects
found in a set of language files.
This is the key to the problem:
ctags -x $(ls **/*.py); # replace with find if no zsh
will give you your database of source objects in a known format, described under
man ctags; # make sure you use exuberant ctags!
GNU Global is not limited to only the "out of the box" file types; any regular file format will serve.
Also, you can use gtags-cscope, which comes with global as mentioned in section 3.7 of the manual, for a possible shortcut using gtags. You'll end up with an input of a ctags tabular file which Global/gtags can parse to get your objects, or you can use the source for pycscope together with your ctags file of known format to get an input for the vim cscope commands in
if_cscope.txt.
Either way it's quite doable.
Perhaps you'd prefer idutils?
Definitely possible, since
z3c.recipe.tags
on PyPI makes use of both ctags and idutils to create tag files for a buildout, which is a method I shall investigate in a short while.
Of course, you could always use the greputils script below; it has support for idutils, we know idutils works with Python, and if that fails, there is also something called vimentry from this year that also uses Python, idutils and Vim.
Reference links (not complete list):
gtags vimscript, uses Gnu global. updated 2008
greputils vimscript, contains support for the *id idutils, 2005
lid vimscript, Ancient, but this guy is pretty good, his tag and buffer howtos are amazing 2002
An updated version of pycscope, 2010
Hopefully this helps you with your problem; it certainly helped me. I would have been quite sad tonight with a maggoty pycscope.
This seems to work for me:
Change to the top directory of your python code. Create a file called cscope.files:
find . -name '*.py' > cscope.files
cscope -R
You may need to perform a cscope -b first if the cross references don't get built properly.
From correspondence with the maintainer of cscope, this tool isn't designed to work with Python, and there are no plans to implement that compatibility. Whatever works now apparently works by mistake, and there is no promise whatsoever that it will keep working.
It appears I've been using an out-of-date version of pycscope. The latest version, 0.3, produces output that is supported by the cscope DB. The author of pycscope told me that he figured out the cscope DB output format from reading the cscope source code. That format isn't documented, on purpose, but it nevertheless currently works with pycscope 0.3, which is the solution I'll be using.
I'm going to accept this answer since, unfortunately, no other answer provided help even after a bounty was declared. No answers are upvoted, so I honestly have no idea where the bounty will go.
There is a wonderful Python-mode-klen plugin. If you have it and rope (the Python refactoring library) installed, then going to the definition of a particular term is as simple as <C-c>g or <C-c>rag (the first is a filetype mapping, the second a global one). There are many more useful features, some of them useless for me. All of them can be disabled. Features from the list of questions found in cscope-intro:
Where is this symbol used? <C-c>f. Rather confusing though, as the results in the quickfix list show - instead of the actual lines (though they point to the correct location). Maybe it will be fixed.
Where is it defined?, What is this global symbol's definition?, Where is this function in the source files? <C-c>g
What is <...> global symbol's definition? <C-c>raj
Not very much, but then I am not a very experienced user of ropevim.
I had the same question you have; after browsing the internet, I found a way to fix this:
Create a Python script, cscope_scan.py:
import os

codeRootDir = os.getcwd()

__revision__ = '0.1'
__author__ = 'lxd'

FILE_TYPE_LIST = ['py']

if __name__ == '__main__':
    # Collect all files with the listed extensions into cscope.files
    f = open('cscope.files', 'w')
    for root, dirs, files in os.walk(codeRootDir):
        for file in files:
            for file_type in FILE_TYPE_LIST:
                if file.split('.')[-1] == file_type:
                    f.write('%s\n' % os.path.join(root, file))
    f.close()
    # Build the cscope database
    cmd = 'cscope -bk'
    os.system(cmd)
Execute this script under your code's root folder; this will generate cscope.files and then execute cscope -b. I don't know what's happening on my computer - the last two lines aren't working well there - but I think manually typing cscope -bk afterwards is acceptable :)
This hack also seems to force cscope to go through Python files:
cscope -Rb -s *
That is, if you accept that cscope is apparently not designed to work with Python.
A superset question (any language, any tool): How to find all occurrences of a variable in Vim?

Vim Python omni-completion failing to work on system modules

I'm noticing that even for system modules, code completion doesn't work too well.
For example, if I have a simple file that does:
import re
p = re.compile(pattern)
m = p.search(line)
If I type p., I don't get completion for the methods I'd expect to see (I don't see search(), for example, but I do see others, such as func_closure() and func_code()).
If I type m., I don't get any completion whatsoever (I'd expect .groups() in this case).
This doesn't seem to affect all modules. Has anyone seen this behaviour and knows how to correct it?
I'm running Vim 7.2 on WinXP, with the latest pythoncomplete.vim from vim.org (0.9), running Python 2.6.2.
Completion for this kind of thing is tricky, because it would need to execute the actual code to work.
For example p.search() could return None or a MatchObject, depending on the data that is passed to it.
This is why omni-completion does not work here, and probably never will. It works for things that can be statically determined, for example a module's contents.
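(A tiny illustration of why static analysis struggles here; the pattern and strings are made up:)
import re

p = re.compile(r"\d+")

m1 = p.search("abc")     # no match: returns None
m2 = p.search("abc123")  # match: returns a match object with .groups() etc.

# The return type depends on the runtime data, so a static completer cannot
# know whether attributes like .groups() will be available on m.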
I never got the built-in omnicomplete to work for any language. I had the most success with pysmell (which seems to have been updated slightly more recently on GitHub than in the official repo). I still didn't find it reliable enough to use consistently, but I can't remember exactly why.
I've resorted to building an extensive set of snipMate snippets for my primary libraries and using the default tab completion to supplement them.
