I am trying to use fastText with PyCharm. Whenever I run the code below:
import fastText
model = fastText.train_unsupervised("data_parsed.txt")
model.save_model("model")
The process exits with this error:
Process finished with exit code -1073740791 (0xC0000409)
What causes this error and what can be done to avoid it?
Are you using a Windows system? 0xC0000409 means a stack buffer overflow, as seen in this Windows help link.
Below is some advice, taken from this link, for solving similar types of issues.
STATUS_STACK_BUFFER_OVERRUN is a /GS exception. They are thrown when Windows detects 'tampering' of a security cookie protecting a return address. It is probable that you are writing something past the end of a buffer, or writing something to a pointer that is pointing to the wrong place. However it is also possible that you have some dodgy memory or otherwise faulty hardware that is tripping validation code.
One thing that you could try is to disable the /GS switch (project properties, look for C/C++ -> Code Generation -> Buffer Security Check) and recompile. Running the code again may well cause an error that you can trap and trace. I think /GS is designed not to give you any info for security reasons.
Another thing you could do is run the code as-is on a different PC and see if it fails there; if it doesn't, that may point to a hardware problem on your machine.
Other strategies are to reduce the size of the training file by removing some text, and to reduce the size of the vocabulary by running some text normalisation.
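As a rough sketch of that last suggestion, lowercasing the text and stripping punctuation before training can shrink the vocabulary considerably (data_parsed.txt is the file from the question; the specific normalisation rules and the output filename are just illustrative):

import re

# Minimal normalisation sketch: lowercase and strip punctuation so that
# e.g. "Word", "word," and "word" all map to a single vocabulary entry.
with open("data_parsed.txt", encoding="utf-8") as src, \
        open("data_normalised.txt", "w", encoding="utf-8") as dst:
    for line in src:
        line = line.lower()
        line = re.sub(r"[^\w\s]", " ", line)  # drop punctuation
        line = re.sub(r"\s+", " ", line)      # collapse runs of whitespace
        dst.write(line.strip() + "\n")

You would then train on data_normalised.txt instead.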
Related
I am currently writing a comparably large program in Python. Within the program I have a GUI which, after I made some changes, throws the following error:
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 40 (X_TranslateCoords)
Resource id in failed request: 0x40220f
Serial number of failed request: 2653
Current serial number in output stream: 2653
It only happens about every other time I start the program. Since the program is quite large, I cannot just post my entire code here. Does anybody have a good standard way to start looking for the error? It seems to be a problem with the Linux window manager, as I do not have the problem on Windows.
I am currently running Ubuntu 22.04
Oh, and if anybody could also tell me what the error actually means, that would be very nice.
Furthermore, it might be worth mentioning that the error started right after I began running parts of the program in a different thread.
Thanks in advance
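For what it's worth, sporadic X errors like this often appear when GUI calls are made from a worker thread, since Xlib and most toolkits are not thread-safe. A minimal sketch of the usual pattern (assuming Tkinter; all names here are illustrative) has the worker post results through a queue while only the main thread touches widgets:

import queue
import threading
import tkinter as tk

q = queue.Queue()

def worker():
    # Long-running work goes here; never touch widgets from this thread.
    q.put("done")

def poll():
    try:
        label.config(text=q.get_nowait())  # widgets updated on the main thread only
    except queue.Empty:
        pass
    root.after(100, poll)  # re-check the queue every 100 ms

root = tk.Tk()
label = tk.Label(root, text="working...")
label.pack()
threading.Thread(target=worker, daemon=True).start()
poll()
root.mainloop()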
I have seen a lot of posts about particular case-specific problems, but no fundamental, motivating explanation. What does this error:
RuntimeError: CUDA error: device-side assert triggered
mean? Specifically, what is the assert that is being triggered, why is the assert there, and how do we work backwards to debug the problem?
As-is, this error message is nearly useless for diagnosing any problem, because it seems to say only that "some code somewhere that touches the GPU" has a problem. The CUDA documentation also does not seem helpful in this regard, though I could be wrong.
https://docs.nvidia.com/cuda/cuda-gdb/index.html
When I shifted my code to work on CPU instead of GPU, I got the following error:
IndexError: index 128 is out of bounds for dimension 0 with size 128
So perhaps there is a mistake in the code which, for some strange reason, surfaces as a CUDA error on the GPU.
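For example (a hypothetical minimal reproduction, assuming PyTorch), an index equal to the dimension size fails with an immediate IndexError on the CPU, but on the GPU surfaces only as a device-side assert:

import torch

emb = torch.nn.Embedding(128, 16)  # valid indices are 0..127
idx = torch.tensor([127, 128])     # 128 is out of bounds

emb(idx)                  # CPU: raises IndexError immediately
# emb.cuda()(idx.cuda())  # GPU: "CUDA error: device-side assert triggered"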
When a device-side error is detected while CUDA device code is running, that error is reported via the usual CUDA runtime API error reporting mechanism. The usual detected error in device code would be something like an illegal address (e.g. attempt to dereference an invalid pointer) but another type is a device-side assert. This type of error is generated whenever a C/C++ assert() occurs in device code, and the assert condition is false.
Such an error occurs as a result of a specific kernel. Runtime error checking in CUDA is necessarily asynchronous, but there are at least three possible methods to start debugging this.
Modify the source code to effectively convert asynchronous kernel launches to synchronous kernel launches, and do rigorous error-checking after each kernel launch. This will identify the specific kernel that has caused the error. At that point it may be sufficient simply to look at the various asserts in that kernel code, but you could also use the second or third method below.
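When the kernels are launched from a Python framework such as PyTorch, one way to get the effect of synchronous launches without modifying the framework source (an approach I am assuming here, not one stated above) is the CUDA_LAUNCH_BLOCKING environment variable:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialised

import torch
# Run the failing code as usual; with blocking launches, the traceback now
# points at the launch that triggered the assert rather than at a later call.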
Run your code with cuda-memcheck. This is a tool something like "valgrind for device code". When you run your code with cuda-memcheck, it will tend to run much more slowly, but the runtime error reporting will be enhanced. It is also usually preferable to compile your code with -lineinfo. In that scenario, when a device-side assert is triggered, cuda-memcheck will report the source code line number where the assert is, as well as the assert itself and the condition that was false. You can see here for a walkthrough of using it (albeit with an illegal address error instead of assert(), but the process with assert() will be similar).
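For example (app.cu and train.py are placeholder names):

nvcc -lineinfo -o app app.cu   # compile with line info for readable reports
cuda-memcheck ./app            # run a binary under cuda-memcheck
cuda-memcheck python train.py  # or wrap a Python script that launches kernels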
It should also be possible to use a debugger. If you use a debugger such as cuda-gdb (e.g. on Linux), then the debugger will have back-trace reports that will indicate which line the assert was on when it was hit.
Both cuda-memcheck and the debugger can be used if the CUDA code is launched from a Python script.
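A sketch of such a session (train.py is again a placeholder):

cuda-gdb --args python train.py
(cuda-gdb) run
... the device-side assert fires ...
(cuda-gdb) backtrace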
At this point you have discovered what the assert is and where in the source code it is. Why it is there cannot be answered generically; it will depend on the developer's intention, and if it is not commented or otherwise obvious, you will need to intuit it somehow. The question of "how to work backwards" is also a general debugging question, not specific to CUDA. You can use printf in CUDA kernel code, and also a debugger like cuda-gdb, to assist with this (for example, set a breakpoint prior to the assert and inspect machine state - e.g. variables - when the assert is about to be hit).
With newer GPUs, instead of cuda-memcheck you will probably want to use compute-sanitizer. It works in a similar fashion.
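Usage mirrors cuda-memcheck, e.g.:

compute-sanitizer python train.py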
In my case, this error was caused because my loss function only accepts values between [0, 1], and I was passing other values.
So normalizing the input to my loss function solved it:
saida_G -= saida_G.min(1, keepdim=True)[0]  # shift each row so its minimum is 0
saida_G /= saida_G.max(1, keepdim=True)[0]  # scale each row so its maximum is 1
Read this: link
I am writing a Python program to analyze log files. So basically I have about 30000 medium-size log files and my Python script is designed to perform some simple (line-by-line) analysis of each log file. Roughly it takes less than 5 seconds to process one file.
So once I set up the processing, I just left it there, and after about 14 hours, when I came back, my Python script had simply paused right after analyzing one log file; it seems it had not written the analysis output for that file to the file system, and that was it. No further progress.
I checked the memory usage and it seems fine (less than 1 GB). I also tried writing to the file system (a touch test), and that also works as normal. So my question is: how should I proceed to debug the issue? Could anyone share some thoughts on that? I hope this is not too general. Thanks.
You may use the trace module ("Trace or track Python statement execution") and/or pdb, the Python Debugger module.
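For example, running the script under the trace module echoes every line as it executes, which can show exactly where it stalls (script.py is a placeholder name):

python -m trace --trace script.py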
Try this tool, https://github.com/khamidou/lptrace, with the command:
sudo python lptrace -p <process_id>
It will print every Python function your program invokes and may help you understand where your program is stuck or in an infinite loop.
If it does not output anything, your program has probably gotten stuck, so try
pstack <process_id>
to check the stack trace and find out where it is stuck. The output of pstack is C frames, but you may still find something useful in it to solve your problem.
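As an alternative not mentioned above, Python's standard-library faulthandler module can dump Python-level stack traces on a signal, which avoids having to read C frames:

import faulthandler
import signal

# Register early in the script; later, kill -USR1 <process_id> prints the
# Python stack of every thread without stopping the program.
faulthandler.register(signal.SIGUSR1)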
In PyCharm (JetBrains), I have been having trouble typing full statements without getting an interruption. I first thought it was due to me not having updated the software, so I updated it, but the problem remains.
So if I type any statement or word, PyCharm seems to delay before I can proceed. An example:
import csv
Even before I finish typing "import" - if I delay a keystroke - PyCharm begins to "think" and the window is not accessible for about one to two seconds (quite literally). I assume it is going to give me suggestions or show a tip/error about the code.
Any thoughts to prevent this from happening?
Edit:
Windows 8.1; PyCharm 2016.2
Code completion turned off via Settings -> Editor -> General -> Code Completion, but this did not solve the problem.
Key PC Spec:
Intel Core i5-337U
4GB Ram
64-bit
Edit2:
I receive this error when I run anything now, including simply print("test"):
Process finished with exit code -1073741511 (0xC0000139)
I will ask this as a separate question somewhere else, since it may be a separate problem altogether.
Try disabling code completion. I believe that your computer can't search through all of Python's libraries fast enough, so it freezes for a bit.
I have a rather large client-server network application, written in Python. I'm using select.poll to provide asynchronous capabilities. For the past six months, everything has worked fine. However, recently I changed some things and allowed the client to reliably log off from the server. It appeared at first glance that the client was never receiving the request, and furthermore, it was blocking. When I killed the process, I received the following output:
*** glibc detected *** /usr/bin/python: corrupted double-linked list: 0x0a9fea60 ***
======= Backtrace: =========
/lib/i386-linux-gnu/libc.so.6(+0x6cbe1)[0xd96be1]
/lib/i386-linux-gnu/libc.so.6(+0x6fc1c)[0xd99c1c]
/lib/i386-linux-gnu/libc.so.6(__libc_malloc+0x63)[0xd9b1d3]
/usr/lib/i386-linux-gnu/libxcb.so.1(+0x8ff6)[0xb30ff6]
/usr/lib/i386-linux-gnu/libxcb.so.1(+0x706d)[0xb2f06d]
/usr/lib/i386-linux-gnu/libxcb.so.1(+0x75b5)[0xb2f5b5]
/usr/lib/i386-linux-gnu/libxcb.so.1(xcb_writev+0x67)[0xb2f667]
/usr/lib/i386-linux-gnu/libX11.so.6(_XSend+0x14b)[0x59b42b]
/usr/lib/i386-linux-gnu/libX11.so.6(_XFlush+0x39)[0x59b889]
/usr/lib/i386-linux-gnu/libX11.so.6(XFlush+0x31)[0x57ba81]
/usr/lib/libSDL-1.2.so.0(+0x34dfe)[0x16adfe]
/usr/lib/libSDL-1.2.so.0(+0x37998)[0x16d998]
/usr/lib/libSDL-1.2.so.0(+0x393db)[0x16f3db]
/usr/lib/libSDL-1.2.so.0(SDL_PumpEvents+0x3d)[0x140d7d]
/usr/lib/libSDL-1.2.so.0(SDL_PollEvent+0x17)[0x140db7]
/usr/lib/libSDL-1.2.so.0(SDL_EventState+0x58)[0x140f78]
/usr/lib/libSDL-1.2.so.0(SDL_JoystickEventState+0x5b)[0x16810b]
/usr/lib/python2.7/dist-packages/pygame/joystick.so(+0x196d)[0x55896d]
/usr/lib/python2.7/dist-packages/pygame/base.so(+0x178a)[0x56078a]
/usr/lib/python2.7/dist-packages/pygame/base.so(+0x17c7)[0x5607c7]
/usr/bin/python(PyEval_EvalFrameEx+0x4332)[0x80de822]
/usr/bin/python(PyEval_EvalCodeEx+0x127)[0x80e11e7]
/usr/bin/python[0x8105a61]
/usr/bin/python(PyObject_Call+0x4a)[0x80a464a]
/usr/bin/python(PyEval_CallObjectWithKeywords+0x44)[0x80da034]
/usr/bin/python(Py_Finalize+0xc7)[0x8070ee1]
/usr/bin/python(Py_Main+0xc66)[0x805c109]
/usr/bin/python(main+0x1b)[0x805b25b]
/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0xd40e37]
/usr/bin/python[0x81074ad]
followed by a memory map, which I'm not posting for the sake of brevity. I ran the code under pdb and found that the client was blocking on the call to pollingObject.poll(0), which shouldn't be blocking. So I changed that call to select.select([socket], [], [], 0), still without success. I'm using PyGame, if that makes a difference, as I know it sometimes does. I'm completely lost here. I know that Python overrides malloc; could it have something to do with that?
I managed to fix it by implementing the network code in C and calling it from Python.
It looks to me like PyGame is checking for input events after the X connection has been closed, due to finalizers. Calling anything in Xlib with a Display * that's already been passed to XCloseDisplay means accessing already-freed memory, of course, and if that's what's going on it isn't surprising that glibc's heap becomes corrupted.
If my diagnosis is correct, you won't be able to truly fix it at the application level, but producing a minimal test case and submitting it to the PyGame developers might be productive.
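That said, one application-level mitigation that may be worth experimenting with (my assumption, not something the diagnosis above guarantees will help) is shutting pygame down explicitly before interpreter finalization, so that SDL tears down its X connection in a controlled order:

import atexit
import pygame

pygame.init()

# Quit pygame explicitly at exit so SDL closes the X connection itself,
# before interpreter finalization can run pygame code against a closed display.
atexit.register(pygame.quit)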