First of all, I am quite new to both Python and Linux.
That said, I am trying to communicate with an FTDI UM232H chip using the pylibftdi library.
I am running my scripts on Linux Ubuntu 12.04.
I installed the library that I got here:
http://pylibftdi.readthedocs.org/en/latest/
and apparently everything worked fine.
I was also able to run some of the examples successfully.
I then tried to write some code to communicate with the device: I wired it in a bus-powered configuration (to get power from the USB), then short-circuited the TX and RX pins so that whatever I send on TX is read back on RX.
I do not get any error, but I am not able to read back anything on RX.
Here is my very simple code:
import pylibftdi as p
import time
test = p.Driver()
print test.list_devices()
#This is not working
#print test.libftdi_version()
dev1 = p.Device(device_id='FTUBL36A', mode='t')#, chunk_size = 0)
dev1.flush()
dev1.baudrate = 9600
nbytes = dev1.write('Hello World')
time.sleep(1)
res = dev1.read(nbytes)
print 'res: ', res
Also, I am not able to get the libftdi_version information, even though I installed it.
Does anybody have an idea of what I am doing wrong? Has anyone else ever experienced such a problem?
Thanks in advance!
I'm afraid I don't have a full answer, but some observations (I would make this more concise and put as a comment, but I don't have sufficient rep).
Disclaimer: I'm the author of pylibftdi
libftdi version / installation
The Ubuntu libftdi package (even latest 13.10) is 0.20. This is particularly confusing / annoying since the Ubuntu package name is 'libftdi1'. Prior to (real) libftdi 1.0, there isn't a ftdi_get_library_version() function, so libftdi_version() won't work with the Ubuntu default package. The next version of pylibftdi recognises this and gives an appropriate response.
To install 1.0, follow the instructions at http://developer.intra2net.com/mailarchive/html/libftdi/2013/msg00014.html (e.g. the following worked for me - note I'd previously had the Ubuntu libftdi1 package installed, and other dependencies may be required):
$ sudo apt-get install cmake libusb-1.0
$ git clone git://developer.intra2net.com/libftdi
$ cd libftdi
$ git checkout tags/v1.0
$ mkdir build; cd build
$ cmake ..
$ make
$ sudo make install
$ sudo ldconfig # ensure the new library is recognised
Following this, getting the library version should work:
$ python -c 'from pylibftdi import Driver; print Driver().libftdi_version()'
(1, 0, 0, '1.0', 'v1.0')
Data reading errors with UM232H
Note that the latest libftdi (repeat the steps above but use git checkout master) seems to work slightly better for me, and your program above works perfectly for me with a UM232H (I obviously omitted the device_id=... parameter, but otherwise left it unchanged). If I replace 'Hello World' with 'Hello World' * 10, writing and reading a longer string, then I get a truncated value returned, typically only 29 bytes. I'm not sure why it's this value; with earlier libftdi versions it seemed to return 17 bytes consistently. Very odd.
With other devices (UB232R, UM232R), this all works perfectly as expected, so in the (admittedly unlikely) event you have a choice of device, you could consider switching... Note the FT232H chip is relatively new and support in libftdi may not be as solid as for the older devices - but equally possible is that I'm incorrectly assuming it should work in a similar way to the older devices in pylibftdi.
Other thoughts
I've tried blacklisting ftdi_sio (add blacklist ftdi_sio to the end of /etc/modprobe.d/blacklist.conf), which stops the Linux kernel doing who-knows-what with the device on plugging it in (without the blacklist, the Rx/Tx LED flashes several times on plug-in and the ftdi_sio module is loaded). I'm not sure this is necessary or makes a difference though.
Note that there's no guarantee that a read() will return anything previously written to the device (assuming an external Tx->Rx loopback), due to internal buffering in the device and the various driver layers (including USB). These are just streams, and framing should be done in the application layer. Having said that, it 'just works' on a UB232R, and even taking this into account the UM232H seems to have issues with pylibftdi in serial mode.
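Since reads are stream-oriented, one workaround is to poll the device's read() until the expected number of bytes has arrived or a timeout expires. A minimal, device-independent sketch (read_fn stands for something like dev1.read with the device opened in binary mode; the helper name is mine, not part of pylibftdi):

```python
import time

def read_exact(read_fn, nbytes, timeout=1.0, poll=0.01):
    """Call read_fn until nbytes bytes have arrived or the timeout
    expires; returns whatever was collected (possibly short)."""
    buf = b''
    deadline = time.time() + timeout
    while len(buf) < nbytes and time.time() < deadline:
        chunk = read_fn(nbytes - len(buf))
        if chunk:
            buf += chunk
        else:
            time.sleep(poll)  # nothing yet; back off briefly
    return buf
```

With the loopback wired up, something like read_exact(dev1.read, 11) would then wait up to a second for the echoed 'Hello World' rather than relying on a single read() call.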
Bitbang mode seems to work fine for UM232H / pylibftdi.
I'll continue investigating and update this answer if I find out anything more.
Related
When I use a plugin that requires Python, it can't find it and barfs.
The places that seem to be searched are:
In the output of vim --version I see both:
+python/dyn
+python3/dyn
However :echo has("python3") returns 0.
I'm not sure if this is compile time config, or runtime-configurable via .vimrc.
I'm not a python developer, and the few times I've ventured into that world were in the middle of the Python 2/Python 3 mess, which turned me off completely. I've played around enough to have configured pyenv, it seems, and get:
╰─$ which python
/Users/benlieb/.pyenv/shims/python
╰─$ python --version
Python 3.10.3
Can anyone help shed light on what to do to get python3 findable/usable in my vim?
Update:
Following #romainl's suggestion below I set in my .vimrc
set pythonthreedll=/Users/benlieb/.pyenv/shims/python
But I am getting the following error:
+python/dyn and +python3/dyn are described here: :help python-dynamic.
By default, :help 'pythonthreedll' points to:
/opt/homebrew/Frameworks/Python.framework/Versions/3.10/Python
because MacVim is built against that version. The message in your screenshot says that there is nothing at that path. In order to have a working Python 3 interface, you can either:
install Python 3.10 via homebrew,
or point pythonthreedll to a valid path.
For example, I don't use Homebrew so the default value is useless to me, but I use MacPorts so this is my pythonthreedll:
set pythonthreedll=/opt/local/Library/Frameworks/Python.framework/Versions/3.10/lib/libpython3.10.dylib
After some time, I found the following works, though it was not a fun path of discovery.
let &pythonthreedll = trim(system("pyenv which python"))
I was playing around with the Scapy sniff function and I wanted to add a filter into the parameters. So I added this filter:
pkt = sniff(count=1, filter='arp')
and the output I receive is:
WARNING: Cannot set filter: libpcap is not available. Cannot compile filter !
I still get a packet that was sniffed, but for some reason the filter is not working.
I am running Mac OS Big Sur. I have libpcap installed using Homebrew and I have tcpdump installed using Homebrew.
I also saw online that you could manually initialize pcap on Scapy using
conf.use_pcap = True
However when I type this in I get:
WARNING: No libpcap provider available ! pcap won't be used
I'm sure it is just a small fix but I can't seem to figure out what I am doing wrong. If anyone can help that would be amazing!
Older versions of Python 3 assume that, on macOS, all shared libraries are in files located in one of a number of directories.
That is not the case in Big Sur; instead, a cache file is generated for system shared libraries, and at least some of the libraries from which the cache file is generated are not shipped with the OS.
This is one of the issues in CPython issue 41100, "Support macOS 11 and Apple Silicon Macs"; the fix is to look in the shared library cache as well as in the file system.
That issue says
Thank you to everyone who contributed to this major undertaking! A particular thank you to Lawrence for doing much of the initial work and paving the way. Now that 3.8 also supports Big Sur and Apple Silicon Macs as of the imminent 3.8.10 release, it's time to close this issue. If new concerns arise, please open or use other issues.
So a sufficiently recent version of Python should fix this issue.
tldr:
$ brew install libpcap
$ ln -s /usr/local/opt/libpcap/lib/libpcap.a /usr/local/lib/libpcap.a
$ ln -s /usr/local/opt/libpcap/lib/libpcap.dylib /usr/local/lib/libpcap.dylib
Explanation (applicable for Python 3.9.1, Scapy 2.4.5 on Big Sur with libpcap installed by brew):
When you debug the Scapy sniff function, after a while you get to scapy.libs.winpcapy, line 36:
_lib_name = find_library("pcap")
find_library is located in ctypes.util, for POSIX it starts on line 72. On line 73 you can see that the library is expected as one of these filenames ['libpcap.dylib', 'pcap.dylib', 'pcap.framework/pcap'], being fed to dyld_find.
dyld_find is located in ctypes.macholib.dyld on line 121. If you iterate through the chain on line 125 yourself, you find out that dyld_find is trying to succeed with one of these paths:
/usr/local/lib/
/Users/<user>/lib/
/usr/local/lib/
/lib/
/usr/lib/
In my case none of them contained the libpcap library, which brew installs in a different location.
The library sits in /usr/local/opt/libpcap/lib/.
And here you go: you just need to get the file libpcap.dylib (nothing wrong with libpcap.a too) into one of the paths searched by dyld_find. The two symlinks above are one of several possible solutions.
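To check whether the lookup now succeeds, you can probe it the same way Scapy does (find_library returns None while the library is still unfindable):

```python
from ctypes.util import find_library

# This is exactly the call Scapy's winpcapy module makes; on macOS it
# walks the dyld search paths listed above.
path = find_library("pcap")
print("libpcap resolved to:", path)  # None until a symlink/copy is in place
```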
I am working on node2vec in Python, which uses Gensim's Word2Vec internally.
When I am using a small dataset, the code works well. But as soon as I try to run the same code on a large dataset, the code crashes:
Error: Process finished with exit code 134 (interrupted by signal 6: SIGABRT).
The line which is giving the error is
model = Word2Vec(walks, size=args.dimensions,
window=args.window_size, min_count=0, sg=1,
workers=args.workers, iter=args.iter)
I am using PyCharm and Python 3.5.
What is happening? I could not find any post which could solve my problem.
You are almost certainly running out of memory – which causes the OS to abort your memory-using process with the SIGABRT.
In general, solving this means looking at how your code is using memory, leading up to and at the moment of failure. (The actual 'leak' of excessive bulk memory usage might, however, be arbitrarily earlier - with only the last small/proper increment triggering the error.)
Specifically with the usage of Python, and the node2vec tool which makes use of the Gensim Word2Vec class, some things to try include:
Watch a readout of the Python process size during your attempts.
Enable Python logging to at least the INFO level to see more about what's happening leading-up to the crash.
Further, be sure to:
Optimize your walks iterable to not compose a large in-memory list. (Gensim's Word2Vec can work on a corpus of any length, including those far larger than RAM, as long as (a) the corpus is streamed from disk via a re-iterable Python sequence; and (b) the model's number of unique word/node tokens can be modeled within RAM.)
Ensure the number of unique words (tokens/nodes) in your model doesn't require a model larger than RAM allows. Logging output, once enabled, will show the raw sizes involved just before the main model-allocation (which is likely failing) happens. (If it fails, either: (a) use a system with more RAM to accommodate your full set of nodes; or (b) use a higher min_count value to discard more less-important nodes.)
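To illustrate the streaming point: a re-iterable corpus can be a tiny class that re-opens a walks file on every pass, so the full list of walks never has to live in RAM. (The one-walk-per-line file format and the class name here are my assumptions, not part of node2vec.)

```python
class WalkCorpus(object):
    """Re-iterable corpus: yields one walk (a list of node tokens)
    per line of the file, re-opening it on each pass so Gensim can
    iterate over the data multiple times without an in-memory list."""
    def __init__(self, path):
        self.path = path

    def __iter__(self):
        with open(self.path) as f:
            for line in f:
                yield line.split()
```

Passing Word2Vec something like WalkCorpus('walks.txt') instead of an in-memory walks list keeps the corpus out of RAM entirely.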
If your Process finished with exit code 134 (interrupted by signal 6: SIGABRT) error does not involve Python, Gensim, & Word2Vec, you should instead:
Search for occurrences of that error combined with more specific details of your triggering situations - the tools/libraries and lines-of-code that create your error.
Look into general memory-profiling tools for your situation, to identify where (even long before the final error) your code might be consuming almost-all of the available RAM.
If you're running macOS v10.15 (Catalina), this might help you.
For me, I started seeing this error right after the upgrade to Catalina.
Execute the following commands one by one in Terminal, and you should be good:
brew update && brew upgrade && brew install openssl
cd /usr/local/Cellar/openssl/1.0.2t/lib
sudo cp libssl.1.0.0.dylib libcrypto.1.0.0.dylib /usr/local/lib/
cd /usr/local/lib
mv libssl.dylib libssl_bak.dylib
mv libcrypto.dylib libcrypto_bak.dylib
sudo ln -s libssl.1.0.0.dylib libssl.dylib
sudo ln -s libcrypto.1.0.0.dylib libcrypto.dylib
I found this in one of the Apple forums (but I can't seem to recollect exactly where).
Also, some blessed soul has also written a batch for this.
It can be found in this gist.
I had the same issue, and finally I figured it out. The reason for me was that my Keras version, 2.2.0, was too high.
After I changed the version to 2.0.1, it worked.
For me the problem was with the Snowflake connector Python library running on macOS v10.15 (Catalina).
I found the solution from user VikR in a blog post, given in answer 59538581 (now deleted from this page): Python Abort trap: 6 fix after Catalina update by Danny Bryant. It explains that the SSL libraries need to be placed back on your Mac's operating system path and gives the steps to do it. It also lists the steps to upgrade your libraries using brew and pip3.
Here are the steps that I followed to get my Python script running again.
brew update
brew upgrade
cd /usr/local/lib
ln -s /usr/local/Cellar/openssl@1.1/1.1.1j/lib/libssl.1.1.dylib libssl.dylib
ln -s /usr/local/Cellar/openssl@1.1/1.1.1j/lib/libcrypto.1.1.dylib libcrypto.dylib
pip3 install --upgrade snowflake-connector-python
In my case I did not have to install OpenSSL, as I had already installed it. Please read Bryant's page for more detail.
Note that
My version of OpenSSL is of course later than Bryant's instructions. Your version will most likely be later, too, compared to what I used here.
The Homebrew /Cellar/ directory structure was slightly different for me versus when Bryant wrote his instructions. It may have changed again when you read this.
I chose to link the libraries directly rather than linking to copies of the libraries, as Bryant did.
My Homebrew /Cellar/ and /usr/local/lib folders actually needed a fair amount of user ownership changes. Since that wasn't related to the original question, I omitted those steps.
Use:
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
Set this before importing the library that crashes. It tells Intel's OpenMP runtime to tolerate a duplicate copy of itself being loaded, which is a common cause of this abort on macOS; note that it suppresses the safety check rather than fixing the underlying duplicate-library problem.
I've been trying to get PyOpenCL and PyCUDA running on a Linux Mint machine. I have things installed but the demo scripts fail with the error:
pyopencl.cffi_cl.LogicError: clgetplatformids failed: PLATFORM_NOT_FOUND_KHR
Configuration
$ uname -a && cat /etc/lsb-release && lspci | grep NV
Linux 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
DISTRIB_DESCRIPTION="Linux Mint 17.3 Rosa"
01:00.0 VGA compatible controller: NVIDIA Corporation GK208 [GeForce GT 730] (rev a1)
Relevant installed packages:
libcuda1-352-updates
libcudart5.5:amd64
nvidia-352-updates
nvidia-352-updates-dev
nvidia-cuda-dev
nvidia-cuda-toolkit
nvidia-opencl-icd-352-updates
nvidia-profiler
nvidia-settings
ocl-icd-libopencl1:amd64
ocl-icd-opencl-dev:amd64
opencl-headers
python-pycuda
python-pyopencl
python3-pycuda
python3-pyopencl
Research
This post describes a scenario in which the package-manager-installed OpenCL/CUDA implementations don't set up some symlinks correctly. That issue doesn't seem to be present on my system.
There was a version number mismatch between the graphics drivers (were nvidia-340) and the nvidia-opencl package (352). I updated the graphics drivers to nvidia-352-updates-dev, but the issue remains.
There is a bug in Arch linux that seems to revolve around the necessary device files not being created. However, I've verified that the /dev/nvidia0 and /dev/nvidiactl exist and have permissions 666, so they should be accessible.
Another Stackoverflow post suggests running the demos as root. I have tried this and the behavior does not change.
Older installation instructions for CUDA/OpenCL say to download drivers directly from the NVIDIA website. I'm not sure this still applies, so I'm holding off on that for now since there seem to be plenty of relevant packages in the repositories.
The same error, but for an ATI card on a different Linux system, was resolved by putting proper files in /usr/lib/OpenCL/vendors. That path isn't used on my system. However, I do have /etc/OpenCL/vendors/nvidia.icd, which contains the line libnvidia-opencl.so.1, suggesting my issue is dissimilar.
This error has been observed on OSX, but for unrelated reasons. Similar error messages for PyCUDA also appear to be unrelated.
This error can occur under remote access since the device files are not initialized if X is not loaded. However, I'm testing this in a desktop environment. Furthermore, I've run the manual commands suggested in that thread just to be sure, and they are redundant since the relevant /dev entries already exist.
There is a note here about simply running a command a few times to get around some sort of temporary glitch. That doesn't seem to help.
This post describes how the similar cuInit failed: no device CUDA error was caused by not having the user in the video group. To check, I ran usermod -a -G video $USER, but it did not resolve my issue.
In the past, routine updates have broken CUDA support. I have not taken the time to explore every permutation of package version numbers, and it's possible that downgrading some packages may change the situation. However, without further intuition about the source of the issue, I'm not going to invest time in doing that since I don't know whether it will work.
The most common google search result for this error, appearing four times on the first pages, is a short and unresolved email thread on the PyOpenCL list. Checking the permissions bits for /dev/nvidia0 and /dev/nvidiactl is suggested. On my machine user/group/other all have read and write access to these devices, so I don't think that's the source of the trouble.
I've also tried building and installing PyOpenCL from the latest source, rather than using the version in the repositories. This is failing at an earlier phase, which suggests to me it is not building correctly.
Summary
The issue would appear to be that PyCUDA/PyOpenCL cannot locate the graphics card. There are several known issues that can cause this, but none of them seem to apply here. I'm missing something, and I'm not sure what else to do.
I ran cProfile on a Python 3 script, which worked nicely, then tried to visualize the output using RunSnakeRun. However, I got an empty screen and the error 'bad marshal data'.
I removed the .pyc file, but that did not help either.
The code I used to install runsnake was:
sudo apt-get install python-profiler python-wxgtk2.8 python-setuptools
sudo easy_install SquareMap RunSnakeRun
I am using Ubuntu.
Many thanks.
Note: I should add that I installed everything while py3k was activated.
TL;DR: This error occurs when profiling in Python 2.x and viewing the profile in Python 3.x or vice versa.
I had the same problem. As far as I can tell, the RunSnakeRun package has not been ported to Python 3. At least, I could pip-install it under Python 2 but not under Python 3 (SyntaxError). Further, I think the output format of cProfile is not compatible between Python 2 and 3. I did not take the time to find a definitive confirmation of this, but the docs for pstats.Stats(*filenames, stream=sys.stdout) do say: "The file selected by the above constructor must have been created by the corresponding version of profile or cProfile. To be specific, there is no file compatibility guaranteed with future versions of this profiler, and there is no compatibility with files produced by other profilers." This seems to be the origin of your problem. For example, I made a profile output from Python 3:
import cProfile
cProfile.run('some code to profile', 'restats')
and tried to open it in RunSnakeRun, and got the same marshal error you got. Further, if I do
import pstats
p = pstats.Stats('restats')
p.strip_dirs().sort_stats(-1).print_stats()
in Python 3, it works like a charm. If I do it in Python 2, it gives the marshal error. Now, RunSnakeRun is executed in Python 2 (unless you found some way to make it run in Python 3). So, my guess is that you performed your profiling in Python 3 and are using tools relying on Python 2 to analyze it, and those tools expect the output to be compatible with Python 2.
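The compatibility point is easy to sanity-check in a single interpreter: writing and reading the profile with the same Python version works cleanly (this mirrors the restats snippets above):

```python
import cProfile
import pstats

# Write a profile and read it back with the SAME interpreter;
# no marshal error occurs because the versions match.
cProfile.run('sum(range(1000))', 'restats')
stats = pstats.Stats('restats')
stats.strip_dirs().sort_stats('cumulative').print_stats(5)
```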
The RunSnakeRun project seems to have been inactive for a while now (the copyright on the home page is 2005-2011) and there is no indication that it will be ported to Python 3. Considering an alternative visualization tool might be the best way to go if you want to develop in Python 3. pyprof2calltree in combination with KCachegrind worked fine for me on Linux. It can provide a similar visual view of the profiling output to what you would get from RunSnakeRun.
I also ran into the same problem, and I think there's no (good) way to use RunSnakeRun with Python 3 (as was already mentioned in the previous answer). However, SnakeViz might help. It's a relatively intuitive graphical overview of profiling data that, like RunSnakeRun, builds on top of profile outputs. Nice bonus: it also works in Jupyter notebooks!