I'm trying to find out the speed of a live data transfer over USB on a Mac, run from the command line with the Android Debug Bridge.
Is there a way to do this with any Python packages?
Basically, I just want the script to show me the speed as it is shown at the bottom of a file-transfer window. If not with Python, any command-line utility that does the same is welcome.
Are you doing the file transfer inside Python, with a reader and a writer?
If so, you can read a piece into a buffer, write it out, update a progress bar, and repeat until the file is completely transferred.
The progressbar module can calculate and display the transfer rate for you; you just feed it updates on the writing progress.
See http://code.google.com/p/python-progressbar/ for more info and examples of the progressbar module.
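A minimal sketch of that buffered-copy loop, assuming the python-progressbar package is installed (paths and chunk size are placeholders):

    import os
    import progressbar

    def copy_with_speed(src_path, dst_path, chunk_size=64 * 1024):
        total = os.path.getsize(src_path)
        # the FileTransferSpeed widget does the rate calculation for you
        widgets = [progressbar.Percentage(), ' ', progressbar.Bar(), ' ',
                   progressbar.FileTransferSpeed()]
        bar = progressbar.ProgressBar(maxval=total, widgets=widgets).start()
        copied = 0
        with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
            while True:
                chunk = src.read(chunk_size)
                if not chunk:
                    break
                dst.write(chunk)
                copied += len(chunk)
                bar.update(copied)  # progressbar derives the transfer rate
        bar.finish()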
Edit: fixxer, you can use Python to check the size of the file(s) on the USB device and update the progressbar as the file grows.
This is not really measuring the transfer speed of the USB bus, but if you're transferring files it will give an indication of how fast that is going.
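A rough polling sketch of that idea, using the same widgets (the expected final size has to be known up front):

    import os
    import time
    import progressbar

    def watch_transfer(path, expected_size, interval=0.5):
        widgets = [progressbar.Percentage(), ' ', progressbar.Bar(), ' ',
                   progressbar.FileTransferSpeed()]
        bar = progressbar.ProgressBar(maxval=expected_size, widgets=widgets).start()
        size = 0
        while size < expected_size:
            time.sleep(interval)
            # the file on the USB device grows as the transfer proceeds
            size = os.path.getsize(path)
            bar.update(min(size, expected_size))
        bar.finish()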
If you are streaming a movie or flashing a chip, you'd have to talk to the USB bus directly.
Maybe look into http://www.libusb.org/ and its Python wrapper, https://github.com/walac/pyusb
I am currently working on a project that needs a live video stream and motion detection: when motion is detected, the Raspberry Pi records a 10-second video. I have put each feature in a different Python file, and I am using a single Pi camera. I have also created a file containing all the camera functionality so that I don't need to initialize the Picamera twice.

In the live stream file there is a button that lets the user activate the motion detection. When the button is clicked, an error is shown: "Failed to enable connection: Out of Resources" (note that the live stream is already running). I just want to know whether it is possible to run two programs simultaneously using a single Pi camera, and if so, how. Any help would be appreciated. Thank you in advance.
In general, AFAIK, it is not possible to access a single Raspberry Pi camera simultaneously from 2 separate processes.
I see a couple of possibilities which I'll describe in separate sections.
Use the shell and its tee command to duplicate your video stream to 2 separate processes, like this:
raspivid ... | tee >(MotionDetectionProcess) >(LiveStreamingProcess) > /dev/null
This is a bash "process substitution".
You might use raspividyuv or ffmpeg or some other program, or even one written by yourself, to read the camera and pass the data to stdout.
This approach is simple and very quick to set up: everything just reads or writes its stdin or stdout. On the downside, everything must be started at the same time, and the processes run in lock-step with each other.
Run a single video-capture process that simply writes the video frames into multiprocessing shared memory.
Write your other two programs separately, but such that they "attach" to the shared memory and get the frames from there rather than directly from the camera.
This approach is initially a bit harder to program, but things are more decoupled: you may choose not to bother running motion detection one day, or to start it later or sooner, and your live stream will be unaffected.
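A minimal sketch of the shared-memory idea with Python's multiprocessing.shared_memory (Python 3.8+). The resolution and the capture call are assumptions, and the two halves would of course live in separate processes:

    import numpy as np
    from multiprocessing import shared_memory  # Python 3.8+

    FRAME_SHAPE = (480, 640, 3)                # assumed resolution, RGB
    NBYTES = int(np.prod(FRAME_SHAPE))

    # capture process: create the block and keep writing frames into it
    shm = shared_memory.SharedMemory(name='cam_frame', create=True, size=NBYTES)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    # inside the capture loop you would do something like:
    #   frame[:] = camera.capture_array()      # hypothetical capture call

    # consumer process (live stream or motion detection): attach by name
    shm2 = shared_memory.SharedMemory(name='cam_frame')
    latest = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm2.buf)
    # process `latest` here; real code also needs a lock or sequence counter
    # so a consumer never reads a half-written frame

    shm2.close()
    shm.close()
    shm.unlink()                               # capture side removes the block on exit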
I'm trying to make a screen recorder program, and I also want to record the audio output.
I found a few other questions which had the same problem as mine, and the solutions were either outdated or they didn't work on my current computer.
Most solutions I found used VB-AUDIO cable to loopback audio, but I don't want my program to rely on other applications for it to function.
Is there any library I can use which has a proper loopback function that can be used to record and save computer audio output?
So I am working with a friend on developing a robot (using a Raspberry Pi). This robot will be an autonomous boat. The Raspbian image we are using for the Raspberry Pi already has ROS (specifically, ROS Kinetic) nicely installed on it, and I have confirmed that ROS is working.
For our robot boat, we have different features that we wish to include in it:
Getting GPS location
Getting audio via a hydrophone and processing it to detect a certain frequency range (i.e., I want the boat to detect when a sound of 8500-9000 Hz is clearly heard through the hydrophone)
Being able to communicate over XBee
So I have used ROS in the past and I am familiar with the concept of publishing and subscribing to topics. However, my friend says that ROS will cause performance issues because of its "overhead", claiming that ROS will slow down our audio processing or something.
Instead, he proposes the following alternative method:
Have each of the 3 aspects of our robot (mentioned above) in a different Python file.
When the Raspberry Pi starts up, have all of the Python files run automatically.
To pass information to each other (essentially mimicking the publish/subscribe functionality of ROS), the Python files will write to different text files (to "publish" values) and read from those text files (to "subscribe"), with each text file's contents overwritten whenever a new value comes in.
So... which method of passing information is better for our robot?
Using ROS
Using the aforementioned file writing/reading method proposed by my friend
Something else
Oh, and a few other things I should mention:
I know how to use ROS; my friend doesn't.
My friend has not actually finished writing the code for his file writing/reading idea, whereas ROS is already set up and good to go on the Raspberry Pi.
While I could find plenty of sites that list the various advantages of ROS, I could not find anything that compares ROS to my friend's method described above.
ROS has nodelets, which allow multiple nodes to live in the same process and communicate with each other without copy overhead, so less overhead than writing a file would incur.
http://wiki.ros.org/nodelet
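For a sense of scale, here is roughly what one of the three components looks like as a plain rospy node (the topic name, message type and payload are illustrative). Reliable delivery of exactly this kind of update, plus atomic writes and change notification, is what the text-file scheme would have to reimplement by hand:

    import rospy
    from std_msgs.msg import String

    def gps_node():
        pub = rospy.Publisher('gps_fix', String, queue_size=10)
        rospy.init_node('gps_node')
        rate = rospy.Rate(1)                    # publish once per second
        while not rospy.is_shutdown():
            pub.publish('lat,lon placeholder')  # hypothetical GPS reading
            rate.sleep()

    if __name__ == '__main__':
        try:
            gps_node()
        except rospy.ROSInterruptException:
            pass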
I'm looking for some help because I'm getting a bit frustrated with this... :-(
I have a headless Raspberry Pi 3 with a PiFi DAC+ audio card, basically a HiFiBerry clone. On the Pi I installed mpd, with mpc as a client.
On top of those I wrote a Python script that invokes mpc commands to control the underlying mpd daemon (load a playlist, play a stream, and so on).
Now the issue.
The overall audio setup, based on the hifiberry-dacplus overlay, works well: the sound is good and I'm fine with it. mpc and mpd work, and I can control all the functionality of mpd (at least the parts I need) through mpc without a flaw. But if I run my Python script, suddenly I cannot hear anything anymore, even though no specific errors are traced.
The scary thing is that, after aborting the script, I am no longer able to play any sound at all (I tried with several wav files using aplay), and again no specific errors show up in the log files. It looks like someone just "muted" the volume, yet alsamixer shows all playback levels at 100%. I have to reboot the Pi to get my sound back.
I checked for clues in the usual places:
/var/log/messages
/var/log/syslog
dmesg
boot.log
/var/log/mpd/mpd.log
I also ran aplay -vvv while audio was blocked and compared the output with a session where audio was running fine, but I didn't notice any difference.
I know it is very difficult to diagnose the problem without access to my system, but do you have any ideas on where else to look to understand what went wrong?
Just for info, here's my aplay -l output:
**** List of PLAYBACK Hardware Devices ****
card 0: sndrpihifiberry [snd_rpi_hifiberry_dacplus], device 0: HiFiBerry DAC+ HiFi pcm512x-hifi-0 []
Subdevices: 1/1
Subdevice #0: subdevice #0
Thank you!
Michele
EDIT: it seems there is some incompatibility between the audio board and a 16x2 LCD display I'm using to show the name of the stream being played. The display is a very common one, based on the HD44780 chip.
My code uses the Adafruit Python library available here to drive it, and I still have to figure out where the problem is: the audio board, per the HiFiBerry docs, is connected through GPIOs 2, 3, 18, 19, 20 and 21 (plus ground and +5V for power), so it shouldn't conflict with the LCD, which uses different pins, but I wouldn't bet on it.
Anyway, removing the LCD management part from the Python code (while leaving the display physically attached to the Raspberry Pi's pins) apparently solved the problem...
I'll keep this question updated; maybe it will be useful for someone else, who knows!
OK, I got it. As usual, I just went too fast with CTRL-C & CTRL-V without properly reading the code...
I didn't notice I had left this statement in my Python code:
lcd_backlight = 2 #GPIO pin to control lcd backlight
GPIO 2 (which is one of the two I2C-enabled pins on the Raspberry Pi) is actually not connected to the LCD; it is used by the audio board for configuration purposes. This way, whenever I tried to initialize the LCD, the audio board was somehow reconfigured, making it "mute", and the only way to reset the faulty configuration was to reboot the Pi itself.
Just leaving the default 'None' value for the backlight control pin (I do not need it) did the trick.
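In code, the fix amounts to this (the other pin assignments are illustrative, not my actual wiring):

    import Adafruit_CharLCD as LCD

    lcd_rs, lcd_en = 27, 22
    lcd_d4, lcd_d5, lcd_d6, lcd_d7 = 25, 24, 23, 17
    lcd_columns, lcd_rows = 16, 2
    lcd_backlight = None   # the default: no backlight pin, so GPIO 2 is left alone

    lcd = LCD.Adafruit_CharLCD(lcd_rs, lcd_en, lcd_d4, lcd_d5, lcd_d6, lcd_d7,
                               lcd_columns, lcd_rows, lcd_backlight)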
I'm working on a project to control my PC with a remote and an infrared receiver on an Arduino.
I need to simulate keyboard input from a process on Linux that listens to the Arduino output and generates the corresponding keystrokes. I can write it in Python or C++, but I think Python is easier.
After a lot of searching, I found many results for... Windows u_u
Does anyone have a library for this?
Thanks.
EDIT: I found that /dev/input/event3 is my keyboard. I think writing to it should simulate keyboard input; I'm searching for how to do that.
To insert input events into the Linux input subsystem, use the user-mode input device driver, uinput. This might help: http://thiemonge.org/getting-started-with-uinput (note that while the tutorial references /dev/input/uinput, the correct file on my Ubuntu 12.04 PC is /dev/uinput).
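A minimal sketch with the python-uinput wrapper (one binding among several; choosing it is my assumption, not something the tutorial prescribes). It needs the uinput kernel module loaded and write access to /dev/uinput:

    import time
    import uinput

    # declare up front which events the virtual keyboard may emit
    device = uinput.Device([uinput.KEY_H, uinput.KEY_I])
    time.sleep(1)                    # give udev a moment to register the device
    device.emit_click(uinput.KEY_H)  # press and release 'h'
    device.emit_click(uinput.KEY_I)  # press and release 'i'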
The most generic solution is to use pseudo-terminals: you connect the slave side (the tty) to the standard input and standard output of the program you want to monitor, and use the master side (the pty) to read from and write to it.
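A sketch of that approach with Python's standard pty module; 'cat' is just a stand-in for the program being monitored:

    import os
    import pty

    pid, master_fd = pty.fork()
    if pid == 0:
        # child: stdin/stdout are now the slave side of the pseudo-terminal
        os.execvp('cat', ['cat'])
    else:
        os.write(master_fd, b'hello\n')  # write to the child's standard input
        print(os.read(master_fd, 1024))  # read back the pty echo and cat's output
        os.close(master_fd)              # hangup: the child sees EOF and exits
        os.waitpid(pid, 0)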
Alternatively, you can create two pipes and connect them to the standard input and standard output of the program to be monitored before doing the exec. This is much simpler, but the pipes look like a file rather than a terminal to the monitored program.
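The pipe variant is nearly a one-liner with the standard subprocess module, which wires the pipes to the child's stdin and stdout before the exec for you:

    import subprocess

    proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(b'hello\n')
    print(out)  # b'hello\n' and no echo, because a pipe is not a terminal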