I want to create a program that can read and store the data from a QR scanning device, but I don't know how to get the input from the barcode scanner as an image, or how to save it in a variable to read later with OpenCV.
Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except very quickly), with an end-of-line character at the end (like an Enter keypress).
Using a Python script, all you need to do is start the script, connect a scanner, scan something, and read the input (STDIN) of the script. If you build a script that is always receiving input and storing or processing it, you can do whatever you please with the data.
A QR code scanner works the same way as a barcode scanner, immediately outputting the encoded data as text. Just collect this via the STDIN of a Python script and you're good to go!
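A minimal sketch of that loop, assuming the scanner is in its default keyboard-emulation mode and each scan arrives as one line on stdin (`handle_scan` is a placeholder name for whatever storage or OpenCV processing you want to do):

```python
import sys

def handle_scan(line):
    """Process one scanned code; the scanner appends a newline, so strip it."""
    code = line.strip()
    # Store or process the code here; for the sketch we just return it.
    return code

if __name__ == "__main__":
    # The scanner "types" each code followed by Enter, so iterating stdin
    # yields one scanned code per line until the script is stopped.
    for line in sys.stdin:
        print("Scanned:", handle_scan(line))
```

Run the script, focus its terminal, and pull the trigger: each code prints as it is scanned.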
I am currently working on a project where I need a live video stream and motion detection: when motion is detected, the Raspberry Pi records a 10-second video. I have put each functionality in a different Python file. I am using a single Pi camera. I have also created a file containing all the camera functionality so that I don't need to initialize the picamera twice. In the live-stream file there is a button that lets the user activate the motion detection. When the button is clicked, an error is shown: "Failed to enable connection: Out of Resources" (note that the live stream is already running). I just wanted to know if it is possible to run two programs simultaneously using a single Pi camera, and if so, how. Any help would be appreciated. Thank you in advance.
In general, AFAIK, it is not possible to access a single Raspberry Pi camera simultaneously from 2 separate processes.
I see a couple of possibilities which I'll describe in separate sections.
Use the shell and its tee command, to duplicate your video stream to 2 separate processes like this:
raspivid ... | tee >(MotionDetectionProcess) >(LiveStreamingProcess) > /dev/null
This is a bash "process substitution".
You might use raspividyuv or ffmpeg or some other program, or even one written by yourself, to read the camera and pass the data to stdout.
This approach is very quick to set up and simple - everything just reads or writes its stdin or stdout. On the downside, everything must be started at the same time, and the processes run in lock-step with each other.
Run a single video capture process, that simply writes the video frames into multiprocessing shared memory.
Write your other two programs separately, but such that they "attach" to the shared memory and get the frames from there rather than directly from the camera.
This approach is initially a bit harder to program, but things are more decoupled, so you may choose not to bother running motion detection one day, or to start it later or sooner, and your live stream will be unaffected.
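A rough sketch of the shared-memory half of this, using Python's `multiprocessing.shared_memory`; the frame size and the block name `cam_frame` are assumptions, and the capture write is a placeholder for real camera bytes:

```python
from multiprocessing import shared_memory

FRAME_SIZE = 640 * 480 * 3  # assumed RGB resolution; adjust to your camera mode

# Capture process: create the block once, then overwrite it with each new frame.
shm = shared_memory.SharedMemory(create=True, size=FRAME_SIZE, name="cam_frame")
shm.buf[:FRAME_SIZE] = bytes(FRAME_SIZE)  # placeholder; real code writes camera bytes here

# Consumer (live stream or motion detection): attach by name instead of
# opening the camera, and copy the current frame out of the shared block.
view = shared_memory.SharedMemory(name="cam_frame")
frame = bytes(view.buf[:FRAME_SIZE])
view.close()

# The creating process owns cleanup.
shm.close()
shm.unlink()
```

In a real program the capture loop and the consumers would be separate processes, with some signalling (a lock or a frame counter) so a consumer never reads a half-written frame.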
I'm trying to write a Python script to run a transcoding program from the command line, automating the process of converting a video stream from H.264 to H.265 at half the bitrate.
So far I'm reading a text file outputted by "mediainfo" with the filename and bitrate of each video file, one line at a time.
I can parse the lines into variables and create a string to run the command-line NvEncC64.exe program.
This is where it gets slightly complicated. Using subprocess and os stuff detaches the process from the shell and runs it in the background. This is fine - but the video encoder can only run three instances at a time before the graphics card runs out of resources.
I need a way to start three processes maximum and hold off launching any more instances of NvEncC64.exe until one of them finishes.
I have zero clue how to do this.
Can someone point me in the right direction? :)
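One common way to cap concurrency like this is a thread pool of three workers, where each worker simply blocks on its subprocess until the encode finishes. A sketch with placeholder commands (the NvEncC64.exe command strings you already build would go in the list):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

MAX_JOBS = 3  # the GPU runs out of resources beyond three simultaneous encodes

def encode(cmd):
    """Run one encoder command line and wait for it to finish."""
    return subprocess.run(cmd, shell=True).returncode

def run_all(commands):
    # The pool never starts a fourth job until one of the three running jobs exits,
    # so at most MAX_JOBS encoder instances exist at any moment.
    with ThreadPoolExecutor(max_workers=MAX_JOBS) as pool:
        return list(pool.map(encode, commands))
```

Threads (rather than processes) are fine here because each worker spends its time waiting on an external program, not computing in Python.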
I have a barcode scanner, and currently, it is working as a keyboard so if the scanner successfully scans by pushing the trigger the scanned code goes to the computer as input.
Now, I want to write a Python program on my Raspberry Pi 3B which connects to the scanner and starts the scanning process without the need to push the trigger on the scanner. Meaning that I make a GUI where, just by clicking a button, the user starts the scanning process, and the scanned code (if the scan was successful) gets output.
The question is: how to do it?
I have tried pyusb, but I can't send a command to the scanner to make it scan (or I don't know how).
Even worse would be if it turned out that there is no Python-to-scanner communication at all, only the primitive connected || not connected kind.
Depending on which operating system you use, you should look into the SDKs for the scanner you're using. It seems they provide some tools to control the scanner, though they are not very informative about what exactly they support. (For example: https://www.zebra.com/us/en/support-downloads/software/developer-tools/scanner-sdk-for-linux.html)
I found the reference manual for the serial interface here:
https://www.zebra.com/content/dam/zebra_new_ia/en-us/manuals/barcode-scanners/Simple%20Serial%20Interface%20Programmer's%20Guide.pdf
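As a sketch of what driving that serial interface could look like: the SSI guide describes packets made of a length byte, an opcode, a source byte, a status byte, and a 16-bit two's-complement checksum. The frame builder below follows that layout, but the opcode value, port path, and baud rate are assumptions you should verify against the manual for your scanner:

```python
import struct

def ssi_frame(opcode, data=b""):
    """Build an SSI packet: length, opcode, source (0x04 = host), status, data, checksum.
    The checksum is the 16-bit two's complement of the sum of all preceding bytes
    (per the Simple Serial Interface guide; verify against your scanner's manual)."""
    body = bytes([len(data) + 4, opcode, 0x04, 0x00]) + data
    checksum = (0x10000 - sum(body)) & 0xFFFF
    return body + struct.pack(">H", checksum)

START_DECODE = 0xE4  # opcode per the SSI guide; confirm for your model

def trigger_scan(port_name="/dev/ttyUSB0"):
    """Pull the trigger in software; assumes pyserial is installed and the
    scanner is configured for SSI host mode on the given port (both assumptions)."""
    import serial  # third-party: pip install pyserial
    with serial.Serial(port_name, 9600, timeout=1) as port:
        port.write(ssi_frame(START_DECODE))
```

With this, the GUI button's click handler would call `trigger_scan()` and then read the decoded data back over the same serial port.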
I tried to read up on how to open files properly in Python (concerning the formatting of special characters, in this case carriage return) but still can't figure it out.
I'm writing this script in nano on my rpi (over ssh from my pc using putty).
The script collects data from sensors connected via I2C and SPI and prints them to logfiles. Any events, anomalies, and errors are to be registered in an event log, with timestamps. That way, I don't need the ordinary print function and can make the program run in the background, keeping track of what it's doing by looking at the event log over FTP.
The following parts of the program give different line handling:
first occasion
second occasion
The first gives a file that gets a ^M at the beginning of each line except the first when I view it in nano, but it looks fine and dandy when I open it in Notepad on my PC.
The second looks good in nano, but has no newlines or carriage returns when I open it in Notepad and is impossible to read properly.
First: why are they different? I have looked at it over and over again. Is it because one is inside a function and the other is "raw" in the code (inside a while loop)?
Second: what does it take to get the files to look right in both nano and Notepad?
Hopefully I've given enough details :)
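For what it's worth, the mixed behaviour usually means the two code paths are writing different line endings, and one way to make the endings deterministic is the `newline` argument to Python's `open()`, which translates every `"\n"` you write into the ending you choose. A sketch (the filename and message format are made up):

```python
def log_event(path, message):
    # newline="\r\n" makes Python translate every "\n" written to this file
    # into CRLF, which both nano (DOS format) and Windows Notepad display
    # correctly; the key point is that every writer uses the same ending.
    with open(path, "a", newline="\r\n") as log:
        log.write(message + "\n")

log_event("eventlog.txt", "2021-01-01 12:00:00 sensor started")
```

If you prefer plain LF endings instead, use `newline="\n"` everywhere; the ^M artifacts appear when some writes carry `\r\n` and others only `\n` in the same file.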
In VLC I can do "Media -> Open Network Stream" and under "Please enter a network URL" I can put udp://#239.129.8.16:1234.
This opens a local UDP video stream.
VLC is not related to my question, I have put it just for clarification.
How can I connect to "udp://#239.129.8.16:1234" network stream in Python, get image from it (screenshot) and save it in file?
I think neither network programming nor Python is the focus of your question here. At the core, you need to feed a video decoder with the binary data stream, make the video decoder collect a sufficient amount of data for decoding a single frame, let the decoder save this, and abort the operation.
I am quite sure that the command line tool avconv from the libav project can do everything you need. All you need to do is dig into the rather complex documentation and find the right command line parameters for your application scenario. From a quick glance, it looks like you will need, for instance:
‘-vframes number (output)’
Set the number of video frames to record. This is an alias for -frames:v.
Also, you should definitely search the docs for this sentence:
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option
It also looks like avconv can directly read from a UDP stream.
Note that VLC, the example you gave, uses libav at its core. Also note that if you really need to execute all this "from Python", then spawn avconv from within Python using the subprocess module.
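A sketch of that last step, building the avconv command line in Python and spawning it with subprocess. The output filename is made up, and it assumes avconv accepts the plain udp:// address without the '#' from VLC's syntax:

```python
import subprocess

def grab_frame_cmd(url, out_path):
    """Command line that decodes one frame from the stream and saves it
    (-vframes 1 stops after a single decoded frame, as quoted above)."""
    return ["avconv", "-i", url, "-vframes", "1", out_path]

def grab_frame(url, out_path):
    """Spawn avconv (assumed to be on PATH) and block until the frame is written."""
    subprocess.run(grab_frame_cmd(url, out_path), check=True)
```

Calling `grab_frame("udp://239.129.8.16:1234", "screenshot.png")` would then save one frame of the stream; on systems where avconv has been replaced by ffmpeg, the same flags should work with `"ffmpeg"` as the first list element.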