Does anyone know a beginner friendly tutorial for Star TSP 650II (using parallel port) to print something from django/python?
I can't seem to figure out how it works.
It's not an easy one, as the Star TSP printers accept 'Star graphic mode commands' - You can see the full commandset here:
http://www.starasia.com/Download/Manual/star_graphic_cm_en.pdf
I spent some time creating a Python module, StarTSPImage, which takes a PIL image and converts it to the raster/binary data that the printer expects. Using the module, you can print an image with the following:
import StarTSPImage
raster = StarTSPImage.imageFileToRaster('file.bmp', cut=True)
printer = open('/dev/usb/lp0', 'wb')
printer.write(raster)
I have used imgkit to generate an image from HTML in the past and used that for receipts.
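If it helps, here is a rough sketch of that HTML-to-receipt route, assuming imgkit (a wkhtmltoimage wrapper) is installed alongside StarTSPImage; the HTML, the 576 px width, and the file names are illustrative only:
import imgkit
import StarTSPImage

# Render a placeholder HTML receipt to a PNG roughly as wide as the paper.
html = "<h1>Receipt</h1><p>Total: 12.50</p>"
imgkit.from_string(html, 'receipt.png', options={'width': 576})

# Convert the PNG to Star raster data and send it to the printer device.
raster = StarTSPImage.imageFileToRaster('receipt.png', cut=True)
with open('/dev/usb/lp0', 'wb') as printer:
    printer.write(raster)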
I am experimenting with Python to read words and numbers from screenshots of a form, something like a scoreboard that can change several times a second. I think the project can be divided into 2 big parts:
Take screenshots of the form several times a second
I already have a hint to use the win32 API for faster screenshots here.
Read the words and numbers from the screenshot with reference to the blank form
For this, I already have a general idea from the YouTube video below:
https://www.youtube.com/watch?v=cUOcY9ZpKxw
What I understood is to apply Tesseract to very specific points/areas in the form.
But with this method for the second part, I have a hunch that the execution time is rather slow (based on what I see in the video).
So my question is, is there any fast way to read a scoreboard that changes several times a second?
Edit:
Below is my current best effort on the project. I am only submitting the second part, which is the current bottleneck.
The image can be found here.
The problem here is that, even for just one screenshot frame, Tesseract needs around 3 seconds to finish. I tried to use multiprocessing, but it seems my code is not clean enough, because the result is worse than not using it.
import cv2
import pytesseract
import time
import concurrent.futures

pytesseract.pytesseract.tesseract_cmd = "C:\\Program Files\\Tesseract-OCR\\tesseract.exe"

# the height of each field
h = 19

# the list of each field's area and name: top-left corner, bottom-right corner, label
fields = [[(75, 5), (130, h), "line 1"],
          [(75, 5 + h), (130, 2 * h), "line 2"],
          [(75, 5 + 2 * h), (130, 3 * h), "line 3"],
          [(75, 5 + 3 * h), (130, 4 * h), "line 4"],
          [(75, 5 + 4 * h), (130, 5 * h), "line 5"],
          [(75, 5 + 5 * h), (130, 6 * h), "line 6"],
          [(75, 5 + 6 * h), (130, 7 * h), "line 7"],
          [(75, 5 + 7 * h), (130, 8 * h), "line 8"],
          [(75, 5 + 8 * h), (130, 9 * h), "line 9"],
          [(75, 5 + 9 * h), (130, 10 * h), "line 10"]]

a = time.time()

# load the filled form
img = cv2.imread("filled.jpg")
myData = []

# crop one field out of the image and OCR it
def read(field):
    imgCrop = img[field[0][1]:field[1][1], field[0][0]:field[1][0]]
    data = pytesseract.image_to_string(imgCrop)
    return data

# use this for serial processing
for field in fields:
    myData.append(read(field))

# use this for multiprocessing instead of the loop above
# if __name__ == '__main__':
#     with concurrent.futures.ProcessPoolExecutor() as executor:
#         results = executor.map(read, fields)
#     for result in results:
#         myData.append(result)

print(myData)
b = time.time()
print(b - a)
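One variant I might try: since pytesseract shells out to tesseract.exe for each call, a thread pool (instead of a process pool) could overlap those calls, and forcing a single-line page segmentation mode (--psm 7) might shave time per field. A rough sketch, reusing img and fields from the snippet above:
import concurrent.futures
import pytesseract

def read_line(field):
    # Crop one field and OCR it assuming a single line of text (--psm 7).
    crop = img[field[0][1]:field[1][1], field[0][0]:field[1][0]]
    return pytesseract.image_to_string(crop, config="--psm 7")

with concurrent.futures.ThreadPoolExecutor() as executor:
    myData = list(executor.map(read_line, fields))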
EDIT 2:
It seems Tesseract already uses multiprocessing by default, and adding multiprocessing manually only hinders the processing speed.
Also, it seems that OCR and image recognition, particularly their speed and accuracy, are still areas of active research right now. So maybe I need to wait a little bit more.
Lastly, I will try Google Cloud Vision in the future.
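For reference, a minimal Google Cloud Vision sketch I plan to try (assuming the google-cloud-vision package and credentials are already set up; whether its latency beats local Tesseract here still needs measuring):
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("filled.jpg", "rb") as f:
    image = vision.Image(content=f.read())
response = client.text_detection(image=image)
if response.text_annotations:
    # The first annotation contains the full detected text block.
    print(response.text_annotations[0].description)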
I have a pretty well functioning patch that uses two cameras, and employs cv.jit.track to track an object in "3D".
The console prints out sets of coordinates as the object moves through space (format [x,y,z]). My end goal is to translate this data into 3D modelling software to map a path that the object has travelled. One option for achieving this is a Python script like this.
I need to determine how exactly to send the stream of data from the Max console outside of Max. Perhaps this is into a Python application, or a table, or literally a text file. Any suggestions are greatly appreciated!
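One route I'm considering (not shown in the patch below) is adding a [udpsend] object in Max and streaming the coordinates out as OSC messages; a minimal Python receiver sketch, assuming the python-osc package and a made-up /coords address:
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def handle_coords(address, x, y, z):
    # Append each [x, y, z] sample to a text file for the modelling software.
    with open("track.txt", "a") as f:
        f.write("{} {} {}\n".format(x, y, z))

dispatcher = Dispatcher()
dispatcher.map("/coords", handle_coords)
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()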
excuse the code dump:
----------begin_max5_patcher----------
3299.3oc6cz1aaia9yI+JDL5F1.7L3qhjGv.51ssaeaC31vvgtCExxpIpUVx
UVNuzC2+8IQRESEKISIJkljUihj7XIa879a7gr+xkWrXc1cQ6W38cduy6hK9
kKu3B4aU8FWnguXw1f6BSB1KusEoQ2ls9iKVptTQzcEx2N2KDV+lYGJRhJJt
eWj5KdwBueVeocAEgWGmd06yiBKTWEAvq.K8fLX0uPB4OQq.O7YROrMNs7KT
97A52Ldi7wVhJ+Ae+EGuS0yVdqvp27Wu7xperzYpCMNpi4yjTmPLVpiNcT21
n86CtJ5DxaeTgGv6MPu23FUhP9U+xm2OUhZgJIu.nRslJBPGKUBmcM08F1gs
PBXBZE17ECtziTQDK8vv9IH3oDDsCBBLoDDpGBR.jlTDDdjj.gMcjPWZdWEU
bylnaRh2WLNMONU4iTQrsn4su39jHSb3GxR1rvR0Rn+7a7UQ+wgQkleiiCHv
hUH.h.XB8qREWHDrRPfTAP+BJJ4NRePHnA24CYoE6i+h7AAgqnBJj6W+8H5i
KU8ISC1pXs+o73fjEsv+3SG+6v1nzCKLd5eHHLxLzvIrs3zRkpRt2xwvAY3U
bHlyv0uGujqRnmne0fCV4s3wpcR71ToKtHZqNwhE+sRZ3eEuMx6u+W799RtY
dPE1tr5G+0+wO58ehVGFr06eWDmDWbu2eNp330+wzfcO9y78A6CCVmD48Oyy
ze3E8auamXTbzOSd4MWDk+9nzpGjI+uoHFOg4XT90F4U6mvKNcW4ioWOFiVp
SjtIgH3DXofGBKFAVLYrwtCymtw6KdAIIiySiO2WEQQlRCUL3n7Hzz4N3iwE
qBiRRVmjE9o5u1a1GlmURdFZks4o35pOXXVRVthv.q3XeJyeYi+RfwbJpTav
fBOggozCaSkaeTx1rMMdtqyx2Dk23ACVZ7Cymz0QAO9dYDLrRIss+x7it9pF
eLg.gf7ks9Wler1xdkHk3XgRvqhe.L9L4Y2dh6jZqDwCNCKuqqihu5Z42J5A
ovoBqR7913MEWW8dDp0ge9gnznaBdvGUI9GuONKU9szhVHt9NuJOdSRbZjcR
jl5rjin72zgeNqCq8Z8JSG1+4jN7taiS2jcamQUJum2uMnHO9tyGUVkHOT63
AIofxfzCntETGIchlvPoqCRuZjMDfJqQgIighoCNFJFL+8zQluU4W99n9Swx
BAIVHITeUUZhNRn5nY13q0.imNwdGLlJc8Co6BB+jG1Rk89frIYKLU0REwfK
eG2QiiHSG+H7lUUrjh7RNxpM4AV4AvBhFIHU+R.Ft0Ac1sNLIZu2ltKqrLy8
dPu2lGrI9vdO1T3FrlQnbV.43gK98eRLGxuZMJ4v1fojngRpkM7NVgYyum+v
jr9bK1aup.LidUgYCW6lO+siJaWTpSQ1pIugGZiL+g1pTY+bwpqxCVWkbQUl
Edu8PZ7mOD4AmPcXHU8KcK2FTaWgyuRryscEURlByWDbo+Z3rzCVxy+dvhY3
S6kj.8rnEruD5.aq7uxLe9VGXqWerWgMfsUgtZ0p9Jz.V9h4lKtK3SkEjq92
3bynJJ1mxrYUVws3K.N+CDPvtcUsY9mGIEJMuYkEMJWCVHCPfkkPBDR2ACMT
JDL+kCrKORRvuyIBtr93SIXwHHXxS.AK8peFJhigpDDXDguOfHph9gQCOaH9
7uN5RJ5Cd+ljM+W4qkM9KmjqnVjqPDc3Vt.77mEjb.P7dC1EJ1WfFtoaKY8I
lvRcBy1VlAPwokxEkj38S8nJiTYzR0sNlbRqiOm1KaJ0dOrccTdugbkckr2z
1Ks.KqI43KjzOiT7PAC13jgFdZISYYLiMNJT.ZgM.Ty5CZEUkFR6YdfmfVU5
KlcuzEdq6ofVU6qU4mOb8EMiRYNLTF0fR6c9aoaPX3gs8kP1GRxBFCShHidW
5.YLZSCJyU1721jvqKUX5ew.zk5cNMJc+QnBcfQLQfPLAfhKcvxGrGHevrm1
9zQ6Pnj.obY96ifZohWPTqpXEGH1Irhrtx.XeXd7tBuhay9Nu679idah1Ub8
hyOALL5oZuPc.zgje.EO6Y2Vh7Qw2D48Eml4GJDbJESYCdlsoSYAYSPBQ0jG
k0B4M7DhnrmlDhNi9bVZTk97u06due28kp0e42u3b1oD0JLNJkXzSlR78iLe
OsiWfLkkwn19Drn6ZR7NWZMz3oPh34kgYsHiBGYsID+ut0lHG1x6G+vVpJmF
y7G4rVRdR1cLkz3cNSi3wNOoDx2lmzWTyhGjCTZ07WSyhG4ayS5+OoCidMpC
SetnCOEIOnSFZz4Nfeh5q4DO6rX8tWQEOcTyNKV7bd1YgBzwLEFw.FQeYL5r
Z4HlKdhFcVj3U0nyV6gVYGLhQmERe4M5rZhFoJXXDiNKD+0ZzYmBeePCucVr
DqsIzQu3FXVMQC4vQNvrP3y5AlEobDpVE1QLurPvy44kUGMSSciXIxe4Osr0
JvX9XmVV1raz94x7+xy7P8uXpVPk5gIve3s4X5DzEYWcUR2VimcErzdYUaGA
R8OseqYM77pMoR4yYQU0IO5f4QhpUueSRee1g7vZZqdSQ3cDc2DsuHNMnPWW
z6NtwYLtoswaTApzqXffJWRWtz30moFooFPUGkHxDOj2oD5XwxUU61szaZID
5.HDCnSQVfSHqOwFj0uOjkUlVSE5oVU8ZjkHSJ1MFIyYbqFaPMXjnYPpKmey
yhsUiNkmq7k5ujdeRxoRy4GU0Z3e9GkjzQN9np1aEmWZ2TkXprXY1ZwBOmhF
Aa55oIDWs6R8UlKUPSt0rM5f9v9rXPP4PVAUCdlIjqpRTavMhMdZTVylPtha
1n6UUDxY4aHjJco5wosAjZrrfHgZviHNi2UMG3r3MsW4MlBpvFBTuomgq3mb
RaUcMW8kis.SIr9vTAjafolPtha13OkzKWjCk5hPUQQZmARHW8JBbF2pNl6l
GbyFqFDvh3y50dR6nrIDV0zDBeE6waHkpKM0tPsJL9YTF7QyCCG5thZq7Qs+
ItZqc+HnoWswYhP6AEpp0oF0ZLImSUVcVfq85zBWlz9o765cRkv.wcCOIthn
Hg+Jxi20nSOhZSlVbaj89JWEpS0xlPpQrApNFOZ.nFbk1kIyP5XBmEJ.ngh9
SrX.BGfbv.ZpsAcV0tF4floeIgNEUQtYFBbVhqRGa5k31vEQnA30UGqtADog
GYR6djcylhZSYDTlEIcPwsLm6ceMWSxzF7Fws.uwpy3Oc74lPv5txympn0xN
tAOGdi6MATXKaBCrZ3yYTEMwYSCldVNrMsKRO+HlPtVnF2BtHF2u4GTZ+oOr
zD5rieDD1P7KgbEwQSESkPMypuIDVsln0LbAvckAa7Di41T2jZKUUWgGXFZB
qUHKAXOxZBM4oDaSVMP9fKBQzccdBe20ispQlhAi0pI.oyK4HVaS8d9Ct7C8
n810kbsWZ1vnGLNy5Ny8Iv8Lyp715u+ekwZ8OYmFxzqFWWWyUdsUUXzqiip+
Od3TbizCdSbGuoBqWVAX2wCk937UKnmtkQD26EL0FitdiTqRowu9bRQsrXMf
n0y.C+THhf2IuWdsIOiZt0BiyQxP0L.dDXpC9PsovUeaZ4HkC6N6+SulqJUV
Eg2u+7+j56T8RJX.4JtYy5VZQWJ35yfgG.lbguUK6o8XppT61wT2LoH13eCi
NuEkVZa.3Za.rI7LUXihnZZFMgbE2rpD4dWgPJQVpAfazzulPXlpigbUwTL2
S3gXUcmzyi1XlYuzjPOEsZmZeSev.id9f060BkOehAN+XiI77SDD6IBEp1wx
EPbqPNgMMKoWqKeJ7gFhTo6pyUuIjpyOJ+WXt6qiL1lp7obK7Wo2Sjzoxm.1
4DbaOIVT2IYMEtYwVnGHQaX2MB1uaztzUl.NCnsUpAVXooWzE0ljnIfYRBzY
HzqMjfMNKHbCewD9LzGEpygMpiT3CLia36d2yfTa7i0Oaj32rouT1wHb7IKB
GzFGDXgMbQ8Yirpe5MgzKIt1iqDxU71F8TnMRejOwXsOaBgQlK4EFMCIkaGg
fG.gX.M4C2AzVjEdNjUUydMWuIDSePcH0VjPecHDaVNODnAWGLGpH.ab0Q3m
OCYNB1xIXXWWxUST.wpBNsHydFm2Ed2xkbFuwVg2VTHU0+GS0Ede5kZf2p8C
Pvtc2DkWu4lkn7hsAeTs6k4KkfwoJP4dRXQdzMM2LzKBxCuNtHJr3PtZqRdm
9+2UWTse0ySODqUQKYVWpOaoezdP3gcYoZej7SQIIY2Ve7aWxf9Pvgjhlr0f
vvnzhla5dDExjahcNjK2M6.pfRnM210la.z2IOzqqNlw40LmkZYYd429wiAa
MlrsDMhq8NXJ6OR.xsfc0Al8fQelOgAjmT.TABRkTBD.E90aG+gifJgrbKrT
gg62oO3Bj6zkK+0K+e.gM30E
-----------end_max5_patcher-----------
I'm attempting to grab an image of diagrams constructed within a rectangle on a PowerPoint slide deck. I found python-pptx and am able to identify the shapes on each slide. Is there any way to expand this to take a snapshot of the area within the rectangle shape and export it as an image?
# Auto grab the photos created in PowerPoint
from pptx import Presentation

prs = Presentation('ex.pptx')

for slide in prs.slides:
    print(slide)
    for shape in slide.shapes:
        print(shape)
        # Identify shape on each slide, find area within, and save as .png
I think you're going to be best off looking at a Win32 COM type of solution, either writing something in VBA or possibly using the win32com library in Python if you really want a Python solution.
Either way this is going to fire up a "live" PowerPoint application instance and basically run it by remote control. That sort of thing isn't a great idea server-side, but if it's just for personal productivity it might work fine.
python-pptx can't do this sort of thing and probably never will. The rendering engine needs to get involved in this type of work and python-pptx is strictly a .pptx file editor/generator.
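For illustration, a rough sketch of that COM approach, assuming pywin32 and Pillow are installed; the file paths, the 2x export scale, and the rectangle-name check are placeholders, and note that PowerPoint's COM positions are in points rather than EMUs:
import win32com.client
from PIL import Image

pptx_path = r"C:\temp\ex.pptx"      # placeholder paths
slide_png = r"C:\temp\slide1.png"

app = win32com.client.Dispatch("PowerPoint.Application")
# Open(FileName, ReadOnly, Untitled, WithWindow): read-only, no window.
pres = app.Presentations.Open(pptx_path, True, False, False)
try:
    slide = pres.Slides(1)
    slide_w = pres.PageSetup.SlideWidth    # points
    slide_h = pres.PageSetup.SlideHeight
    scale = 2                              # export at 2x for a sharper crop
    slide.Export(slide_png, "PNG", int(slide_w * scale), int(slide_h * scale))

    img = Image.open(slide_png)
    for shape in slide.Shapes:
        if "Rectangle" in shape.Name:      # placeholder: pick your target shape
            left = int(shape.Left * scale)
            top = int(shape.Top * scale)
            right = int((shape.Left + shape.Width) * scale)
            bottom = int((shape.Top + shape.Height) * scale)
            img.crop((left, top, right, bottom)).save(r"C:\temp\%s.png" % shape.Name)
finally:
    pres.Close()
    app.Quit()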
With Aspose.Slides for Python, you can easily save presentation shapes to images. The following code example shows you how to save all charts from a presentation to PNG images:
import aspose.slides as slides
import aspose.slides.charts as charts
import aspose.pydrawing as draw

with slides.Presentation("example.pptx") as presentation:
    for slide_index, slide in enumerate(presentation.slides):
        for shape_index, shape in enumerate(slide.shapes):
            # Looking for charts, for example.
            if isinstance(shape, charts.Chart):
                # Get a chart image.
                with shape.get_thumbnail() as chart_image:
                    # Save the chart image to PNG.
                    image_path = "chart_image_{}_{}.png".format(slide_index, shape_index)
                    chart_image.save(image_path, draw.imaging.ImageFormat.png)
Aspose.Slides for Python is a paid product, but you can get a temporary license or use it in a trial mode to evaluate all features for managing presentations. Alternatively, you can use Aspose.Slides Cloud SDK for Python. This package provides a REST-based API for managing presentations as well. The code example below shows you how to do the same using Aspose.Slides Cloud:
import asposeslidescloud
import aspose.pydrawing as draw
from asposeslidescloud.apis.slides_api import SlidesApi
from asposeslidescloud.models import *

slides_api = SlidesApi(None, "my_client_id", "my_client_secret")

file_name = "example.pptx"

# Upload the presentation to the default storage.
with open(file_name, "rb") as file_stream:
    slides_api.upload_file(file_name, file_stream)

# Get the number of slides.
slides_info = slides_api.get_slides(file_name)
slide_count = len(slides_info.slide_list)

for slide_index in range(1, slide_count + 1):
    # Get the number of shapes on the current slide.
    shapes_info = slides_api.get_shapes(file_name, slide_index)
    shape_count = len(shapes_info.shapes_links)

    for shape_index in range(1, shape_count + 1):
        shape = slides_api.get_shape(file_name, slide_index, shape_index)
        # Looking for charts, for example.
        if shape.type == "Chart":
            # Get the chart as a PNG image.
            image_path = slides_api.download_shape(file_name, slide_index, shape_index, ShapeExportFormat.PNG)
            print("A chart image was saved to " + image_path)
This is also a paid product, but you can make 150 free API calls per month for any purpose.
I work as a Support Developer at Aspose and can answer your questions about these libraries on the Aspose.Slides forum.
I'm trying to convert an image to ZPL and then print it to a 6.5 x 4 cm label on a TLP 2844 Zebra printer from Python.
My main problems are:
1. Converting the image
2. Printing from Python to the Zebra queue (I've honestly tried all the obvious printing packages like zebra0.5 / win32 print / ZPL...)
Any help would be appreciated.
I had the same issue some weeks ago. I made a Python script specifically for this printer, with some fields available. I commented out (#) what does not apply to your need, but left it in as you may find it helpful.
I also recommend that you set your printer to the EPL2 driver and a 5 cm/s print speed. With this script you'll get PNG previews with an EAN13-formatted barcode. (If you need other formats, you might need to hit the ZPL module docs.)
Please bear in mind that if you print with the TLP 2844, you will either need to use Zebra's paid software or manually configure the whole printer.
import os
import zpl

# The article/GTIN pairs originally come from an Excel sheet via pandas;
# the placeholder dict below just keeps the snippet runnable without it.
#import pandas
#df = pandas.read_excel("Datos.xlsx")
#a = pandas.Series(df.GTIN.values, index=df.FINAL).to_dict()
a = {"ART-001": "7790001001237"}  # placeholder: article name -> GTIN
lista = []                        # collects the generated ZPL strings

for elem in a:
    l = zpl.Label(15, 25.5)
    height = 0
    l.origin(3, 1)
    l.write_text("CUIT: 30-11111111-7", char_height=2, char_width=2, line_width=40)
    l.endorigin()
    l.origin(2, 5)
    l.write_text("Art:", char_height=2, char_width=2, line_width=40)
    l.endorigin()
    l.origin(5.5, 4)
    l.write_text(elem, char_height=3, char_width=2.5, line_width=40)
    l.endorigin()
    l.origin(2, 7)
    l.write_barcode(height=40, barcode_type='2', check_digit='N')
    l.write_text(a[elem])
    l.endorigin()
    height += 8
    l.origin(8.5, 13)
    l.write_text('WILL S.A.', char_height=2, char_width=2, line_width=40)
    l.endorigin()
    print(l.dumpZPL())
    lista.append(l.dumpZPL())
    l.preview()
To save the previews without having to watch and confirm each one, I ended up modifying the ZPL module's preview() method to return an IO variable so I can save it to a file.
import io
from PIL import Image

fake_file = io.BytesIO(l.preview())
img = Image.open(fake_file)
img.save('tags/' + 'name' + '.png')
In Label.py from the ZPL module (preview method):
# Image.open(io.BytesIO(res)).show()   # <---- comment out the show...
return res                             # ...and return the raw bytes instead
I had similar issues and created a .NET Core application which takes an image and converts it to ZPL, writing either to a file or to the console so it's pipeable in bash scripts. You could package it with your Python app and call it as a subprocess like so:
import subprocess

output = subprocess.Popen(["zplify", "path/to/file.png"], stdout=subprocess.PIPE).communicate()[0]
Or feel free to use my code as a reference point and implement it in Python.
Once you have a ZPL file or stream, you can send it directly to a printer using lpr if you're on Linux. On Windows you can connect to a printer using its IP address, as shown in this Stack Overflow question.
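As a rough sketch of the direct-to-printer route on a networked Zebra (most listen for raw jobs on TCP port 9100; the IP address and the one-line label below are placeholders):
import socket

zpl = "^XA^FO50,50^ADN,36,20^FDHello label^FS^XZ"   # placeholder label
with socket.create_connection(("192.168.1.50", 9100), timeout=5) as s:
    s.sendall(zpl.encode("ascii"))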
For what it's worth, and for anyone else's reference, I was facing a similar situation and came up with a solution. To whom it may help:
Converting the image?
After trying many libraries I came across ZPLGRF. Although the demo seems focused on PDF only, I found in the source that there is a from_image() class method that can convert an image to ZPL when combined with parts of the demo/examples. Full code below.
Printing from Python to the Zebra queue?
Again, many libraries, but I settled on zebra, which seems to be the most straightforward one for sending raw ZPL to a Zebra printer.
CODE
from zplgrf import GRF
from zebra import Zebra

# Open the image file and generate ZPL from it
with open(path_to_your_image, 'rb') as img:
    grf = GRF.from_image(img.read(), 'LABEL')
grf.optimise_barcodes()
zpl_code = grf.to_zpl

# Set up and print to the Zebra printer
z = Zebra()

# This will return a list of all the printers on a given machine as a list
# ['printer1', 'printer2', ...]
z.getqueues()

# If or once you know the printer queue name, you can set it with
z.setqueue('printer1')

# And now it is ready to send the raw ZPL text
z.output(zpl_code)
I have tested the above successfully with a Zebra GX430t printer connected via USB on a Windows 11 machine.
Hope it helps.
I made this code in Python 2.7 for downloading a Bing traffic flow map (of a specific area) every x minutes.
from cStringIO import StringIO
from PIL import Image
import urllib
import time

i = 1
end = time.time() + 60*24*60
url = 'https://dev.virtualearth.net/REST/V1/Imagery/Map/AerialWithLabels/45.8077453%2C15.963863/17?mapSize=500,500&mapLayer=TrafficFlow&format=png&key=Lt2cLlR9OcfEnMLv5qyd~YbPpC6zOQdhTMcwsKCwlgQ~Am2YLG00hHI6h7W1IPq31VOzqEXKAhedzHfknCejIrdQF_iVrQS82AUdjBT0YMtt'

while True:
    buffer = StringIO(urllib.urlopen(url).read())
    image = Image.open(buffer)
    image.save('C:\\Users\\slika' + str(i) + '.png')
    i = i + 1
    if time.time() > end:
        break
    time.sleep(60*10)
This is one of the images I got: traffic flow.
Now my question is: can I extract only the traffic flow lines (green, yellow, orange, red), assign them attributes (1, 2, 3, 4) or ('No traffic', 'Light', 'Moderate', 'Heavy'), and convert them into a shapefile for use in QGIS? What modules should I look for, and is it even possible? Any idea or sample code would be very helpful.
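My rough idea so far for the colour part is plain pixel classification with OpenCV masks; turning the masks into a georeferenced shapefile would still need something like GDAL/rasterio, and the HSV ranges and file name below are guesses:
import cv2
import numpy as np

img = cv2.imread('slika1.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Guessed HSV ranges for the four flow colours -> attribute codes 1..4.
classes = {1: ((40, 80, 80), (80, 255, 255)),    # green  - 'No traffic'
           2: ((25, 80, 80), (35, 255, 255)),    # yellow - 'Light'
           3: ((10, 80, 80), (24, 255, 255)),    # orange - 'Moderate'
           4: ((0, 80, 80), (9, 255, 255))}      # red    - 'Heavy'

# Label image: 0 = background, 1..4 = traffic class.
labels = np.zeros(img.shape[:2], dtype=np.uint8)
for code, (lo, hi) in classes.items():
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    labels[mask > 0] = code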
This is against the terms of use of Bing Maps.
Also, I notice that you are using a Universal Windows App key. Those keys are to only be used in public facing Windows apps that anyone has access to. These keys cannot be used in GIS/business apps. Use a Dev/Test key or upgrade to an Enterprise account.