I'm trying to convert an image to ZPL and then print it to a 6.5 × 4 cm label on a Zebra TLP 2844 printer from Python.
My main problems are:
1. Converting the image
2. Printing from Python to the Zebra queue (I've honestly tried all the obvious printing packages like zebra0.5 / win32 print / ZPL...)
Any help would be appreciated.
I had the same issue some weeks ago. I made a Python script specifically for this printer, with some fields available. I commented out (#) what doesn't involve your need, but left it in as you may find it helpful.
I also recommend that you set your printer to the EPL2 driver and a print speed of 5 cm/s. With this script you'll get PNG previews with an EAN13-formatted barcode. (If you need other formats, you might need to hit the ZPL module docs.)
Please bear in mind that if you print with the TLP 2844, you will either need to use Zebra's paid software or manually configure the whole printer.
import zpl

# Example data; originally this dict was built from an Excel file with pandas:
# import pandas
# df = pandas.read_excel("Datos.xlsx")
# a = pandas.Series(df.GTIN.values, index=df.FINAL).to_dict()
a = {'ART-0001': '7791234567898'}  # placeholder {article: GTIN} mapping

lista = []  # collects the generated ZPL for each label
for elem in a:
    l = zpl.Label(15, 25.5)
    l.origin(3, 1)
    l.write_text("CUIT: 30-11111111-7", char_height=2, char_width=2, line_width=40)
    l.endorigin()
    l.origin(2, 5)
    l.write_text("Art:", char_height=2, char_width=2, line_width=40)
    l.endorigin()
    l.origin(5.5, 4)
    l.write_text(elem, char_height=3, char_width=2.5, line_width=40)
    l.endorigin()
    l.origin(2, 7)
    l.write_barcode(height=40, barcode_type='2', check_digit='N')
    l.write_text(a[elem])
    l.endorigin()
    l.origin(8.5, 13)
    l.write_text('WILL S.A.', char_height=2, char_width=2, line_width=40)
    l.endorigin()
    print(l.dumpZPL())
    lista.append(l.dumpZPL())
    l.preview()
To save the previews without having to watch and confirm each one, I ended up modifying the ZPL module's preview() method to return the raw PNG bytes so I can save them to a file:
import io
from PIL import Image

fake_file = io.BytesIO(l.preview())
img = Image.open(fake_file)
img.save('tags/' + 'name' + '.png')
In Label.py of the ZPL module (the preview method):
# Image.open(io.BytesIO(res)).show()  # <---- comment out the show, and return the bytes instead
return res
I had similar issues and created a .NET Core application which takes an image and converts it to ZPL, either to a file or to the console so it's pipeable in bash scripts. You could package it with your Python app and call it as a subprocess like so:
output = subprocess.Popen(["zplify", "path/to/file.png"], stdout=subprocess.PIPE).communicate()[0]
Or feel free to use my code as a reference point and implement it in Python.
Once you have a ZPL file or stream, you can send it directly to a printer using lpr if you're on Linux. If on Windows you can connect to the printer using its IP address, as shown in this Stack Overflow question.
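For reference, here is a minimal sketch of sending raw ZPL straight over the network; it assumes the printer listens on TCP port 9100 (the usual raw-printing port on Zebra printers) and 192.168.1.50 is a placeholder address:
import socket

def send_zpl(zpl_data, printer_ip, port=9100):
    # Open a raw TCP connection to the printer and push the ZPL bytes
    with socket.create_connection((printer_ip, port), timeout=10) as s:
        s.sendall(zpl_data)

send_zpl(b'^XA^FO50,50^ADN,36,20^FDHello^FS^XZ', '192.168.1.50')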
For what it's worth, and for anyone else's reference, I was facing a similar situation and came up with a solution. To whom it may help:
Converting the image?
After trying many libraries I came across ZPLGRF. Although the demo seems focused on PDFs only, I found in the source that there is a from_image() classmethod that can convert an image to ZPL, combining it with parts of the demo/examples. Full code description below.
Printing from python to the zebra queue?
Many libraries again, but I settled on ZEBRA, which seems to be the most straightforward one for sending raw ZPL to a Zebra printer.
CODE
from zplgrf import GRF
from zebra import Zebra

# Open the image file and generate ZPL from it
with open(path_to_your_image, 'rb') as img:  # path_to_your_image: path to your source image
    grf = GRF.from_image(img.read(), 'LABEL')
grf.optimise_barcodes()
zpl_code = grf.to_zpl()

# Set up and print to the Zebra printer
z = Zebra()
# This will return all the printers on the machine as a list:
# ['printer1', 'printer2', ...]
print(z.getqueues())
# If or once you know the printer queue name, you can set it with
z.setqueue('printer1')
# And now it is ready to send the raw ZPL text
z.output(zpl_code)
I have tested the above successfully with a Zebra GX430t printer connected via USB on a Windows 11 machine.
Hope it helps.
I would like to relink a Photoshop Smart Object to a new file using Python.
Here's a screenshot of the button that's used in Photoshop to perform this action - "Relink to File":
I've found some solutions in other programming languages but couldn't make them work in Python; here's one example: Photoshop Scripting: Relink Smart Object
Editing Contents of a Smart Object would also be a good option, but I can't seem to figure that one out either.
Here's a screenshot of the button to Edit Contents of a Smart Object:
So far I have this:
import win32com.client

psApp = win32com.client.Dispatch('Photoshop.Application')
psDoc = psApp.Application.ActiveDocument

for layer in psDoc.layers:
    if layer.kind == 17:  # layer kind 17 is a Smart Object
        print(layer.name)
        # here it should either "Relink to File" or "Edit Contents" of the Smart Object
I have figured out a workaround! I simply ran JavaScript from Python.
This is the code to Relink to File.... You could do a similar thing for Edit Contents, but I haven't tried it yet, as relinking works better for me.
Keep in mind the new_img_path must be a raw string as far as I'm aware, for example:
new_img_path = r"C:\\Users\\miha\\someEpicPic.jpg"
import photoshop.api as ps

def js_relink(new_img_path):
    jscode = r"""
    var desc = new ActionDescriptor();
    desc.putPath(stringIDToTypeID('null'), new File("{}"));
    executeAction(stringIDToTypeID('placedLayerRelinkToFile'), desc, DialogModes.NO);
    """.format(new_img_path)
    JavaScript(jscode)

def JavaScript(js_code):
    app = ps.Application()
    app.doJavaScript(js_code)
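For completeness, calling it with the example path from above would look like this (the action targets whichever Smart Object layer is currently selected, as far as I can tell):
# Relink the targeted Smart Object layer to the new file
js_relink(r"C:\\Users\\miha\\someEpicPic.jpg")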
Does anyone know a beginner-friendly tutorial for the Star TSP 650II (using the parallel port) to print something from Django/Python?
I can't seem to figure out how it works.
It's not an easy one, as the Star TSP printers accept 'Star graphic mode commands'. You can see the full command set here:
http://www.starasia.com/Download/Manual/star_graphic_cm_en.pdf
I spent some time creating a Python module, StarTSPImage, which takes a PIL image and converts it to the raster/binary data that the printer expects. Using the module, you can print an image with the following:
import StarTSPImage

raster = StarTSPImage.imageFileToRaster('file.bmp', cut=True)
printer = open('/dev/usb/lp0', 'wb')
printer.write(raster)
I have used imgkit to generate an image from HTML in the past and used that for receipts.
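As a rough sketch of that combination (assumptions on my part: the HTML file name and printer device path are placeholders, and imgkit needs the wkhtmltoimage binary installed):
import imgkit
import StarTSPImage

# Render an HTML receipt to a PNG, then convert it to Star raster data and print it
imgkit.from_file('receipt.html', 'receipt.png')
raster = StarTSPImage.imageFileToRaster('receipt.png', cut=True)
with open('/dev/usb/lp0', 'wb') as printer:
    printer.write(raster)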
For machine learning purposes (scikit-learn), I need to extract the raw text from lots of PDF files. First off, I was using xpdf's pdftotext to do this task:
exe = r'"'+os.path.join(xpdf_path,"pdftotext.exe")+'"'
cmd = exe+" "+"\""+pdf+"\""+" "+"\""+pdf+".txt"+"\""
subprocess.check_output(cmd)
with open(pdf+".txt") as f:
texto_converted = f.read()
But unfortunately, for a few of them, I was unable to get the text because they use "stream" objects in their PDF source, like this one.
The result is something like this:
59!"#$%&'()*+,-.#/#01"21"" 345667.0*(879:4$;<;4=<6>4?$#"12!/ 21#$#A$3A$>#>BCDCEFGCHIJKIJLMNIJILOCNPQRDS QPFTRPUCTCVQWBCTTQXFPYTO"21 "#/!"#(Z[12\&A+],$3^_3;9`Z &a# .2"#.b#"(#c#A(87*95d$d4?$d3e#Z"f#\"#2b?2"#`Z 2"!eb2"#H1TBRgF JhiO
jFK# 2"k#`Z !#212##"elf/e21m#*c!n2!!#/bZ!#2#`Z "eo ]$5<$#;A533> "/\ko/f\#e#e#p
I even tried using zlib + regex:
import re
import zlib

pdf = open("pdfa.pdf", "rb").read()
stream = re.compile(b'.*?FlateDecode.*?stream(.*?)endstream', re.S)

for s in re.findall(stream, pdf):
    s = s.strip(b'\r\n')
    try:
        print(zlib.decompress(s).decode('UTF-8'))
        print("")
    except:
        pass
The result was something like this:
1 0 -10 -10 10 10 d1
0.01 0 0 0.01 0 0 cm
1 0 -10 -10 10 10 d1
0.01 0 0 0.01 0 0 cm
I even tried pdftopng (xpdf), intending to run tesseract on the output afterwards, without success.
So, Is there any way to extract pure text from a PDF like that using Python or a third party app?
If you want to decompress the streams in a PDF file, I can recommend qpdf, but on this file
qpdf --decrypt --stream-data=uncompress document.pdf out.pdf
doesn't help either.
I am not sure though why your efforts with xpdf and tesseract did not work out. Using ImageMagick's convert to create PNG files in a temporary directory, plus tesseract, you can do:
import os
from pathlib import Path
from tempfile import TemporaryDirectory
import subprocess

DPI = 600

def call(*args):
    cmd = [str(x) for x in args]
    # print(cmd)  # uncomment to get some sense of progress
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode('utf-8')

def ocr(docpath, lang):
    result = []
    abs_path = Path(docpath).expanduser().resolve()
    old_dir = os.getcwd()
    out = Path('out.txt')
    with TemporaryDirectory() as tmpdir:
        os.chdir(tmpdir)
        call('convert', '-density', DPI, abs_path, 'out.png')
        index = -1
        while True:
            # names have no leading zeros on the digits, which would make
            # sorting glob() output difficult, so just count them
            index += 1
            png = Path(f'out-{index}.png')
            if not png.exists():
                break
            call('tesseract', '--dpi', DPI, png, out.stem, '-l', lang)
            result.append(out.read_text())
        os.chdir(old_dir)
    return result

pages = ocr('~/Downloads/document.pdf', 'por')
print('\n'.join(pages[1].splitlines()[21:24]))
which gives:
DA NÃO REALIZAÇÃO DE AUDIÊNCIA DE AUTOCOMPOSIÇÃO NO CASO EM CONCRETO
Com vista a obter maior celeridade processual, assim como da impossibilidade de conciliação entre
If you are on Windows, make sure your PDF file is not open in a different process (like a PDF viewer), as Windows doesn't seem to like that.
The final print is limited because the full output is quite large.
The converting and OCR-ing take a while, so you might want to uncomment the print in call() to get some sense of progress.
There are two fairly simple techniques you can use.
1) Google's "Tesseract" open source OCR (optical character recognition). You could apply this evenly to all PDFs, though converting all that data into pixels and then working magic upon them is going to be more computationally expensive. Which is more important, engineer time or CPU time? There's a pytesseract module (see the sketch after this list). Note that this tool works on image formats, so you'd have to use something like Ghostscript (another open source project) to convert all of a PDF's pages to images, then run [py]tesseract on those images.
2) pyPDF can get each page and programmatically extract any text drawing operations in the order they were drawn onto the page. This may be nothing like the logical reading order of the page... While a PDF could draw all the 'a's and then all the 'b's (and so forth), it's actually more efficient to draw everything in "font a", then everything in "font b". It's important to note that "font b" might just be the italic version of "font a". This produces a shorter/more efficient stream of drawing commands, though probably not by enough to make it a good business decision.
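To make option 1 concrete, here is a minimal sketch; my assumptions are that the pages were already rendered to PNGs (for example with Ghostscript: gs -sDEVICE=png16m -r300 -o page-%03d.png input.pdf) and that the 'por' Portuguese language pack from the other answer is installed:
import pytesseract
from PIL import Image

# OCR a single pre-rendered page image
text = pytesseract.image_to_string(Image.open('page-001.png'), lang='por')
print(text)
And for option 2, the equivalent text-extraction call in the modern PyPDF2 API (an assumption on my part; the pyPDF mentioned above is its older ancestor) would be:
from PyPDF2 import PdfReader

reader = PdfReader('document.pdf')
# Text comes back in draw order, which may not match reading order
print(reader.pages[0].extract_text())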
The kicker here is that a random pile of PDF files might require you to do some OCR. A poorly assembled PDF (one with a font subset that has no "to unicode" data) can't be properly mined for text even though it has nothing but text drawing operations. "Draw glyphs one through five from font C" doesn't mean much if you don't know that those five glyphs are "g-l-y-p-h", because that's the order they were used in.
On the other hand, if you've got home-grown PDFs or all your pdfs are from some known source (Word's pdf converter for example), you'll know what to expect in advance.
Note that the only thing mentioned above that I've actually used is Ghostscript. I remember it having a solid command line interface we used to generate images for some online PDF viewer Many Years Ago.
Using the Python module ffpyplayer, how can I get the img object to display the video frames on the screen? In the tutorial I followed it seems very simple: it reads the frames and plays the audio, but it does not display any video image on the screen. If I add print img, t, it prints the frame info to the screen, but no video image is displayed.
I have been following the tutorials at https://pypi.python.org/pypi/ffpyplayer and http://matham.github.io/ffpyplayer/player.html, and searched Google, but the only relevant results point to the same info. I am somewhat new to programming and Python, so maybe I am missing something that seems very simple, but I can't figure it out myself.
I am using: Windows 7 64-bit, Python 2.7.11 32-bit.
Any help will be appreciated, thank you very much.
from ffpyplayer.player import MediaPlayer

vid = 'test_video.flv'
player = MediaPlayer(vid)
val = ''
while val != 'eof':
    frame, val = player.get_frame()
    if val != 'eof' and frame is not None:
        img, t = frame
        print img, t  # This prints the image object
        # display img  # This does nothing!
Kivy already provides such a video player, based on ffpyplayer.
It also has the necessary threads already set up for you, to deal with buttons, file reading, audio and timing.
Check this page:
https://kivy.org/docs/api-kivy.uix.videoplayer.html
To install kivy:
https://kivy.org/docs/installation/installation.html
Then you might wish to take a look at the code in:
<< python_path >>\lib\site-packages\kivy\uix\videoplayer.py
That example could be rather complex, so you can also look at this url:
How to play videos from the web like youtube in kivy
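For instance, a minimal player along those lines (just a sketch, assuming Kivy is installed and the video file from your question exists) looks like:
from kivy.app import App
from kivy.uix.videoplayer import VideoPlayer

class Player(App):
    def build(self):
        # VideoPlayer can use ffpyplayer as its video provider
        # and handles decoding and drawing for you
        return VideoPlayer(source='test_video.flv', state='play')

Player().run()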
Finally, in case Kivy complains that you only have OpenGL 1.1 (as happened to me), you might try adding the following lines to your code:
from kivy.config import Config
Config.set('graphics', 'multisamples', '0')
That solved the problem for me.
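Alternatively, if you just want to eyeball a single decoded frame without any GUI toolkit, you can hand the img from your loop to PIL. This is only a sketch, and it assumes the player delivers frames in the default rgb24 pixel format:
from PIL import Image

# ffpyplayer's Image object exposes get_size() and to_bytearray();
# plane 0 of an rgb24 frame is the full RGB pixel buffer
w, h = img.get_size()
raw = bytes(bytearray(img.to_bytearray()[0]))
Image.frombytes('RGB', (w, h), raw).show()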
EDIT: Figured THIS part out, but see 2nd post below for another question.
(a little backstory here, skip ahead for the TLDR :) )
I'm currently trying to write a few scripts for Blender to help improve the level-creation workflow for a game that I play (Natural Selection 2). Currently, to move geometry from the level editor to Blender, I have to:
1) Save a file from the editor as an .obj
2) Import the .obj into Blender and make my changes
3) Export to the game's level format using an exporter script I wrote
4) Re-open the file in a new instance of the editor
5) Copy the level data from the new instance
6) Paste it into the main level file
This is quite a pain to do, and it clearly discourages using the tool at all except for major edits. My idea for an improved workflow:
1) Copy the data to the clipboard in the editor
2) Run the importer script in Blender to load the data
3) Run the exporter script in Blender to save the data
4) Paste back into the original file
This not only cuts out two whole steps of the tedious process, but also eliminates the need for extra files cluttering up my desktop. So far, though, I haven't found a way to read Windows clipboard data into Blender... at least not without some really elaborate installation steps (e.g. install Python 3.1, install pywin32, move x, y, z to the Blender directory, uninstall Python 3.1... etc.)
TLDR
I need help finding a way to write/read BINARY data to/from the clipboard in Blender. I'm not concerned about cross-platform capability -- the game tools are Windows only.
Ideally -- though obviously beggars can't be choosers here -- the solution would not make it too difficult to install the script for the layman. I'm (hopefully) not the only person who is going to be using this, so I'd like to keep the installation instructions as simple as possible. If there's a solution available in the Python standard library, that'd be awesome!
Things I've looked at already/am looking at now
Pyperclip -- plain text ONLY. I need to be able to read BINARY data off the clipboard.
pywin32 -- kept getting missing-DLL errors, so I'm sure I'm doing something wrong. I need to take another stab at this, but the steps I had to take were pretty involved (see the last sentence above the TLDR section :) )
Tkinter -- didn't read too far into this one, as it seems to only read plain text.
ctypes -- actually just discovered this in the process of writing this post. Looks scary as hell, but I'll give it a shot.
Okay I finally got this working. Here's the code for those interested:
from ctypes import *

kernel32 = windll.kernel32
user32 = windll.user32

user32.OpenClipboard(0)
CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")

if user32.IsClipboardFormatAvailable(CF_SPARK):
    data = user32.GetClipboardData(CF_SPARK)
    size = kernel32.GlobalSize(data)
    data_locked = kernel32.GlobalLock(data)
    text = string_at(data_locked, size)  # copy the raw clipboard bytes
    kernel32.GlobalUnlock(data)
else:
    print('No spark data in clipboard!')

user32.CloseClipboard()
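One caveat worth adding (my addition, not part of the original post): on 64-bit Python, ctypes assumes functions return a 32-bit int by default, which can silently truncate the pointer-sized handles these calls return. Declaring the return types guards against that:
from ctypes import windll, wintypes, c_size_t, c_void_p

user32 = windll.user32
kernel32 = windll.kernel32

# Pointer-sized return values; without these, ctypes truncates them to 32 bits
user32.GetClipboardData.restype = wintypes.HANDLE
kernel32.GlobalLock.restype = c_void_p
kernel32.GlobalSize.restype = c_size_t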
Welp... this is a new record for me (posting a question and almost immediately finding an answer).
For those interested, I found this: How do I read text from the (windows) clipboard from python?
It's exactly what I'm after... sort of. I used that code as a jumping-off point.
Instead of CF_TEXT = 1
I used CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")
Here's where I got that function name from: http://msdn.microsoft.com/en-us/library/windows/desktop/ms649049(v=vs.85).aspx
The 'W' is there because, for whatever reason, Blender doesn't see the plain old "RegisterClipboardFormat" function; you have to use "...FormatW" or "...FormatA". I'm not sure why that is, but as far as I can tell, ctypes looks functions up by their exact exported names, and user32.dll only exports the ANSI/wide pair RegisterClipboardFormatA/RegisterClipboardFormatW, not an undecorated name. If somebody knows better, I'd love to hear about it! :)
Anyway, I haven't gotten it actually working yet: I still need to find a way to break this "data" object up into bytes so I can actually work with it, but that shouldn't be too hard.
Scratch that, it's giving me quite a bit of difficulty.
Here's my code
from ctypes import *
from binascii import hexlify

kernel32 = windll.kernel32
user32 = windll.user32

user32.OpenClipboard(0)
CF_SPARK = user32.RegisterClipboardFormatW("application/spark editor")

if user32.IsClipboardFormatAvailable(CF_SPARK):
    data = user32.GetClipboardData(CF_SPARK)
    data_locked = kernel32.GlobalLock(data)
    print(data_locked)
    text = c_char_p(data_locked)
    print(text)
    print(hexlify(text))
    kernel32.GlobalUnlock(data_locked)
else:
    print('No spark data in clipboard!')

user32.CloseClipboard()
There aren't any errors, but the output is wrong. The line print(hexlify(text)) yields b'e0cb0c1100000000', when I should be getting something 946 bytes long, the first 4 bytes of which should be 01 00 00 00. (Here's the clipboard data, saved out from InsideClipboard as a .bin file: https://www.dropbox.com/s/bf8yhi1h5z5xvzv/testLevel.bin?dl=1 )