Python script to convert a .po file to localized JSON

I need a Python script to convert a .po file to localized JSON.

You might start here: http://docs.python.org/library/gettext.html and here: http://docs.python.org/library/json.html

http://jsgettext.berlios.de/ contains a .po to JSON converter (in Perl). For Python, you can use polib to access the .po file contents and transform them as desired.
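A minimal sketch of that polib approach (the file names messages.po and messages.json are placeholders):

import json
import polib  # third-party: pip install polib

# Load the catalog and keep only entries that actually have a translation.
po = polib.pofile("messages.po")
translations = {entry.msgid: entry.msgstr for entry in po if entry.msgstr}

with open("messages.json", "w", encoding="utf-8") as f:
    json.dump(translations, f, ensure_ascii=False, indent=2)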

Related

How to use the biblib parser with a bibtex file stored in a Python variable?

I have a bibtex file that I get from the frontend and I'm trying to parse this file with biblib (a Python library for parsing bibtex files). Because I get the file from the frontend, it's not stored in a file on my computer. The file gets passed through a variable from the frontend to Python and is then stored in the Python variable fileFromFrontend. So I can use, for example:
bibtexFile = fileFromFrontend.read()
to read the file.
Now I'm trying to do something like the following to print the parsed file in the Python terminal:
from pybtex.database.input import bibtex
parser = bibtex.Parser()
bibtexFile = parser.parse_file(fileFromFrontend)
print (bibtexFile.entries)
but then I get this error:
-->bibtexFile = parser.parse_file(fileFromFrontend)
-->with open_file(filename, encoding=self.encoding) as f:
-->AttributeError: __enter__
This is probably because the parser tries to open the file, but it doesn't need to open it, only read it. I don't know which function of the biblib library to use to parse the file from a variable, and I haven't found anything so far that solves my problem.
Hopefully somebody can help
thanks
According to the documentation ( https://docs.pybtex.org/api/parsing.html ) there are the methods parse_string and parse_bytes, which could work.
so like this
from pybtex.database.input import bibtex
parser = bibtex.Parser()
bibtexFile = parser.parse_bytes(fileFromFrontend.read())
print (bibtexFile.entries)
I don't have pybtex installed, so I couldn't try it myself, but try those methods. parse_bytes and parse_string need the bib format as a second parameter; in the examples that is bibtex, so I tried it here.
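For reference, a sketch of the same idea via pybtex's top-level helper, which takes the format name explicitly (untested here as well; assumes fileFromFrontend.read() returns text):

from pybtex.database import parse_string

# parse_string takes the source text plus the format name; for raw bytes
# use parse_bytes instead.
bibtexFile = parse_string(fileFromFrontend.read(), bib_format="bibtex")
print(bibtexFile.entries)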

Exporting all fields in pcap files into CSV in Python

I want to convert a pcap file into CSV using Python code, but the problem is that I have to specify precisely which fields to export when using the tshark tool. I want to export all fields. When I don't specify the fields, a blank file is exported. A sample command is presented below:
tshark -r /path/to/the/file/test1.pcap -T fields -e ip.src > test1.csv
I want to remove the specific fields in order to export ALL fields, and then access the fields in Python (using a library like pandas, in dictionary format like df["Source"]).
Any help appreciated!
I want to remove the specific fields in order to export ALL fields
This is not possible with the CSV (fields) format:
$ tshark -r trace.pcap -T fields
tshark: "-Tfields" was specified, but no fields were specified with "-e".
An alternative solution is to use one of the JSON formats (-T ek|json|jsonraw) or the XML format (-T pdml).
and then access the fields in Python (using a library like pandas, in dictionary format like df["Source"])
In Python you can parse the JSON using json.loads() and get a dictionary. See https://www.w3schools.com/python/python_json.asp
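A sketch of that route, assuming tshark is on the PATH and trace.pcap is the capture file; the "Source"/"Destination" column names are only illustrative:

import json
import subprocess
import pandas as pd

# Dump every decoded field for every packet as one JSON array.
out = subprocess.run(
    ["tshark", "-r", "trace.pcap", "-T", "json"],
    capture_output=True, text=True, check=True,
).stdout
packets = json.loads(out)

# In -T json output each packet's fields sit under _source.layers,
# keyed by protocol name.
rows = [
    {
        "Source": p["_source"]["layers"].get("ip", {}).get("ip.src"),
        "Destination": p["_source"]["layers"].get("ip", {}).get("ip.dst"),
    }
    for p in packets
]
df = pd.DataFrame(rows)
print(df["Source"])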

EBCDIC to ASCII Conversions

I have mainframe files in EBCDIC format and I want to convert them into ASCII format.
I have tried converting EBCDIC to ASCII using Python 2.6, but there are many issues with that: for example, compressed fields didn't get converted and the record count increased.
Is there any way to convert EBCDIC files that have compressed fields to ASCII format?
If you already have the file downloaded, you can convert it easily from EBCDIC to ASCII on a Linux or macOS machine using the command line.
To accomplish that you need to use the dd command.
Here is a quick overview of some of the parameters it uses:
dd [bs=size] [cbs=size] [conv=conversion] [count=n] [ibs=size] [if=file] [imsg=string] [iseek=n] [obs=s] [of=file] [omsg=string] [seek=n] [skip=n]
There are more parameters than those above; to see all of them, run man dd, which lists every available parameter with an explanation of each one.
In your case you should start with:
dd conv=ascii if=EBCDIC_file.txt of=ASCII_file.txt
where EBCDIC_file.txt is the filename of your input EBCDIC file and ASCII_file.txt will be the file created as output with all bytes converted from EBCDIC to ASCII.
Likewise you can do the reverse by using conv=ebcdic to convert a file from ASCII to EBCDIC.
Here's the man page for dd on the web: https://www.man7.org/linux/man-pages/man1/dd.1.html
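The same byte-for-byte translation can be sketched in Python with the built-in cp037 (EBCDIC US/Canada) codec, reusing the file names from the dd example; like dd, this only works for plain text:

with open("EBCDIC_file.txt", "rb") as src:
    data = src.read()

# A byte-for-byte translation, like dd's, will still corrupt
# packed-decimal (COMP-3) and binary fields, as the question notes.
with open("ASCII_file.txt", "w", encoding="ascii", errors="replace") as dst:
    dst.write(data.decode("cp037"))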
When you mention compressed fields in your file, do you mean the whole file comes compressed from the mainframe? It probably came TERSED (via the terse utility on the mainframe). If that is the case, there is a public version of terse that runs on DOS, Linux, macOS, AIX and others. It is available on the CBT tape site: http://www.cbttape.org/ftp/cbt/CBT892.zip
Some options:
Convert the file to a text file on the mainframe (SORT or Easytrieve will both do this)
If it is a once-off, FileAid / File Master can convert the file to text on the mainframe
If it is a once-off, the RecordEditor should be able to edit the file with a Cobol copybook. It can also generate JRecord code to read the file.
If there is only one record type in the file, CobolToCsv can use the Cobol copybook to convert the file to a CSV.
JRecord lets you read a Cobol copybook in Java
JRecord has a COBOL Copy utility that will let you do a Cobol-to-Cobol copy. If there is only one record type you can:
Copy the EBCDIC copybook to the equivalent ASCII copybook (text fields are converted, binary fields are left unchanged). This is useful when converting a Mainframe Cobol file for use in a Windows / Linux Cobol system
Copy an EBCDIC binary copybook to an ASCII text copybook
The Stingray project provides access to Cobol files in Python
CobolToCsv
For example, to convert a Cobol data file to CSV (single record type) using CobolToCsv:
java -jar ../lib/Cobol2Csv.jar -I In/DTAR020.bin -O Out/o_DTAR020_space.csv ^
-C DTAR020.cbl ^
-Q DoubleQuote -FS Fixed_Length ^
-IC CP037 -Delimiter ,
Where
In/DTAR020.bin is the Input Cobol data file
Out/o_DTAR020_space.csv is the output Csv file
DTAR020.cbl is the Cobol copybook
Fixed_Length indicates it is a fixed-length file (FB on the mainframe)
RecordEditor
To edit the file see How do you edit a Binary Mainframe file in the RecordEditor using a Cobol Copybook (pt1)
To generate JRecord code see How do you generate Java/JRecord code for a Cobol copybook

How can I convert stl files to ply files efficiently in Python?

Is there a way of converting stl (STereoLithography) files to ply (Polygon File Format) format in Python? I have to use another program that only accepts ply format.
Here is the link to the program that I want to use: http://www.cs.jhu.edu/~misha/Code/ShapeSPH/ShapeDescriptor/
Any other suggestions regarding how I could use this program?
My second question: can I run this .exe file from Python? Should it be something like:
os.popen("file.exe --in value")
os.system("file.exe --in value")
Am I right?
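One hedged sketch of both steps, using the third-party trimesh library (not mentioned in the question) for the conversion and subprocess for the call; the file and program names are placeholders:

import subprocess
import trimesh  # third-party: pip install trimesh

# Convert STL to PLY; trimesh infers both formats from the extensions.
mesh = trimesh.load("model.stl")
mesh.export("model.ply")

# Run the external tool; a list of arguments avoids shell quoting issues.
# "ShapeDescriptor.exe" and its flag are placeholders for the real program.
subprocess.run(["ShapeDescriptor.exe", "--in", "model.ply"], check=True)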

Is it possible to call Apache FOP without providing an input file but instead passing a string

I am trying to generate a PDF using FOP. To do this I am taking in a template file, initializing its values with Jinja2, and then passing it through to FOP with a system call.
Is it possible to do a subprocess call to FOP without passing it an input file, but instead a string containing the XML directly? And if so, how would I go about doing that?
I was hoping for something like this
fop -fo "XML here" -pdf output.pdf
Yes, actually it was possible.
Using Python I was able to import the XML from the file into lxml.etree:
tree = etree.parse('FOP_PARENT.fo.xml')
And then used the tree to resolve the include tags:
tree.xinclude()
Then it was a simple case of converting the XML back into unicode:
xml = etree.tounicode(tree)
This is how I got the templates to work. Hopefully this helps someone who has the same issue!
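Putting the steps together with the final FOP call, a sketch; whether fop accepts "-" (stdin) as the -fo argument is an assumption, flagged in the comment:

import subprocess
from lxml import etree

# Parse the Jinja2-rendered template, resolve the XInclude tags,
# and serialize back to a string, as described above.
tree = etree.parse("FOP_PARENT.fo.xml")
tree.xinclude()
xml = etree.tounicode(tree)

# Feed the XML to FOP on stdin; "-" as the input file is an assumption
# about the FOP CLI. If unsupported, write the string to a temporary
# file and pass its path instead.
subprocess.run(["fop", "-fo", "-", "-pdf", "output.pdf"],
               input=xml.encode("utf-8"), check=True)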
