How to read inline-styles from WxPython - python

I'm trying to put text into a RichTextCtrl and then, after the user has made edits, I want to get the edited text back out along with the styles. It's the second part I'm having trouble with. Of all the methods for getting styles out of the buffer, none of them are really user-friendly.
The best I've come up with is to walk through the text a character at a time with GetStyleForRange(range, style). There has got to be a better way to do this! Here's my code now, which walks through gathering a list of text segments and styles.
Please give me a better way to do this. I have to be missing something.
# (assumes: import wx.richtext and from typing import List, Tuple)
buffer: wx.richtext.RichTextBuffer = self.rtc.GetBuffer()
end = len(buffer.GetText())

# Variables for text/style reading loop
ch: str
curStyle: str
i: int = 0
style = wx.richtext.RichTextAttr()
text: List[str] = []
textItems: List[Tuple[str, str]] = []

# Read the style of the first character
self.rtc.GetStyleForRange(wx.richtext.RichTextRange(i, i + 1), style)
curStyle = self.describeStyle(style)

# Loop until we hit the end. Use a while loop so we can control the index increment.
while i < end + 1:
    # Read the current character and its style as `ch` and `newStyle`
    ch = buffer.GetTextForRange(wx.richtext.RichTextRange(i, i))
    self.rtc.GetStyleForRange(wx.richtext.RichTextRange(i, i + 1), style)
    newStyle = self.describeStyle(style)

    # If the style has changed, flush the collected text and start a new collection
    if text and newStyle != curStyle and ch != '\n':
        newText = "".join(text)
        textItems.append((newText, curStyle))
        text = []
        self.rtc.GetStyleForRange(wx.richtext.RichTextRange(i + 1, i + 2), style)
        curStyle = self.describeStyle(style)
    # Otherwise, collect the character and continue
    else:
        i += 1
        text.append(ch)

# Capture the last text being collected
newText = "".join(text)
textItems.append((newText, newStyle))

Here's a C++ version of the solution I mentioned in the comment above. It's a simple tree walk using a queue, so it should translate to Python easily.
const wxRichTextBuffer& buffer = m_richText1->GetBuffer();
std::deque<const wxRichTextObject*> objects;

objects.push_front(&buffer);

while ( !objects.empty() )
{
    const wxRichTextObject* curObject = objects.front();
    objects.pop_front();

    if ( !curObject->IsComposite() )
    {
        wxRichTextRange range = curObject->GetRange();
        const wxRichTextAttr& attr = curObject->GetAttributes();
        // Do something with range and attr here.
    }
    else
    {
        // This is a composite object. Add its children to the queue.
        // The children are added in reverse order to do a depth-first walk.
        const wxRichTextCompositeObject* curComposite =
            static_cast<const wxRichTextCompositeObject*>(curObject);
        size_t childCount = curComposite->GetChildCount();
        for ( int i = childCount - 1; i >= 0; --i )
        {
            objects.push_front(curComposite->GetChild(i));
        }
    }
}
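For reference, here is a rough Python translation of the same walk (a sketch, untested; it assumes wxPython exposes IsComposite(), GetChildCount(), GetChild(), GetRange() and GetAttributes() on the buffer objects the same way the C++ API does):

from collections import deque

def walk_buffer(rtc):
    # Depth-first walk over a RichTextCtrl's buffer, yielding (range, attributes)
    # for every leaf object (text runs, images, ...).
    objects = deque([rtc.GetBuffer()])
    while objects:
        cur = objects.popleft()
        if not cur.IsComposite():
            yield cur.GetRange(), cur.GetAttributes()
        else:
            # Composite object: push children front-first in reverse order so the
            # leftmost child is processed next (depth-first walk).
            for i in reversed(range(cur.GetChildCount())):
                objects.appendleft(cur.GetChild(i))

Each yielded (range, attributes) pair should correspond to one contiguous run of identically styled text, which avoids the character-by-character GetStyleForRange loop.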

Related

Python InDesign scripting: Get overflowing textbox from preflight for automatic resizing

Thanks to this great answer I was able to figure out how to run a preflight check for my documents using Python and the InDesign script API. Now I wanted to work on automatically adjusting the text size of the overflowing text boxes, but was unable to figure out how to retrieve a TextBox object from the Preflight object.
I referred to the API specification, but all the properties only seem to yield strings which do not uniquely define the TextBoxes, like in this example:
Errors Found (1):
Text Frame (R=2)
Is there any way to retrieve the violating objects from the Preflight, in order to operate on them later on? I'd be very thankful for additional input on this matter, as I am stuck!
If all you need is to find and fix the overset errors, I'd propose this solution:
Here is a simple ExtendScript that fixes the text overset error. It decreases the font size in all the overflowed text frames in the active document:
var doc = app.activeDocument;
var frames = doc.textFrames.everyItem().getElements();
var f = frames.length;
while (f--) {
    var frame = frames[f];
    if (frame.overflows) resize_font(frame);
}

function resize_font(frame) {
    app.scriptPreferences.enableRedraw = false;
    while (frame.overflows) {
        var texts = frame.parentStory.texts.everyItem().getElements();
        var t = texts.length;
        while (t--) {
            var characters = texts[t].characters.everyItem().getElements();
            var c = characters.length;
            while (c--) characters[c].pointSize = characters[c].pointSize * .99;
        }
    }
    app.scriptPreferences.enableRedraw = true;
}
You can save it in any folder and run it from Python with this script:
import win32com.client

app = win32com.client.Dispatch('InDesign.Application.CS6')
doc = app.Open(r'd:\temp\test.indd')
profile = app.PreflightProfiles.Item('Stackoverflow Profile')
print('Profile name:', profile.name)

process = app.PreflightProcesses.Add(doc, profile)
process.WaitForProcess()
errors = process.processResults
print('Errors:', errors)

if errors[:4] != 'None':
    script = r'd:\temp\fix_overset.jsx'  # <-- here is the script to fix overset
    print('Run script', script)
    app.DoScript(script, 1246973031)  # run the jsx script
    # 1246973031 --> ScriptLanguage.JAVASCRIPT
    # https://www.indesignjs.de/extendscriptAPI/indesign-latest/#ScriptLanguage.html

    process = app.PreflightProcesses.Add(doc, profile)
    process.WaitForProcess()
    errors = process.processResults
    print('Errors:', errors)  # it should print 'None'

if errors[:4] == 'None':
    doc.Save()
    doc.Close()

input('\nDone... Press <ENTER> to close the window')
Thanks to the excellent answer from Yuri I was able to solve my problem, although there are still some shortcomings.
In Python, I load my document and check whether any problems were detected during the preflight. If so, I move on to adjusting the text frames.
myDoc = app.Open(input_file_path)
profile = app.PreflightProfiles.Item(1)
process = app.PreflightProcesses.Add(myDoc, profile)
process.WaitForProcess()
results = process.processResults

if "None" not in results:
    # Fix errors
    script = open("data/script.jsx")
    app.DoScript(script.read(), 1246973031, variables.resize_array)
    process.WaitForProcess()
    results = process.processResults

    # Check if problems were resolved
    if "None" not in results:
        info_fail(card.name, "Error while running preflight")
        myDoc.Close(1852776480)
        return FLAG_PREFLIGHT_FAIL
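For completeness: variables.resize_array is not shown above. Judging from how the JSX below reads arguments[0] and arguments[1] and matches frame names, I assume the third argument to DoScript is a nested list along these lines (the frame names here are made up, and it is worth verifying that nested lists survive the COM bridge intact):

# Hypothetical shape of the arguments handed to DoScript (third parameter)
resize_groups = [
    ["title_frame", "subtitle_frame"],    # frames that must end up at the same font size
    ["body_frame_1", "body_frame_2"],
]
condense_groups = [
    ["stats_frame_left", "stats_frame_right"],
]

with open("data/script.jsx") as script:
    app.DoScript(script.read(), 1246973031, [resize_groups, condense_groups])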
I load the JavaScript file stored in script.jsx, which consists of several components. I start by extracting the arguments and loading all the pages, since I want to handle them individually. I then collect all text frames on the page in an array.
var doc = app.activeDocument;
var pages = doc.pages;
var resizeGroup = arguments[0];
var condenseGroup = arguments[1];

// Loop over all available pages separately
for (var pageIndex = 0; pageIndex < pages.length; pageIndex++) {
    var page = pages[pageIndex];
    var pageItems = page.allPageItems;
    var textFrames = [];

    // Collect all TextFrames in an array
    for (var pageItemIndex = 0; pageItemIndex < pageItems.length; pageItemIndex++) {
        var candidate = pageItems[pageItemIndex];
        if (candidate instanceof TextFrame) {
            textFrames.push(candidate);
        }
    }
What I wanted to achieve is that if one text frame in a group overflows, the text size of all the text frames in that group is adjusted as well. E.g. text frame 1 overflows at size 8 but no longer at size 6; since text frame 1 is in the same group as text frame 2, both of them will be set to size 6 (assuming the second frame does not overflow at this size).
To handle this, I pass in an array containing the groups. I then check whether the text frame is contained in one of these groups (which is rather tedious; I had to write my own methods, since as far as I can tell InDesign's ExtendScript does not support modern functions like filter()).
// Check if TextFrame overflows, if so add all TextFrames that should be the same size
for (var textFrameIndex = 0; textFrameIndex < textFrames.length; textFrameIndex++) {
    var textFrame = textFrames[textFrameIndex];

    // If the text frame overflows, adjust it and all the frames that are supposed to be of the same size
    if (textFrame.overflows) {
        var foundResizeGroup = filterArrayWithString(resizeGroup, textFrame.name);
        var foundCondenseGroup = filterArrayWithString(condenseGroup, textFrame.name);
        var process = false;
        var chosenGroup, type;

        if (foundResizeGroup.length > 0) {
            chosenGroup = foundResizeGroup;
            type = "resize";
            process = true;
        } else if (foundCondenseGroup.length > 0) {
            chosenGroup = foundCondenseGroup;
            type = "condense";
            process = true;
        }

        if (process) {
            var foundFrames = findTextFramesFromNames(textFrames, chosenGroup);
            adjustTextFrameGroup(foundFrames, type);
        }
    }
}
If this is the case, I adjust either the text size or the second axis of the text (which condenses the text for my variable font). This is done using the following functions:
function adjustTextFrameGroup(resizeGroup, type) {
    // Check if some textboxes are overflowing
    if (!someOverflowing(resizeGroup)) {
        return;
    }
    app.scriptPreferences.enableRedraw = false;
    while (someOverflowing(resizeGroup)) {
        for (var textFrameIndex = 0; textFrameIndex < resizeGroup.length; textFrameIndex++) {
            var textFrame = resizeGroup[textFrameIndex];
            if (type === "resize") decreaseFontSize(textFrame);
            else if (type === "condense") condenseFont(textFrame);
            else alert("Unknown operation");
        }
    }
    app.scriptPreferences.enableRedraw = true;
}

function someOverflowing(textFrames) {
    for (var textFrameIndex = 0; textFrameIndex < textFrames.length; textFrameIndex++) {
        var textFrame = textFrames[textFrameIndex];
        if (textFrame.overflows) {
            return true;
        }
    }
    return false;
}

function decreaseFontSize(frame) {
    var texts = frame.parentStory.texts.everyItem().getElements();
    for (var textIndex = 0; textIndex < texts.length; textIndex++) {
        var characters = texts[textIndex].characters.everyItem().getElements();
        for (var characterIndex = 0; characterIndex < characters.length; characterIndex++) {
            characters[characterIndex].pointSize = characters[characterIndex].pointSize - 0.25;
        }
    }
}

function condenseFont(frame) {
    var texts = frame.parentStory.texts.everyItem().getElements();
    for (var textIndex = 0; textIndex < texts.length; textIndex++) {
        var characters = texts[textIndex].characters.everyItem().getElements();
        for (var characterIndex = 0; characterIndex < characters.length; characterIndex++) {
            characters[characterIndex].setNthDesignAxis(1, characters[characterIndex].designAxes[1] - 5);
        }
    }
}
I know this code can be improved upon (and am open to feedback); for example, if a group consists of multiple text frames, the procedure will run for all of them even though it only needs to run once. But I was getting pretty frustrated with the old JavaScript, and the impact is negligible. The remaining functions are only helpers, which I'd like to replace with more modern versions; sadly, as already stated, I don't think those are available.
Thanks once again to Yuri, who helped me immensely!

Convert from Tensorflow -> CoreML 3.0 for slot/intent detection

I am trying to use some of the models created by this codebase (Slot-Filling-Understanding-Using-RNNs) in my Swift application.
I was able to convert lstm_nopooling, lstm_nopooling300 and lstm to CoreML.
In model.py I used this code:
def save_model(self):
    joblib.dump(self.summary, 'models/' + self.name + '.txt')
    self.model.save('models/' + self.name + '.h5')
    try:
        coreml_model = coremltools.converters.keras.convert(
            self.model,
            input_names="main_input",
            output_names=["intent_output", "slot_output"])
        coreml_model.save('models/' + self.name + '.mlmodel')
    except:
        pass
    print("Saved model to disk")
I am trying to convert the output vectors back to an intent and slots. I have this so far, but I'm not certain what to do next:
func tokenizeSentences(instr: String) -> [Int] {
    let s = instr.lowercased().split(separator: " ")
    var ret = [Int]()
    if let filepath = Bundle.main.path(forResource: "atis.dict.vocab", ofType: "csv") {
        do {
            let contents = try String(contentsOfFile: filepath)
            print(contents)
            var lines = contents.split { $0.isNewline }
            var pos = 0
            for word in s {
                if let index = lines.firstIndex(of: word) {
                    print(index.description + " " + word)
                    ret.append(index)
                }
            }
            return ret
        } catch {
            // contents could not be loaded
        }
    } else {
        // example.txt not found!
    }
    return ret
}

func predictText(instr: String) {
    let model = lstm_nopooling300()
    guard let mlMultiArray = try? MLMultiArray(shape: [20, 1, 1],
                                               dataType: MLMultiArrayDataType.int32) else {
        fatalError("Unexpected runtime error. MLMultiArray")
    }
    let tokens = tokenizeSentences(instr: instr)
    for (index, element) in tokens.enumerated() {
        mlMultiArray[index] = NSNumber(integerLiteral: element)
    }
    guard let m = try? model.prediction(input: lstm_nopooling300Input.init(main_input: mlMultiArray))
    else {
        fatalError("Unexpected runtime error. MLMultiArray")
    }

    let mm = m.intent_output
    let length = mm.count
    let doublePtr = mm.dataPointer.bindMemory(to: Double.self, capacity: length)
    let doubleBuffer = UnsafeBufferPointer(start: doublePtr, count: length)
    let output = Array(doubleBuffer)
    print("******** intents \(mm.count) ********")
    print(output)

    let mn = m.slot_output
    let length2 = mn.count
    let doublePtr2 = mm.dataPointer.bindMemory(to: Double.self, capacity: length2)
    let doubleBuffer2 = UnsafeBufferPointer(start: doublePtr2, count: length2)
    let output2 = Array(doubleBuffer2)
    print("******** slots \(mn.count) ********")
    print(output2)
}
When I run my code I get this, truncated, for intents:
******** intents 540 ********
[0.0028914143331348896, 0.0057610333897173405, 4.1651015635579824e-05,
0.15935245156288147, 5.6665314332349226e-05, 5.7797817134996876e-05, 0.0044302307069301605, 0.00012486864579841495, 0.0004683282459154725, 0.003053907072171569, 3.806956738117151e-05, 0.012112349271774292, 5.861848694621585e-05, 0.0031344725284725428,
The problem, I believe, is that the ids are in a pickle file, so in atis/atis.train.pkl perhaps.
All I did was train the models and convert those I could to CoreML and now I am trying to use it, but not certain what to do next.
I have a textfield and I enter 'current weather in london' and I hope to get something similar to (this is from running example.py)
{'intent': 'weather_intent', 'slots': [{'name': 'city', 'value': 'London'}]}
Here is the coreml input/output
Thanks to @MatthijsHollemans I was able to figure out what to do.
In data_processing.py I added these:
with open('atis/wordlist.csv', 'w') as f:
    for key in ids2words.keys():
        f.write("%s\n" % (ids2words[key]))

with open('atis/wordlist_slots.csv', 'w') as f:
    for key in ids2slots.keys():
        f.write("%s\n" % (ids2slots[key]))

with open('atis/wordlist_intents.csv', 'w') as f:
    for key in ids2intents.keys():
        f.write("%s\n" % (ids2intents[key]))
This allows me to tokenize correctly, using wordlist.csv.
Then, when I get the response back, I can see the intents (using mm.count was wrong; it should have been output.count, for example).
Look for the element with the largest value, then look that index up in wordlist_intents.csv (I turned this into an array; it should probably be a dictionary) to find the likely intent.
I still need to do the slots, but the basic idea is the same.
The key was to output the dictionary used in python to a csv file and then import that into the project.
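Here is a rough Python sketch of that decoding idea (illustration only): take the argmax over each 27-wide intent vector and look the index up in the wordlist_intents.csv written above. The 20 positions x 27 intents shape is the one the Swift code in the update below uses.

# Rough Python sketch of the decoding used in the Swift code below
with open('atis/wordlist_intents.csv') as f:
    intent_names = [line.strip() for line in f]

def decode_intents(output, n_words, width=27):
    # `output` is the flat list of floats from intent_output (20 positions x 27 intents)
    intents = []
    for i in range(n_words):
        scores = output[i * width:(i + 1) * width]
        best = max(range(width), key=lambda j: scores[j])  # argmax
        intents.append(intent_names[best])
    return intents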
UPDATE
I realized that mm.count was 540 because the model accepts up to 20 words per sentence, so it returns that many predictions. So in my case I needed to split the sentence on spaces and then loop only that many times, as I won't get more slots than I have words.
I am doing this in SwiftUI so I also had to create an observable so I could use EnvironmentObject to pass in the terms.
So, to properly loop over the double array in memory I am including the latest code that does what I expect.
func predictText(instr: String) {
    let model = lstm_nopooling300()
    guard let mlMultiArray = try? MLMultiArray(shape: [20, 1, 1],
                                               dataType: MLMultiArrayDataType.int32) else {
        fatalError("Unexpected runtime error. MLMultiArray")
    }
    let tokens = tokenizeSentences(instr: instr)
    let sent = instr.split(separator: " ")
    print(instr)
    print(tokens)
    for (index, element) in tokens.enumerated() {
        mlMultiArray[index] = NSNumber(integerLiteral: element)
    }
    guard let m = try? model.prediction(input: lstm_nopooling300Input.init(main_input: mlMultiArray))
    else {
        fatalError("Unexpected runtime error. MLMultiArray")
    }

    let mm = m.intent_output
    let length = mm.count
    let doublePtr = mm.dataPointer.bindMemory(to: Double.self, capacity: length)
    var intents = [String]()
    for i in 0...sent.count - 1 {
        let doubleBuffer = UnsafeBufferPointer(start: doublePtr + i * 27, count: 27)
        let output = Array(doubleBuffer)
        let intent = convertVectorToIntent(vector: output)
        intents.append(intent)
    }
    print(intents)

    let mn = m.slot_output
    let length2 = mn.count
    let doublePtr2 = mn.dataPointer.bindMemory(to: Double.self, capacity: length2)
    var slots = [String]()
    for i in 0...sent.count - 1 {
        let doubleBuffer2 = UnsafeBufferPointer(start: doublePtr2 + i * 133, count: 133)
        let output2 = Array(doubleBuffer2)
        var slot = ""
        slot = convertVectorToSlot(vector: output2)
        slots.append(slot)
        slots.append(sent[i].description)
    }
    print(slots)
}

C++ debug/print custom type with GDB : the case of nlohmann json library

I'm working on a project using nlohmann's json C++ implementation.
How can one easily explore nlohmann's JSON keys/values in GDB?
I tried to use this STL gdb wrapping since it provides helpers to explore standard C++ library structures that nlohmann's JSON lib is using.
But I don't find it convenient.
Here is a simple use case:
json foo;
foo["flex"] = 0.2;
foo["awesome_str"] = "bleh";
foo["nested"] = {{"bar", "barz"}};
What I would like to have in GDB:
(gdb) p foo
{
    "flex" : 0.2,
    "awesome_str": "bleh",
    "nested": etc.
}
Current behavior
(gdb) p foo
$1 = {
  m_type = nlohmann::detail::value_t::object,
  m_value = {
    object = 0x129ccdd0,
    array = 0x129ccdd0,
    string = 0x129ccdd0,
    boolean = 208,
    number_integer = 312266192,
    number_unsigned = 312266192,
    number_float = 1.5427999782486669e-315
  }
}
(gdb) p foo.at("flex")
Cannot evaluate function -- may be inlined // I suppose it depends on my compilation process. But I guess it does not invalidate the question.
(gdb) p *foo.m_value.object
$2 = {
  _M_t = {
    _M_impl = {
      <std::allocator<std::_Rb_tree_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, nlohmann::basic_json<std::map, std::vector, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long long, unsigned long long, double, std::allocator, nlohmann::adl_serializer> > > >> = {
        <__gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, nlohmann::basic_json<std::map, std::vector, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, bool, long long, unsigned long long, double, std::allocator, nlohmann::adl_serializer> > > >> = {<No data fields>}, <No data fields>},
      <std::_Rb_tree_key_compare<std::less<void> >> = {
        _M_key_compare = {<No data fields>}
      },
      <std::_Rb_tree_header> = {
        _M_header = {
          _M_color = std::_S_red,
          _M_parent = 0x4d72d0,
          _M_left = 0x4d7210,
          _M_right = 0x4d7270
        },
        _M_node_count = 5
      }, <No data fields>}
  }
}
I found my own answer by reading further into GDB's capabilities and Stack Overflow questions about printing std::string.
The short path is the easiest.
The other path was hard, but I'm glad I managed to do it. There is lots of room for improvement.
There is an open issue for this particular matter here: https://github.com/nlohmann/json/issues/1952
Short path v3.1.2
I simply defined a gdb command as follows:
# this is a gdb script
# it can be loaded from gdb using
#   source my_script.txt (or .gdb, or whatever you like)
define pjson
  # use nlohmann's built-in dump method, indent 4 and use a space separator
  printf "%s\n", $arg0.dump(4, ' ', true).c_str()
end
# configure the command helper (text displayed when typing 'help pjson' in gdb)
document pjson
  Prints a nlohmann JSON C++ variable as a human-readable JSON string
end
Using it in gdb:
(gdb) source my_custom_script.gdb
(gdb) pjson foo
{
    "flex" : 0.2,
    "awesome_str": "bleh",
    "nested": {
        "bar": "barz"
    }
}
Short path v3.7.0 [EDIT] 2019-nov-06
One may also use the new to_string() method, but I could not get it to work within GDB with a live inferior process. The method below still works.
# this is a gdb script
# it can be loaded from gdb using
#   source my_script.txt (or .gdb, or whatever you like)
define pjson
  # use nlohmann's built-in dump method, indent 4 and use a space separator
  printf "%s\n", $arg0.dump(4, ' ', true, json::error_handler_t::strict).c_str()
end
# configure the command helper (text displayed when typing 'help pjson' in gdb)
document pjson
  Prints a nlohmann JSON C++ variable as a human-readable JSON string
end
April 18th 2020: WORKING FULL PYTHON GDB (with live inferior process and debug symbols)
Edit 2020-april-26: the offsets in the code here came out of trial and error and are NOT compatible with all platforms/JSON lib compilations. The GitHub project is much more mature regarding this matter (3 platforms tested so far). The code is left here as is, since I won't maintain 2 codebases.
versions:
https://github.com/nlohmann/json version 3.7.3
GNU gdb (GDB) 8.3 for GNAT Community 2019 [rev=gdb-8.3-ref-194-g3fc1095]
c++ project built with GPRBUILD/ GNAT Community 2019 (20190517) (x86_64-pc-mingw32)
The following python code shall be loaded within gdb. I use a .gdbinit file sourced in gdb.
Github repo: https://github.com/LoneWanderer-GH/nlohmann-json-gdb
GDB script
Feel free to adopt the loading method of your choice (auto, or not, or IDE plugin, whatever)
set print pretty
# source stl_parser.gdb # if you like the good work done with those STL containers GDB parsers
source printer.py # the python file is given below
python gdb.printing.register_pretty_printer(gdb.current_objfile(), build_pretty_printer())
Python script
import gdb
import platform
import sys
import traceback

# adapted from https://github.com/hugsy/gef/blob/dev/gef.py
# their rights are theirs
HORIZONTAL_LINE = "_"  # u"\u2500"
LEFT_ARROW = "<-"  # "\u2190 "
RIGHT_ARROW = "->"  # " \u2192 "
DOWN_ARROW = "|"  # "\u21b3"

nlohmann_json_type_namespace = \
    r"nlohmann::basic_json<std::map, std::vector, std::__cxx11::basic_string<char, std::char_traits<char>, " \
    r"std::allocator<char> >, bool, long long, unsigned long long, double, std::allocator, nlohmann::adl_serializer>"

# STD black magic
MAGIC_STD_VECTOR_OFFSET = 16  # win 10 x64 values, beware on your platform
MAGIC_OFFSET_STD_MAP = 32     # win 10 x64 values, beware on your platform

""""""
# GDB black magic
""""""
nlohmann_json_type = gdb.lookup_type(nlohmann_json_type_namespace).pointer()
# for in-memory direct jumps. a cast to a type is still necessary to obtain values,
# but this could be changed by changing the types to simpler ones ?
std_rb_tree_node_type = gdb.lookup_type("std::_Rb_tree_node_base::_Base_ptr").pointer()
std_rb_tree_size_type = gdb.lookup_type("std::size_t").pointer()

""""""
# nlohmann_json reminder. any interface change should be reflected here
# enum class value_t : std::uint8_t
# {
#     null,             ///< null value
#     object,           ///< object (unordered set of name/value pairs)
#     array,            ///< array (ordered collection of values)
#     string,           ///< string value
#     boolean,          ///< boolean value
#     number_integer,   ///< number value (signed integer)
#     number_unsigned,  ///< number value (unsigned integer)
#     number_float,     ///< number value (floating-point)
#     discarded         ///< discarded by the parser callback function
# };
""""""

enum_literals_namespace = ["nlohmann::detail::value_t::null",
                           "nlohmann::detail::value_t::object",
                           "nlohmann::detail::value_t::array",
                           "nlohmann::detail::value_t::string",
                           "nlohmann::detail::value_t::boolean",
                           "nlohmann::detail::value_t::number_integer",
                           "nlohmann::detail::value_t::number_unsigned",
                           "nlohmann::detail::value_t::number_float",
                           "nlohmann::detail::value_t::discarded"]

enum_literal_namespace_to_literal = dict([(e, e.split("::")[-1]) for e in enum_literals_namespace])

INDENT = 4  # beautiful isn't it ?


def std_stl_item_to_int_address(node):
    return int(str(node), 0)


def parse_std_str_from_hexa_address(hexa_str):
    # https://stackoverflow.com/questions/6776961/how-to-inspect-stdstring-in-gdb-with-no-source-code
    return '"{}"'.format(gdb.parse_and_eval("*(char**){}".format(hexa_str)).string())
class LohmannJSONPrinter(object):
    """Print a nlohmann::json in GDB python

    BEWARE:
     - Contains shitty string formatting (defining lists and playing with ",".join(...) could be better; indent management is stone-age style)
     - Parsing barely tested, only with a live inferior process.
     - It could possibly work with a core dump + debug symbols. TODO: read that stuff
       https://doc.ecoscentric.com/gnutools/doc/gdb/Core-File-Generation.html
     - No idea what happens with no symbols available; lots of fields are retrieved by name and should be changed to offsets if possible
     - NO LIB VERSION MANAGEMENT. TODO: determine if there are serious variants in nlohmann data structures that would justify working with structures
     - PLATFORM DEPENDENT. TODO: remove the black magic offsets or handle them in a nicer way

    NB: If you are a python-kaizer-style-guru, please consider helping or teaching how to improve all that mess
    """

    def __init__(self, val, indent_level=0):
        self.val = val
        self.field_type_full_namespace = None
        self.field_type_short = None
        self.indent_level = indent_level
        self.function_map = {"nlohmann::detail::value_t::null": self.parse_as_leaf,
                             "nlohmann::detail::value_t::object": self.parse_as_object,
                             "nlohmann::detail::value_t::array": self.parse_as_array,
                             "nlohmann::detail::value_t::string": self.parse_as_str,
                             "nlohmann::detail::value_t::boolean": self.parse_as_leaf,
                             "nlohmann::detail::value_t::number_integer": self.parse_as_leaf,
                             "nlohmann::detail::value_t::number_unsigned": self.parse_as_leaf,
                             "nlohmann::detail::value_t::number_float": self.parse_as_leaf,
                             "nlohmann::detail::value_t::discarded": self.parse_as_leaf}
    def parse_as_object(self):
        assert (self.field_type_short == "object")
        o = self.val["m_value"][self.field_type_short]

        # traversing the tree is an adapted copy-pasta from the STL gdb parser
        # (http://www.yolinux.com/TUTORIALS/src/dbinit_stl_views-1.03.txt and similar links)
        #
        # Simple GDB Macros written by Dan Marinescu (H-PhD) - License GPL
        # Inspired by initial work of Tom Malnar,
        #   Tony Novac (PhD) / Cornell / Stanford,
        #   Gilad Mishne (PhD) and Many Many Others.
        # Contact: dan_c_marinescu@yahoo.com (Subject: STL)
        #
        # Modified to work with g++ 4.3 by Anders Elton
        # Also added _member functions, that instead of printing the entire class in map, prints a member.

        node = o["_M_t"]["_M_impl"]["_M_header"]["_M_left"]
        # end = o["_M_t"]["_M_impl"]["_M_header"]
        tree_size = o["_M_t"]["_M_impl"]["_M_node_count"]

        # in memory alternatives:
        _M_t = std_stl_item_to_int_address(o.referenced_value().address)
        _M_t_M_impl_M_header_M_left = _M_t + 8 + 16       # adding bits
        _M_t_M_impl_M_node_count = _M_t + 8 + 16 + 16     # adding bits

        node = gdb.Value(long(_M_t_M_impl_M_header_M_left)).cast(std_rb_tree_node_type).referenced_value()
        tree_size = gdb.Value(long(_M_t_M_impl_M_node_count)).cast(std_rb_tree_size_type).referenced_value()

        i = 0
        if tree_size == 0:
            return "{}"
        else:
            s = "{\n"
            self.indent_level += 1
            while i < tree_size:
                # the STL GDB scripts write "+1", which in my w10 x64 GDB makes a +32 bits move ...
                # may be platform dependent and should be taken with caution
                key_address = std_stl_item_to_int_address(node) + MAGIC_OFFSET_STD_MAP
                # print(key_object['_M_dataplus']['_M_p'])
                k_str = parse_std_str_from_hexa_address(hex(key_address))

                # offset = MAGIC_OFFSET_STD_MAP
                value_address = key_address + MAGIC_OFFSET_STD_MAP
                value_object = gdb.Value(long(value_address)).cast(nlohmann_json_type)

                v_str = LohmannJSONPrinter(value_object, self.indent_level + 1).to_string()

                k_v_str = "{} : {}".format(k_str, v_str)
                end_of_line = "\n" if tree_size <= 1 or i == tree_size else ",\n"

                s = s + (" " * (self.indent_level * INDENT)) + k_v_str + end_of_line  # ",\n"

                if std_stl_item_to_int_address(node["_M_right"]) != 0:
                    node = node["_M_right"]
                    while std_stl_item_to_int_address(node["_M_left"]) != 0:
                        node = node["_M_left"]
                else:
                    tmp_node = node["_M_parent"]
                    while std_stl_item_to_int_address(node) == std_stl_item_to_int_address(tmp_node["_M_right"]):
                        node = tmp_node
                        tmp_node = tmp_node["_M_parent"]
                    if std_stl_item_to_int_address(node["_M_right"]) != std_stl_item_to_int_address(tmp_node):
                        node = tmp_node
                i += 1
            self.indent_level -= 2
            s = s + (" " * (self.indent_level * INDENT)) + "}"
        return s
    def parse_as_str(self):
        return parse_std_str_from_hexa_address(str(self.val["m_value"][self.field_type_short]))

    def parse_as_leaf(self):
        s = "WTFBBQ !"
        if self.field_type_short == "null" or self.field_type_short == "discarded":
            s = self.field_type_short
        elif self.field_type_short == "string":
            s = self.parse_as_str()
        else:
            s = str(self.val["m_value"][self.field_type_short])
        return s

    def parse_as_array(self):
        assert (self.field_type_short == "array")
        o = self.val["m_value"][self.field_type_short]
        start = o["_M_impl"]["_M_start"]
        size = o["_M_impl"]["_M_finish"] - start
        # capacity = o["_M_impl"]["_M_end_of_storage"] - start
        # size_max = size - 1
        i = 0
        start_address = std_stl_item_to_int_address(start)
        if size == 0:
            s = "[]"
        else:
            self.indent_level += 1
            s = "[\n"
            while i < size:
                # the STL GDB scripts write "+1", which in my w10 x64 GDB makes a +16 bits move ...
                offset = i * MAGIC_STD_VECTOR_OFFSET
                i_address = start_address + offset
                value_object = gdb.Value(long(i_address)).cast(nlohmann_json_type)
                v_str = LohmannJSONPrinter(value_object, self.indent_level + 1).to_string()
                end_of_line = "\n" if size <= 1 or i == size else ",\n"
                s = s + (" " * (self.indent_level * INDENT)) + v_str + end_of_line
                i += 1
            self.indent_level -= 2
            s = s + (" " * (self.indent_level * INDENT)) + "]"
        return s

    def is_leaf(self):
        return self.field_type_short != "object" and self.field_type_short != "array"

    def parse_as_aggregate(self):
        if self.field_type_short == "object":
            s = self.parse_as_object()
        elif self.field_type_short == "array":
            s = self.parse_as_array()
        else:
            s = "WTFBBQ !"
        return s

    def parse(self):
        # s = "WTFBBQ !"
        if self.is_leaf():
            s = self.parse_as_leaf()
        else:
            s = self.parse_as_aggregate()
        return s
    def to_string(self):
        try:
            self.field_type_full_namespace = self.val["m_type"]
            str_val = str(self.field_type_full_namespace)
            if not str_val in enum_literal_namespace_to_literal:
                return "TIMMY !"
            self.field_type_short = enum_literal_namespace_to_literal[str_val]
            return self.function_map[str_val]()
            # return self.parse()
        except:
            show_last_exception()
            return "NOT A JSON OBJECT // CORRUPTED ?"

    def display_hint(self):
        return self.val.type
# adapted from https://github.com/hugsy/gef/blob/dev/gef.py
# inspired by https://stackoverflow.com/questions/44733195/gdb-python-api-getting-the-python-api-of-gdb-to-print-the-offending-line-numbe
def show_last_exception():
    """Display the last Python exception."""
    print("")
    exc_type, exc_value, exc_traceback = sys.exc_info()

    print(" Exception raised ".center(80, HORIZONTAL_LINE))
    print("{}: {}".format(exc_type.__name__, exc_value))
    print(" Detailed stacktrace ".center(80, HORIZONTAL_LINE))
    for (filename, lineno, method, code) in traceback.extract_tb(exc_traceback)[::-1]:
        print("""{} File "{}", line {:d}, in {}()""".format(DOWN_ARROW, filename, lineno, method))
        print("   {}    {}".format(RIGHT_ARROW, code))
    print(" Last 10 GDB commands ".center(80, HORIZONTAL_LINE))
    gdb.execute("show commands")
    print(" Runtime environment ".center(80, HORIZONTAL_LINE))
    print("* GDB: {}".format(gdb.VERSION))
    print("* Python: {:d}.{:d}.{:d} - {:s}".format(sys.version_info.major, sys.version_info.minor,
                                                   sys.version_info.micro, sys.version_info.releaselevel))
    print("* OS: {:s} - {:s} ({:s}) on {:s}".format(platform.system(), platform.release(),
                                                    platform.architecture()[0],
                                                    " ".join(platform.dist())))
    print(HORIZONTAL_LINE * 80)
    print("")
    exit(-6000)


def build_pretty_printer():
    pp = gdb.printing.RegexpCollectionPrettyPrinter("nlohmann_json")
    pp.add_printer(nlohmann_json_type_namespace, "^{}$".format(nlohmann_json_type_namespace), LohmannJSONPrinter)
    return pp


######
# executed at autoload (or to be executed manually in GDB)
# gdb.printing.register_pretty_printer(gdb.current_objfile(), build_pretty_printer())
BEWARE:
- Contains shitty string formatting (defining lists and playing with ",".join(...) could be better; indent management is stone-age style)
- Parsing barely tested, only with a live inferior process.
- It could possibly work with a core dump + debug symbols. TODO: read that stuff
  https://doc.ecoscentric.com/gnutools/doc/gdb/Core-File-Generation.html
- No idea what happens with no symbols available; lots of fields are retrieved by name and should be changed to offsets if possible
- NO LIB VERSION MANAGEMENT. TODO: determine if there are serious variants in nlohmann data structures that would justify working with structures
- PLATFORM DEPENDENT. TODO: remove the black magic offsets or handle them in a nicer way
NB: If you are a python-kaizer-style-guru, please consider helping or teaching how to improve all that mess
Some (light) tests:
gpr file:
project Debug_Printer is

   for Source_Dirs use ("src", "include");
   for Object_Dir use "obj";
   for Main use ("main.cpp");
   for Languages use ("C++");

   package Naming is
      for Spec_Suffix ("c++") use ".hpp";
   end Naming;

   package Compiler is
      for Switches ("c++") use ("-O3", "-Wall", "-Woverloaded-virtual", "-g");
   end Compiler;

   package Linker is
      for Switches ("c++") use ("-g");
   end Linker;

end Debug_Printer;
main.cpp
#include "json.hpp" // I am using the standalone json.hpp from the repo release
#include <iostream>

using json = nlohmann::json;

int main() {
    json fooz;
    fooz = 0.7;

    json arr = {3, "25", 0.5};

    json one;
    one["first"] = "second";

    json foo;
    foo["flex"] = 0.2;
    foo["bool"] = true;
    foo["int"] = 5;
    foo["float"] = 5.22;
    foo["trap "] = "you fell";
    foo["awesome_str"] = "bleh";
    foo["nested"] = {{"bar", "barz"}};
    foo["array"] = { 1, 0, 2 };

    std::cout << "fooz" << std::endl;
    std::cout << fooz.dump(4) << std::endl << std::endl;

    std::cout << "arr" << std::endl;
    std::cout << arr.dump(4) << std::endl << std::endl;

    std::cout << "one" << std::endl;
    std::cout << one.dump(4) << std::endl << std::endl;

    std::cout << "foo" << std::endl;
    std::cout << foo.dump(4) << std::endl << std::endl;

    json mixed_nested;
    mixed_nested["Jean"] = fooz;
    mixed_nested["Baptiste"] = one;
    mixed_nested["Emmanuel"] = arr;
    mixed_nested["Zorg"] = foo;

    std::cout << "5th element" << std::endl;
    std::cout << mixed_nested.dump(4) << std::endl << std::endl;

    return 0;
}
outputs:
(gdb) source .gdbinit
Breakpoint 1, main () at F:\DEV\Projets\nlohmann.json\src\main.cpp:45
(gdb) p mixed_nested
$1 = {
    "Baptiste" : {
        "first" : "second"
    },
    "Emmanuel" : [
        3,
        "25",
        0.5,
    ],
    "Jean" : 0.69999999999999996,
    "Zorg" : {
        "array" : [
            1,
            0,
            2,
        ],
        "awesome_str" : "bleh",
        "bool" : true,
        "flex" : 0.20000000000000001,
        "float" : 5.2199999999999998,
        "int" : 5,
        "nested" : {
            "bar" : "barz"
        },
        "trap " : "you fell",
    },
}
Edit 2019-march-24: add precision given by Employed Russian.
Edit 2020-april-18: after a long night of struggling with python/gdb/stl I had something working by way of the GDB documentation for Python pretty printers. Please forgive any mistakes or misconceptions; I banged my head on this a whole night and everything is flurry-blurry now.
Edit 2020-april-18 (2): the rb tree node and tree_size could be traversed in a more "in-memory" way (see above).
Edit 2020-april-26: add a warning concerning the GDB python pretty printer.
My solution was to edit the ~/.gdbinit file.
define jsontostring
  printf "%s\n", $arg0.dump(2, ' ', true, nlohmann::detail::error_handler_t::strict).c_str()
end
This makes the "jsontostring" command available on every gdb session without the need of sourcing any files.
(gdb) jsontostring object

Python convert C header file to dict

I have a C header file which contains a series of classes, and I'm trying to write a function that will take those classes and convert them to a Python dict. A sample of the file is down the bottom.
Format would be something like
class CFGFunctions {
    class ABC {
        class AA {
            file = "abc/aa/functions"
            class myFuncName{ recompile = 1; };
        };
        class BB
        {
            file = "abc/bb/functions"
            class funcName{
                recompile=1;
            }
        }
    };
};
I'm hoping to turn it into something like
{CFGFunctions:{ABC:{AA:"myFuncName"}, BB:...}}
# Or
{CFGFunctions:{ABC:{AA:{myFuncName:"string or list or something"}, BB:...}}}
In the end, I'm aiming to get the filepath string (which is actually a path to a folder... but anyway), and the class names in the same class as the file/folder path.
I've had a look on SO, and Google and so on, but most things I've found have been about splitting lines into dicts, rather than n-deep 'blocks'.
I know I'll have to loop through the file, however, I'm not sure the most efficient way to convert it to the dict.
I'm thinking I'd need to grab the outside class and its relevant brackets, then do the same for the text remaining inside.
If none of that makes sense, it's cause I haven't quite made sense of the process myself haha
If any more info is needed, I'm happy to provide.
The following code is a quick mockup of what I'm sort of thinking...
It is most likely BROKEN and probably does NOT WORK, but it's the sort of process I'm thinking of.
def get_data():
    fh = open('CFGFunctions.h', 'r')
    data = {}  # will contain final data model
    # would probably refactor some of this into a function to allow better looping
    start = ""  # starting class name
    brackets = 0  # number of brackets
    text = ""  # temp storage for lines inside block while looping
    for line in fh:
        # find the class (start)
        mt = re.match(r'Class ([\w_]+) {', line)
        if mt:
            if start == "":
                start = mt.group(1)
        else:
            # once we have the first class, find all other open brackets
            mt = re.match(r'{', line)
            if mt:
                # and inc our counter
                brackets += 1
            mt2 = re.match(r'}', line)
            if mt2:
                # find the close, and decrement
                brackets -= 1
                # if we are back to the initial block, break out of the loop
                if brackets == 0:
                    break
        text += line
    data[start] = {'tempText': text}
====
Sample file
class CfgFunctions {
    class ABC {
        class Control {
            file = "abc\abc_sys_1\Modules\functions";
            class assignTracker {
                description = "";
                recompile = 1;
            };
            class modulePlaceMarker {
                description = "";
                recompile = 1;
            };
        };
        class Devices
        {
            file = "abc\abc_sys_1\devices\functions";
            class registerDevice { recompile = 1; };
            class getDeviceSettings { recompile = 1; };
            class openDevice { recompile = 1; };
        };
    };
};
EDIT:
If possible, if I have to use a package, I'd like to have it in the programs directory, not the general python libs directory.
As you detected, parsing is necessary to do the conversion. Have a look at the package PyParsing, which is a fairly easy-to-use library to implement parsing in your Python program.
Edit: This is a very symbolic version of what it would take to recognize a very minimalistic grammar - somewhat like the example at the top of the question. It won't work, but it might put you in the right direction:
from pyparsing import ZeroOrMore, OneOrMore, \
    Keyword, Literal

test_code = """
class CFGFunctions {
    class ABC {
        class AA {
            file = "abc/aa/functions"
            class myFuncName{ recompile = 1; };
        };
        class BB
        {
            file = "abc/bb/functions"
            class funcName{
                recompile=1;
            }
        }
    };
};
"""

class_tkn = Keyword('class')
lbrace_tkn = Literal('{')
rbrace_tkn = Literal('}')
semicolon_tkn = Keyword(';')
assign_tkn = Keyword(';')

class_block = ( class_tkn + identifier + lbrace_tkn + \
                OneOrMore(class_block | ZeroOrMore(assignment)) + \
                rbrace_tkn + semicolon_tkn \
              )

def test_parser(test):
    try:
        results = class_block.parseString(test)
        print test, ' -> ', results
    except ParseException, s:
        print "Syntax error:", s

def main():
    test_parser(test_code)
    return 0

if __name__ == '__main__':
    main()
Also, this code is only the parser - it does not generate any output. As you can see in the PyParsing docs, you can later add the actions you want. But the first step would be to recognize what you want to translate.
And a last note: Do not underestimate the complexities of parsing code... Even with a library like PyParsing, which takes care of much of the work, there are many ways to get mired in infinite loops and other amenities of parsing. Implement things step-by-step!
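For a more concrete starting point, here is a small runnable sketch in the same spirit, written against current pyparsing and Python 3. The grammar names (identifier, assignment, class_block) and the to_dict helper are my own, and it only covers the minimal shape of the samples in the question (nested classes, string/number assignments, optional semicolons):

from pyparsing import (Forward, Group, Keyword, Optional, QuotedString,
                       Suppress, Word, ZeroOrMore, alphanums, alphas, nums)

identifier = Word(alphas + "_", alphanums + "_")
value = QuotedString('"') | Word(nums)

# an assignment like:  file = "abc/aa/functions";  (the semicolon is optional in the samples)
assignment = Group(identifier + Suppress("=") + value + Optional(Suppress(";")))

# a class block like:  class AA { ... };  (recursive, so it is declared with Forward)
class_block = Forward()
class_body = ZeroOrMore(class_block | assignment)
class_block <<= Group(Suppress(Keyword("class")) + identifier +
                      Suppress("{") + Group(class_body) + Suppress("}") +
                      Optional(Suppress(";")))

def to_dict(parsed):
    # turn the parse results into nested dicts; leaf classes become empty dicts
    result = {}
    for name, contents in parsed:
        result[name] = contents if isinstance(contents, str) else to_dict(contents)
    return result

if __name__ == '__main__':
    with open('CfgFunctions.h') as fh:
        tree = class_body.parseString(fh.read(), parseAll=True)
    print(to_dict(tree))

On the sample file at the bottom of the question this should print a structure roughly like {'CfgFunctions': {'ABC': {'Control': {'file': '...', 'assignTracker': {...}, ...}, 'Devices': {...}}}}, which is close to the dict shape asked for at the top.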
EDIT: A few sources for information on PyParsing are:
http://werc.engr.uaf.edu/~ken/doc/python-pyparsing/HowToUsePyparsing.html
http://pyparsing.wikispaces.com/
(Particularly interesting is http://pyparsing.wikispaces.com/Publications, with a long list of articles - several of them introductory - on PyParsing)
http://pypi.python.org/pypi/pyparsing_helper is a GUI for debugging parsers
There is also a 'pyparsing' tag here on Stack Overflow, where Paul McGuire (the PyParsing author) seems to be a frequent guest.
NOTE:
From PaulMcG in the comments below: Pyparsing is no longer hosted on wikispaces.com. Go to github.com/pyparsing/pyparsing

How to color specific part of the text with some separator

I'm trying to color a specific part of the text. I have tried:
if word.strip().startswith(":"):
self.setAttributesForRange(NSColor.greenColor(), None, highlightOffset, len(word))
When someone types the sign : the word gets colored green. That is good, but it keeps coloring the words after it, like this:
:Hello Hello :Hello <---- this all gets colored green, but I want something like:
:Hello Hello :Hello <---- where everything gets colored except the middle "Hello", because it doesn't start with the sign :. Please help me out.
from Foundation import *
from AppKit import *
import objc

class PyObjC_HighlightAppDelegate(NSObject):
    # The connection to our NSTextView in the UI
    highlightedText = objc.IBOutlet()

    # Default font size to use when highlighting
    fontSize = 12

    def applicationDidFinishLaunching_(self, sender):
        NSLog("Application did finish launching.")

    def textDidChange_(self, notification):
        """
        Delegate method called by the NSTextView whenever the contents of the
        text view have changed. This is called after the text has changed and
        been committed to the view. See the Cocoa reference documents:

        http://developer.apple.com/documentation/Cocoa/Reference/ApplicationKit/Classes/NSText_Class/Reference/Reference.html
        http://developer.apple.com/documentation/Cocoa/Reference/ApplicationKit/Classes/NSTextView_Class/Reference/Reference.html

        Specifically the sections on Delegate Methods for information on additional
        delegate methods relating to text control in NSTextView objects.
        """
        # Retrieve the current contents of the document and start highlighting
        content = self.highlightedText.string()
        self.highlightText(content)

    def setAttributesForRange(self, color, font, rangeStart, rangeLength):
        """
        Set the visual attributes for a range of characters in the NSTextView. If
        values for the color and font are None, defaults will be used.

        The rangeStart is an index into the contents of the NSTextView, and
        rangeLength is used in combination with this index to create an NSRange
        structure, which is passed to the NSTextView methods for setting
        text attributes. If either of these values are None, defaults will
        be provided.

        The "font" parameter is used as a key for the "fontMap", which contains
        the associated NSFont objects for each font style.
        """
        fontMap = {
            "normal": NSFont.systemFontOfSize_(self.fontSize),
            "bold": NSFont.boldSystemFontOfSize_(self.fontSize)
        }

        # Setup sane defaults for the color, font and range if no values
        # are provided
        if color is None:
            color = NSColor.blackColor()
        if font is None:
            font = "normal"
        if font not in fontMap:
            font = "normal"
        displayFont = fontMap[font]
        if rangeStart is None:
            rangeStart = 0
        if rangeLength is None:
            rangeLength = len(self.highlightedText.string()) - rangeStart

        # Set the attributes for the specified character range
        range = NSRange(rangeStart, rangeLength)
        self.highlightedText.setTextColor_range_(color, range)
        self.highlightedText.setFont_range_(displayFont, range)

    def highlightText(self, content):
        """
        Apply our customized highlighting to the provided content. It is assumed that
        this content was extracted from the NSTextView.
        """
        # Calling setAttributesForRange with no values creates
        # a default that "resets" the formatting on all of the content
        self.setAttributesForRange(None, None, None, None)

        # We'll highlight the content by breaking it down into lines, and
        # processing each line one by one. By storing how many characters
        # have been processed we can maintain an "offset" into the overall
        # content that we use to specify the range of text that is currently
        # being highlighted.
        contentLines = content.split("\n")
        highlightOffset = 0

        for line in contentLines:
            if line.strip().startswith("#"):
                # Comment - we want to highlight the whole comment line
                self.setAttributesForRange(NSColor.greenColor(), None, highlightOffset, len(line))
            elif line.find(":") > -1:
                # Tag - we only want to highlight the tag, not the colon or the remainder of the line
                startOfLine = line[0:line.find(":")]
                yamlTag = startOfLine.strip("\t ")
                yamlTagStart = line.find(yamlTag)
                self.setAttributesForRange(NSColor.blueColor(), "bold", highlightOffset + yamlTagStart, len(yamlTag))
            elif line.strip().startswith("-"):
                # List item - we only want to highlight the dash
                listIndex = line.find("-")
                self.setAttributesForRange(NSColor.redColor(), None, highlightOffset + listIndex, 1)

            # Add the processed line to our offset, as well as the newline that terminated the line
            highlightOffset += len(line) + 1
It all depends on what word is.
In [6]: word = ':Hello Hello :Hello'
In [7]: word.strip().startswith(':')
Out[7]: True
In [8]: len(word)
Out[8]: 19
Compare:
In [1]: line = ':Hello Hello :Hello'.split()
In [2]: line
Out[2]: [':Hello', 'Hello', ':Hello']
In [3]: for word in line:
   ...:     print word.strip().startswith(':')
   ...:     print len(word)
   ...:
True
6
False
5
True
6
Notice the difference in len(word), which I suspect is causing your problem.
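Building on that, one way to fix the original code is to split each line into words and track each word's offset, so only the word that actually starts with ":" gets colored. Here is a small standalone sketch of the offset logic; the ranges it returns are what you would pass (plus highlightOffset) to setAttributesForRange from the question's code:

def tagged_word_ranges(line):
    # Return (start, length) for every word in `line` that begins with ':'
    ranges = []
    searchFrom = 0
    for word in line.split():
        wordStart = line.find(word, searchFrom)  # position of this word within the line
        if word.startswith(":"):
            ranges.append((wordStart, len(word)))
        searchFrom = wordStart + len(word)
    return ranges

print(tagged_word_ranges(":Hello Hello :Hello"))  # [(0, 6), (13, 6)]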
