Ignoring objects for PyYAML dump - python

I use PyYAML dump to print complex data structures, but there is one class of objects that cannot, and that I also do not want to, be dumped.
Currently I get:
yaml.representer.RepresenterError: cannot represent an object
I would like yaml.dump to completely ignore this particular class or just render the classname and continue as usual.
If this is possible, how can I do that?

You'll have to provide a representer for the object. There are multiple ways to do that, some involving changing the object.
When you explicitly register a representer, the object doesn't have to be changed:
import sys
from ruamel import yaml
class Secret():
    def __init__(self, user, password):
        self.user = user
        self.password = password

def secret_representer(dumper, data):
    return dumper.represent_scalar(u'!secret', u'unknown')

yaml.add_representer(Secret, secret_representer)

data = dict(a=1, b=2, c=[42, Secret(user='cary', password='knoop')])
yaml.dump(data, sys.stdout)
In secret_representer, data is the instantiated Secret(); since the function doesn't use it, no "secrets" are leaked. You could also, e.g., return the user name but not the password. The represent_scalar function expects a tag (here I used !secret) and a scalar (here the string unknown).
The output of the above:
a: 1
b: 2
c: [42, !secret '[unknown]']
I am using ruamel.yaml in the above, which is an enhanced version of PyYAML (disclaimer: I am the author of that package). The above should work with PyYAML as well.
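If you want to stay on plain PyYAML and just render the class name (the behaviour the question asks for), a representer along these lines should also work. This is only a sketch; UnPicklable stands in for whichever class cannot be dumped:

import sys
import yaml  # plain PyYAML

class UnPicklable:
    # stand-in for the class that cannot be represented
    pass

def classname_representer(dumper, data):
    # render the object as a plain string containing its class name
    return dumper.represent_scalar('tag:yaml.org,2002:str', type(data).__name__)

yaml.add_representer(UnPicklable, classname_representer)

yaml.dump({'a': 1, 'obj': UnPicklable()}, sys.stdout)
# a: 1
# obj: UnPicklable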

Related

Dataclass in python when the attribute doesn't respect naming rules

If you have data like this (from a yaml file):
items:
C>A/G>T: "#string"
C>G/G>C: "#string"
...
How would you load that into a dataclass that is explicit about the keys and types it has?
Ideally I would have:
@dataclasses.dataclass
class X:
    C>A/G>T: str
    C>G/G>C: str
    ...
Update:
SBS_Mutations = TypedDict(
    "SBS_Mutations",
    {
        "C>A/G>T": str,
        "C>G/G>C": str,
        "C>T/G>A": str,
        "T>A/A>T": str,
        "T>C/A>G": str,
        "T>G/A>C": str,
    },
)

my_data = {....}
SBS_Mutations(my_data)  # not sure how to use it here
If you want symbols like that, they obviously can't be Python identifiers, and then it is meaningless to want the attribute-access facilities that a dataclass gives you.
Just keep your data in dictionaries, or in Pandas dataframes, where such names can be column titles.
Otherwise, post a proper code snippet with a minimal example of where you are getting the data from; then an answer can show a proper place to translate your original name into a valid Python attribute name and help build a dynamic dataclass with it.
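As a rough sketch of that last suggestion (none of this is from the question; the sanitize helper and the resulting attribute names are just illustrations), you can translate the keys and build a dataclass dynamically with dataclasses.make_dataclass:

import re
import dataclasses

raw = {"C>A/G>T": "#string", "C>G/G>C": "#string"}

def sanitize(key):
    # e.g. "C>A/G>T" -> "c_a_g_t"
    return re.sub(r'[^0-9a-zA-Z]+', '_', key).strip('_').lower()

name_map = {sanitize(k): k for k in raw}

SBSMutations = dataclasses.make_dataclass(
    "SBSMutations", [(attr, str) for attr in name_map]
)

row = SBSMutations(**{attr: raw[orig] for attr, orig in name_map.items()})
print(row.c_a_g_t)  # '#string'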
This sounds like a good use case for my dotwiz library, which I have recently published. This provides a dict subclass which enables attribute-style dot access for nested keys.
As of the recent release, it offers a DotWizPlus implementation (a wrapper around a dict object) that also case-transforms keys so that they are valid lower-cased Python identifier names, as shown below.
# requires the following dependencies:
# pip install PyYAML dotwiz
import yaml
from dotwiz import DotWizPlus
yaml_str = """
items:
C>A/G>T: "#string"
C>G/G>C: "#string"
"""
yaml_dict = yaml.safe_load(yaml_str)
print(yaml_dict)
dw = DotWizPlus(yaml_dict)
print(dw)
assert dw.items.c_a_g_t == '#string' # True
print(dw.to_attr_dict())
Output:
{'items': {'C>A/G>T': '#string', 'C>G/G>C': '#string'}}
✪(items=✪(c_a_g_t='#string', c_g_g_c='#string'))
{'items': {'c_a_g_t': '#string', 'c_g_g_c': '#string'}}
NB: This currently fails when accessing the key items from a plain DotWiz instance, as the key name conflicts with the built-in dict.items() method. I've submitted a bug report and will hopefully work through this particular edge case.
Type Hinting
If you want type-hinting or auto-suggestions for field names, you can try something like this where you subclass from DotWizPlus:
import yaml
from dotwiz import DotWizPlus
class Item(DotWizPlus):
    c_a_g_t: str
    c_g_g_c: str

    @classmethod
    def from_yaml(cls, yaml_string: str, loader=yaml.safe_load):
        yaml_dict = loader(yaml_string)
        return cls(yaml_dict['items'])
yaml_str = """
items:
C>A/G>T: "#string1"
C>G/G>C: "#string2"
"""
dw = Item.from_yaml(yaml_str)
print(dw)
# ✪(c_a_g_t='#string1', c_g_g_c='#string2')
assert dw.c_a_g_t == '#string1' # True
# auto-completion will work, as IDE knows the type is a `str`
# dw.c_a_g_t.
Dataclasses
If you would still prefer dataclasses for type-hinting purposes, there is another library you can also check out called dataclass-wizard, which can help to simplify this task as well.
More specifically, YAMLWizard makes it easier to load/dump a class object with YAML. Note that this uses the PyYAML library behind the scenes by default.
Note that I couldn't get the case transform to work here; I guess it's a bug in the underlying to_snake_case() implementation, and I'm going to submit a bug report for this edge case as well. For now, it works if the key name in YAML is mapped a bit more explicitly:
from dataclasses import dataclass
from dataclass_wizard import YAMLWizard, json_field
yaml_str = """
items:
C>A/G>T: "#string"
C>G/G>C: "#string"
"""
#dataclass
class Container(YAMLWizard):
items: 'Item'
#dataclass
class Item:
c_a_g_t: str = json_field('C>A/G>T')
c_g_g_c: str = json_field('C>G/G>C')
c = Container.from_yaml(yaml_str)
print(c)
# True
assert c.items.c_g_g_c == c.items.c_a_g_t == '#string'
Output:
Container(items=Item(c_a_g_t='#string', c_g_g_c='#string'))

Why use MongoAlchemy when you could subclass a Python Dict?

A friend recently showed me that you can create a subclass of dict in Python, and then use instances of that class to save, update, etc. Seems like you have more control, and it looks easier as well.
class Marker(dict):
    def __init__(self, username, email=None):
        self.username = username
        if email:
            self.email = email

    @property
    def username(self):
        return self.get('username')

    @username.setter
    def username(self, val):
        self['username'] = val

    def save(self):
        db.collection.save(self)
Author here. The general reason you'd want to use it (or one of the many similar libraries) is safety. When you assign a value to a MongoAlchemy Document, it checks that all of the constraints you specified are satisfied (e.g. type, lengths of strings, numeric bounds).
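As a rough illustration (this is not from the original answer, and the exact field arguments may differ from the current MongoAlchemy API), a Document with such constraints might look something like this:

from mongoalchemy.document import Document
from mongoalchemy.fields import StringField, IntField

class BloodDonor(Document):
    # assignments to these fields are validated against the declared constraints
    first_name = StringField(max_length=50)
    last_name = StringField(max_length=50)
    age = IntField(min_value=0, max_value=120)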
It also has a query DSL that can be more pleasant to use than the json-like built in syntax. Here's an example from the docs:
>>> query = session.query(BloodDonor)
>>> for donor in query.filter(BloodDonor.first_name == 'Jeff', BloodDonor.age < 30):
>>> print donor
Jeff Jenkins (male; Age: 28; Type: O+)
The MongoAlchemy Session object also allows you to simulate transactions:
with session:
    do_stuff()
    session.insert(doc1)
    do_more_stuff()
    session.insert(doc2)
    do_even_more_stuff()
    session.insert(doc3)
    # note that at this point nothing has been inserted
# now things are inserted
This doesn't mean that these inserts are one atomic operation, or even that all of the writes will succeed, but it does mean that if your application has errors in the "do_stuff" functions you won't have done half of the inserts. So it prevents a specific and reasonably common type of error.

True way to use settings in a Python project [duplicate]

In my endless quest of over-complicating simple stuff, I am researching the most 'Pythonic' way to provide global configuration variables inside the typical 'config.py' found in Python egg packages.
The traditional way (aah, good ol' #define!) is as follows:
MYSQL_PORT = 3306
MYSQL_DATABASE = 'mydb'
MYSQL_DATABASE_TABLES = ['tb_users', 'tb_groups']
The global variables are then imported in one of the following ways:
from config import *
dbname = MYSQL_DATABASE
for table in MYSQL_DATABASE_TABLES:
    print table
or:
import config
dbname = config.MYSQL_DATABASE
assert(isinstance(config.MYSQL_PORT, int))
It makes sense, but sometimes can be a little messy, especially when you're trying to remember the names of certain variables. Besides, providing a 'configuration' object, with variables as attributes, might be more flexible. So, taking a lead from bpython config.py file, I came up with:
class Struct(object):
    def __init__(self, *args):
        self.__header__ = str(args[0]) if args else None

    def __repr__(self):
        if self.__header__ is None:
            return super(Struct, self).__repr__()
        return self.__header__

    def next(self):
        """ Fake iteration functionality.
        """
        raise StopIteration

    def __iter__(self):
        """ Fake iteration functionality.
        We skip magic attributes and Structs, and return the rest.
        """
        ks = self.__dict__.keys()
        for k in ks:
            if not k.startswith('__') and not isinstance(k, Struct):
                yield getattr(self, k)

    def __len__(self):
        """ Don't count magic attributes or Structs.
        """
        ks = self.__dict__.keys()
        return len([k for k in ks if not k.startswith('__')
                    and not isinstance(k, Struct)])
and a 'config.py' that imports the class and reads as follows:
from _config import Struct as Section
mysql = Section("MySQL specific configuration")
mysql.user = 'root'
mysql.passwd = 'secret'  # 'pass' is a reserved word in Python
mysql.host = 'localhost'
mysql.port = 3306
mysql.database = 'mydb'
mysql.tables = Section("Tables for 'mydb'")
mysql.tables.users = 'tb_users'
mysql.tables.groups = 'tb_groups'
and is used in this way:
from sqlalchemy import MetaData, Table
import config as CONFIG
assert(isinstance(CONFIG.mysql.port, int))
mdata = MetaData(
    "mysql://%s:%s@%s:%d/%s" % (
        CONFIG.mysql.user,
        CONFIG.mysql.passwd,
        CONFIG.mysql.host,
        CONFIG.mysql.port,
        CONFIG.mysql.database,
    )
)

tables = []
for name in CONFIG.mysql.tables:
    tables.append(Table(name, mdata, autoload=True))
This seems a more readable, expressive and flexible way of storing and fetching global variables inside a package.
Lamest idea ever? What is the best practice for coping with these situations? What is your way of storing and fetching global names and variables inside your package?
How about just using the built-in types like this:
config = {
    "mysql": {
        "user": "root",
        "pass": "secret",
        "tables": {
            "users": "tb_users"
        }
        # etc
    }
}
You'd access the values as follows:
config["mysql"]["tables"]["users"]
If you are willing to sacrifice the potential to compute expressions inside your config tree, you could use YAML and end up with a more readable config file like this:
mysql:
  - user: root
  - pass: secret
  - tables:
      - users: tb_users
and use a library like PyYAML to conveniently parse and access the config file.
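As a quick sketch of what that looks like once loaded (not part of the original answer), note that the leading dashes turn each entry into a one-key dict inside a list:

import yaml  # pip install PyYAML

text = """
mysql:
  - user: root
  - pass: secret
  - tables:
      - users: tb_users
"""

cfg = yaml.safe_load(text)
# {'mysql': [{'user': 'root'}, {'pass': 'secret'}, {'tables': [{'users': 'tb_users'}]}]}
print(cfg['mysql'][0]['user'])  # root

Dropping the dashes would give plain nested mappings (cfg['mysql']['user']), which is usually more convenient for a config file.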
I like this solution for small applications:
class App:
    __conf = {
        "username": "",
        "password": "",
        "MYSQL_PORT": 3306,
        "MYSQL_DATABASE": 'mydb',
        "MYSQL_DATABASE_TABLES": ['tb_users', 'tb_groups']
    }
    __setters = ["username", "password"]

    @staticmethod
    def config(name):
        return App.__conf[name]

    @staticmethod
    def set(name, value):
        if name in App.__setters:
            App.__conf[name] = value
        else:
            raise NameError("Name not accepted in set() method")
And then usage is:
if __name__ == "__main__":
# from config import App
App.config("MYSQL_PORT") # return 3306
App.set("username", "hi") # set new username value
App.config("username") # return "hi"
App.set("MYSQL_PORT", "abc") # this raises NameError
You should like it because:
uses class variables (no object to pass around / no singleton required),
uses encapsulated built-in types and looks like (is) a method call on App,
has control over individual config immutability; mutable globals are the worst kind of globals,
promotes conventional and well-named access / readability in your source code,
is a simple class that still enforces structured access (an alternative is to use @property, but that requires more variable-handling code per item and is object-based),
requires minimal changes to add new config items and set their mutability.
--Edit--:
For large applications, storing values in a YAML (i.e. properties) file and reading that in as immutable data is a better approach (i.e. blubb/ohaal's answer).
For small applications, this solution above is simpler.
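A minimal sketch of that read-once, treat-as-immutable approach, assuming a settings.yaml file with a nested mysql mapping (both the file name and the keys are placeholders):

import yaml  # pip install PyYAML
from types import MappingProxyType

with open("settings.yaml") as f:
    _raw = yaml.safe_load(f)

# read-only view of the top-level mapping; nested dicts would need the
# same wrapping if you want deep immutability
CONFIG = MappingProxyType(_raw)

print(CONFIG["mysql"]["port"])
# CONFIG["mysql"] = {}  # raises TypeError: item assignment is not supported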
How about using classes?
# config.py
class MYSQL:
    PORT = 3306
    DATABASE = 'mydb'
    DATABASE_TABLES = ['tb_users', 'tb_groups']

# main.py
from config import MYSQL

print(MYSQL.PORT)  # 3306
Let's be honest: we should probably consider using a library maintained by the Python Software Foundation:
https://docs.python.org/3/library/configparser.html
Config example: (ini format, but JSON available)
[DEFAULT]
ServerAliveInterval = 45
Compression = yes
CompressionLevel = 9
ForwardX11 = yes
[bitbucket.org]
User = hg
[topsecret.server.com]
Port = 50022
ForwardX11 = no
Code example:
>>> import configparser
>>> config = configparser.ConfigParser()
>>> config.read('example.ini')
>>> config['DEFAULT']['Compression']
'yes'
>>> config['DEFAULT'].getboolean('MyCompression', fallback=True) # get_or_else
Making it globally-accessible:
import configparser

class App:
    __conf = None

    @staticmethod
    def config():
        if App.__conf is None:  # Read only once, lazy.
            App.__conf = configparser.ConfigParser()
            App.__conf.read('example.ini')
        return App.__conf

if __name__ == '__main__':
    App.config()['DEFAULT']['MYSQL_PORT']
    # or, better:
    App.config().get(section='DEFAULT', option='MYSQL_PORT', fallback=3306)
    ....
Downsides:
Uncontrolled global mutable state.
A small variation on Husky's idea that I use. Make a file called 'globals' (or whatever you like) and then define multiple classes in it, as such:
# globals.py
class dbinfo:       # for database globals
    username = 'abcd'
    password = 'xyz'

class runtime:
    debug = False
    output = 'stdio'
Then, if you have two code files c1.py and c2.py, both can have at the top
import globals as gl
Now all code can access and set values, as such:
gl.runtime.debug = False
print(gl.dbinfo.username)
People forget classes exist, even if no object is ever instantiated from them. And variables in a class that aren't preceded by 'self.' are shared across all instances of the class, even if there are none. Once 'debug' is changed by any code, all other code sees the change.
By importing it as gl, you can have multiple such files and variables that lets you access and set values across code files, functions, etc., but with no danger of namespace collision.
This lacks some of the clever error checking of other approaches, but is simple and easy to follow.
Similar to blubb's answer. I suggest building them with lambda functions to reduce code. Like this:
User = lambda passwd, hair, name: {'password': passwd, 'hair': hair, 'name': name}

#         Username  Password      Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          #...
         }
#...
config['st3v3']['password']  #> password
config['blubb']['hair']      #> black
This does smell like you may want to make a class, though.
Or, as MarkM noted, you could use namedtuple
from collections import namedtuple
#...
User = namedtuple('User', ['password', 'hair', 'name'])

#         Username  Password      Hair Color  Real Name
config = {'st3v3': User('password',   'blonde', 'Steve Booker'),
          'blubb': User('12345678',   'black',  'Bubb Ohaal'),
          'suprM': User('kryptonite', 'black',  'Clark Kent'),
          #...
         }
#...
config['st3v3'].password  #> password
config['blubb'].hair      #> black
I did that once. Ultimately I found my simplified basicconfig.py adequate for my needs. You can pass in a namespace with other objects for it to reference if you need to. You can also pass in additional defaults from your code. It also maps attribute and mapping style syntax to the same configuration object.
Please check out the IPython configuration system, implemented via traitlets, for the type enforcement you are doing manually.
Cut and pasted here to comply with SO guidelines against just dropping links, since the content behind links changes over time.
traitlets documentation
Here are the main requirements we wanted our configuration system to have:
Support for hierarchical configuration information.
Full integration with command line option parsers. Often, you want to read a configuration file, but then override some of the values with command line options. Our configuration system automates this process and allows each command line option to be linked to a particular attribute in the configuration hierarchy that it will override.
Configuration files that are themselves valid Python code. This accomplishes many things. First, it becomes possible to put logic in your configuration files that sets attributes based on your operating system, network setup, Python version, etc. Second, Python has a super simple syntax for accessing hierarchical data structures, namely regular attribute access (Foo.Bar.Bam.name). Third, using Python makes it easy for users to import configuration attributes from one configuration file to another.
Fourth, even though Python is dynamically typed, it does have types that can be checked at runtime. Thus, a 1 in a config file is the integer ‘1’, while a '1' is a string.
A fully automated method for getting the configuration information to the classes that need it at runtime. Writing code that walks a configuration hierarchy to extract a particular attribute is painful. When you have complex configuration information with hundreds of attributes, this makes you want to cry.
Type checking and validation that doesn’t require the entire configuration hierarchy to be specified statically before runtime. Python is a very dynamic language and you don’t always know everything that needs to be configured when a program starts.
To achieve this they basically define 3 object classes and their relations to each other:
1) Configuration - basically a ChainMap / basic dict with some enhancements for merging.
2) Configurable - base class to subclass all things you'd wish to configure.
3) Application - object that is instantiated to perform a specific application function, or your main application for single purpose software.
In their words:
Application: Application
An application is a process that does a specific job. The most obvious application is the ipython command line program. Each application reads one or more configuration files and a single set of command line options and then produces a master configuration object for the application. This configuration object is then passed to the configurable objects that the application creates. These configurable objects implement the actual logic of the application and know how to configure themselves given the configuration object.
Applications always have a log attribute that is a configured Logger. This allows centralized logging configuration per-application.
Configurable: Configurable
A configurable is a regular Python class that serves as a base class for all main classes in an application. The Configurable base class is lightweight and only does one thing.
This Configurable is a subclass of HasTraits that knows how to configure itself. Class level traits with the metadata config=True become values that can be configured from the command line and configuration files.
Developers create Configurable subclasses that implement all of the logic in the application. Each of these subclasses has its own configuration information that controls how instances are created.
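To make the Application/Configurable split concrete, here is a minimal sketch against the traitlets API (the Database class and its traits are invented for illustration; check the traitlets docs for the authoritative patterns):

from traitlets import Int, Unicode
from traitlets.config import Application, Configurable

class Database(Configurable):
    # traits tagged with config=True can be set from config files and the command line
    host = Unicode('localhost', help="database host").tag(config=True)
    port = Int(3306, help="database port").tag(config=True)

class MyApp(Application):
    classes = [Database]  # Configurables this application knows how to configure

    def start(self):
        # the master configuration object is handed to each Configurable
        db = Database(config=self.config)
        self.log.info("connecting to %s:%d", db.host, db.port)

if __name__ == '__main__':
    app = MyApp()
    app.initialize()  # parses the command line, e.g. --Database.port=3307
    app.start()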

How to dynamically create new python class files?

I currently have a CSV file with over 200 entries, where each line needs to be made into its own class file. These classes will be inheriting from a base class with some field variables that it will inherit and set values to based on the CSV file. Additionally, the name of the python module will need to be based off an entry of the CSV file.
I really don't want to manually make over 200 individual python class files, and was wondering if there was a way to do this easily. Thanks!
edit* I'm definitely more of a java/C# coder so I'm not too familiar with python.
Some more details: I'm trying to create an AI for an already existing web game, which I can extract live data from via a live stream text box.
There are over 200 moves that a player can use each turn, and each move is vastly different. I could possibly create new instances of a move class as it's being used, but then I would have to loop through a database of all the moves and their effects each time a move is used, which seems very inefficient. Hence, I was thinking of creating classes for every move, with the same name as it would appear in the text box, so that I could create new instances of that specific move more quickly.
As others have stated, you usually want to be doing runtime class generation for this kind of thing, rather than creating individual files.
But I thought: what if you had some good reason to do this, like just making class templates for a bunch of files, so that you could go in and expand them later? Say I plan on writing a lot of code, so I'd like to automate the boilerplate code parts, that way I'm not stuck doing tedious work.
Turns out writing a simple templating engine for Python classes isn't that hard. Here's my go at it, which is able to do templating from a csv file.
from os import path
from sys import argv
import csv

INIT = 'def __init__'

def csvformat(csvpath):
    """ Read a csv file containing a class name and attrs.
    Returns a list of the form [['ClassName', {'attr':'val'}]].
    """
    csv_lines = []
    with open(csvpath) as f:
        reader = csv.reader(f)
        _ = [csv_lines.append(line)
             for line in reader]
    result = []
    for line in csv_lines:
        attr_dict = {}
        attrs = line[1:]
        last_attr = attrs[0]
        for attr in attrs[1:]:
            if last_attr:
                attr_dict[last_attr] = attr
                last_attr = ''
            else:
                last_attr = attr
        result.append([line[0], attr_dict])
    return result

def attr_template(attrs):
    """ Format a list of default attribute setting code. """
    attr_list = []
    for attr, val in attrs.items():
        attr_list.append(str.format('        if {} is None:\n', attr, val))
        attr_list.append(str.format('            self.{} = {}\n', attr, val))
        attr_list.append('        else:\n')
        attr_list.append(str.format('            self.{} = {}\n', attr, attr))
    return attr_list

def import_template(imports):
    """ Import superclasses.
    Assumes the .py files are named based on the lowercased class name.
    """
    imp_lines = []
    for imp in imports:
        imp_lines.append(str.format('from {} import {}\n',
                                    imp.lower(), imp))
    return imp_lines

def init_template(attrs):
    """ Template a series of optional arguments based on a dict of attrs.
    """
    init_string = 'self'
    for key in attrs:
        init_string += str.format(', {}=None', key)
    return init_string

def gen_code(foldername, superclass, name, attrs):
    """ Generate python code in foldername.
    Uses superclass for the superclass, name for the class name,
    and attrs as a dict of {attr:val} for the generated class.
    Writes to a file with lowercased name as the name of the class.
    """
    imports = [superclass]
    pathname = path.join(foldername, name.lower() + '.py')
    with open(pathname, 'w') as pyfile:
        _ = [pyfile.write(imp)
             for imp
             in import_template(imports)]
        pyfile.write('\n')
        pyfile.write(str.format('class {}({}):\n', name, superclass))
        pyfile.write(str.format('    {}({}):\n',
                                INIT, init_template(attrs)))
        _ = [pyfile.write(attribute)
             for attribute
             in attr_template(attrs)]
        pyfile.write('        super().__init__()')

def read_and_generate(csvpath, foldername, superclass):
    class_info = csvformat(csvpath)
    for line in class_info:
        gen_code(foldername, superclass, *line)

def main():
    read_and_generate(argv[1], argv[2], argv[3])

if __name__ == "__main__":
    main()
The above takes a csvfile formatted like this as its first argument (here, saved as a.csv):
Magistrate,foo,42,fizz,'baz'
King,fizz,'baz'
Where the first field is the class name, followed by the attribute name and its default value. The second argument is the path to the output folder.
If I make a folder called classes and create a classes/mysuper.py in it with a basic class structure:
class MySuper():
    def __init__(*args, **kwargs):
        pass
And then run the code like this:
$ python3 codegen.py a.csv classes MySuper
I get the files classes/magistrate.py with the following contents:
from mysuper import MySuper
class Magistrate(MySuper):
    def __init__(self, fizz=None, foo=None):
        if fizz is None:
            self.fizz = 'baz'
        else:
            self.fizz = fizz
        if foo is None:
            self.foo = 42
        else:
            self.foo = foo
        super().__init__()
And classes/king.py:
from mysuper import MySuper
class King(MySuper):
    def __init__(self, fizz=None):
        if fizz is None:
            self.fizz = 'baz'
        else:
            self.fizz = fizz
        super().__init__()
You can actually load them and use them, too!
$ cd classes
classes$ python3 -i magistrate.py
>>> m = Magistrate()
>>> m.foo
42
>>> m.fizz
'baz'
>>>
The above generates Python 3 code, which is what I'm used to, so you will need to make some small changes for it to work in Python 2.
First of all, you don't have to separate Python classes into individual files - it is more common to group them by functionality into modules and packages (ref to What's the difference between a Python module and a Python package?). Furthermore, 200 similar classes sound like a quite unusual design - are they really needed, or could you e.g. use a dict to store some properties?
And of course you can just write a small Python script, read in the csv, and generate one or more .py files containing the classes (lines of text written to the file).
Should be just a few lines of code depending on the level of customization.
If this list changes, you don't even have to write the classes to a file: you can just generate them on the fly.
If you tell us how far you got or more details about the problem, we could help in completing the code...
Instead of generating .py files, read in the csv and do dynamic type creation. This way, if the csv changes, you can be sure that your types are dynamically regenerated.
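A hedged sketch of that dynamic-type-creation idea, reusing the a.csv layout from the answer above (the Move base class and the function names are illustrative):

import csv

class Move:
    # stand-in base class for the game moves
    def __init__(self, **attrs):
        for key, value in attrs.items():
            setattr(self, key, value)

def load_move_classes(csvpath, base=Move):
    """Build one class per csv row at runtime with type()."""
    classes = {}
    with open(csvpath) as f:
        for row in csv.reader(f):
            name, pairs = row[0], row[1:]
            defaults = dict(zip(pairs[::2], pairs[1::2]))
            classes[name] = type(name, (base,), defaults)
    return classes

# moves = load_move_classes('a.csv')
# m = moves['Magistrate']()
# print(m.foo)  # '42' (values stay strings unless you convert them)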

Serializing and Deserializing object with JSON

Is there a way or a library made to deserialize a JSON string into a typed object in ActionScript and Python?
For eg.
class Person
{
    String name;
    int age;
}

Person person = new Person("John", "22");
String jsonString = JSON.Serialize(person);
Person person2 = (Person) JSON.Deserialize(jsonString);
So, the last statement basically casts the object we get after deserializing the jsonString into the Person object.
I can only speak for Python. There is a built in library for JSON access, it can be viewed in the docs here.
Unfortunately, out of the box, you cannot serialize/deserialize objects, just dicts, lists and simple types. You have to write specific object encoders to do so. This is pretty much covered in the docs.
For AS3 you can use as3corelib by Mike Chambers.
https://github.com/mikechambers/as3corelib/tree/master/src/com/adobe/serialization/json
Edit: After some Googling I ended up back on SO at this question: Typed AS3 JSON Encoder and Decoder? It seems that there is a library for doing typed deserialization, but it is not totally robust and fails on some data types. If you think you can handle the restrictions then it might be the best option short of writing your own parser or getting into something heavy like BlazeDS.
http://code.google.com/p/ason/
Please try with this:
import json

class Serializer:
    @staticmethod
    def encode_obj(obj):
        # 'instance' is the type name of old-style class instances in Python 2;
        # in Python 3 you would typically just return obj.__dict__ here
        if type(obj).__name__ == 'instance':
            return obj.__dict__

    @staticmethod
    def serialize(obj):
        return json.dumps(obj, default=Serializer.encode_obj)

class TestClass:
    def __init__(self):
        self.a = 1

t = TestClass()
json_str = Serializer.serialize(t)
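For the other direction the question asks about (getting a typed object back from the JSON string), a minimal sketch is to unpack the decoded dict into the class constructor; Person and person_from_json below are illustrative names, not part of any library:

import json

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

def person_from_json(json_string):
    # json.loads returns a plain dict; ** unpacks it into the constructor
    return Person(**json.loads(json_string))

p = Person("John", 22)
s = json.dumps(p.__dict__)  # '{"name": "John", "age": 22}'
p2 = person_from_json(s)
assert isinstance(p2, Person) and p2.name == "John"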
Short answer: No there is not. JSON doesn't include typed objects except for a few such as Arrays. The as3Corelib does recognize these. But as you mentioned, you get back an Object with name value pairs. Since JSON does not contain your custom actionscript classes, there is no automatic way to convert a JSON object into a typed actionscript object.
The as3corelib is a great utility for JSON in flash. However the latest build of the flash player (version 10.3) includes JSON as a native data type.
But it is not very difficult to create a class with a constructor that takes the JSON object as an argument and parses it into the class variables. I have to do this all the time when working with the Facebook Graph API.
