Examples
JSON to NestedText
This example implements a command-line utility that converts a JSON file to
NestedText. It demonstrates the use of dumps() and NestedTextError.
#!/usr/bin/env python3
"""
Read a JSON file and convert it to NestedText.

usage:
    json-to-nestedtext [options] [<filename>]

options:
    -f, --force           force overwrite of output file
    -i <n>, --indent <n>  number of spaces per indent [default: 4]
    -s, --sort            sort the keys
    -w <n>, --width <n>   desired maximum line width; specifying enables
                          use of single-line lists and dictionaries as long
                          as they fit in the given width [default: 0]

If <filename> is not given, JSON input is taken from stdin and NestedText output
is written to stdout.
"""

from docopt import docopt
from inform import done, fatal, full_stop, os_error, warn
from pathlib import Path
import json
import nestedtext as nt
import sys

sys.stdin.reconfigure(encoding='utf-8')
sys.stdout.reconfigure(encoding='utf-8')

cmdline = docopt(__doc__)
input_filename = cmdline['<filename>']
try:
    indent = int(cmdline['--indent'])
except Exception:
    warn('expected positive integer for indent.', culprit=cmdline['--indent'])
    indent = 4
try:
    width = int(cmdline['--width'])
except Exception:
    warn('expected non-negative integer for width.', culprit=cmdline['--width'])
    width = 0

try:
    # read JSON content; from file or from stdin
    if input_filename:
        input_path = Path(input_filename)
        json_content = input_path.read_text(encoding='utf-8')
    else:
        json_content = sys.stdin.read()
    data = json.loads(json_content)

    # convert to NestedText
    nestedtext_content = nt.dumps(
        data,
        indent = indent,
        width = width,
        sort_keys = cmdline['--sort']
    )

    # output NestedText content; to file or to stdout
    if input_filename:
        output_path = input_path.with_suffix('.nt')
        if output_path.exists():
            if not cmdline['--force']:
                fatal('file exists, use -f to force over-write.', culprit=output_path)
        output_path.write_text(nestedtext_content, encoding='utf-8')
    else:
        sys.stdout.write(nestedtext_content + "\n")

except OSError as e:
    fatal(os_error(e))
except nt.NestedTextError as e:
    e.terminate()
except UnicodeError as e:
    fatal(full_stop(e))
except KeyboardInterrupt:
    done()
except json.JSONDecodeError as e:
    # create a nice error message with surrounding context
    msg = e.msg
    culprit = input_filename
    codicil = None
    try:
        lineno = e.lineno
        culprit = (culprit, lineno)
        colno = e.colno
        lines_before = e.doc.split('\n')[max(lineno-2, 0):lineno]
        lines = []
        for i, l in zip(range(lineno-len(lines_before), lineno), lines_before):
            lines.append(f'{i+1:>4}> {l}')
        lines_before = '\n'.join(lines)
        lines_after = e.doc.split('\n')[lineno:lineno+1]
        lines = []
        for i, l in zip(range(lineno, lineno + len(lines_after)), lines_after):
            lines.append(f'{i+1:>4}> {l}')
        lines_after = '\n'.join(lines)
        codicil = f"{lines_before}\n {colno*' '}▲\n{lines_after}"
    except Exception:
        pass
    fatal(full_stop(msg), culprit=culprit, codicil=codicil)
Be aware that not all JSON data can be converted to NestedText, and in the conversion much of the type information is lost.
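For instance, numbers and booleans in the JSON become plain text in the NestedText, and reading the result back yields only strings. A minimal sketch with made-up data:

import nestedtext as nt

# hypothetical data, for illustration only; default=str converts non-string
# scalars to text so they can be dumped
document = nt.dumps({'count': 42, 'enabled': True, 'ratio': 2.5}, default=str)
print(document)
#     count: 42
#     enabled: True
#     ratio: 2.5

# loading the result back yields strings rather than ints, bools, or floats
print(nt.loads(document))
#     {'count': '42', 'enabled': 'True', 'ratio': '2.5'}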
json-to-nestedtext can be used as a JSON pretty printer:
> json-to-nestedtext < fumiko.json
treasurer:
    name: Fumiko Purvis
    address:
        > 3636 Buffalo Ave
        > Topeka, Kansas 20692
    phone: 1-268-555-0280
    email: fumiko.purvis@hotmail.com
    additional roles:
        - accounting task force
NestedText to JSON
This example implements a command-line utility that converts a NestedText file
to JSON. It demonstrates the use of load() and NestedTextError.
#!/usr/bin/env python3
"""
Read a NestedText file and convert it to JSON.

usage:
    nestedtext-to-json [options] [<filename>]

options:
    -f, --force    force overwrite of output file
    -d, --dedup    de-duplicate keys in dictionaries

If <filename> is not given, NestedText input is taken from stdin and JSON output
is written to stdout.
"""

from docopt import docopt
from inform import done, fatal, os_error, full_stop
from pathlib import Path
import json
import nestedtext as nt
import sys

sys.stdin.reconfigure(encoding='utf-8')
sys.stdout.reconfigure(encoding='utf-8')

def de_dup(key, state):
    if key not in state:
        state[key] = 1
    state[key] += 1
    return f"{key} — #{state[key]}"

cmdline = docopt(__doc__)
input_filename = cmdline['<filename>']
on_dup = de_dup if cmdline['--dedup'] else None

try:
    if input_filename:
        input_path = Path(input_filename)
        data = nt.load(input_path, top='any', on_dup=on_dup)
        json_content = json.dumps(data, indent=4, ensure_ascii=False)
        output_path = input_path.with_suffix('.json')
        if output_path.exists():
            if not cmdline['--force']:
                fatal('file exists, use -f to force over-write.', culprit=output_path)
        output_path.write_text(json_content, encoding='utf-8')
    else:
        data = nt.load(sys.stdin, top='any', on_dup=on_dup)
        json_content = json.dumps(data, indent=4, ensure_ascii=False)
        sys.stdout.write(json_content + '\n')
except OSError as e:
    fatal(os_error(e))
except nt.NestedTextError as e:
    e.terminate()
except UnicodeError as e:
    fatal(full_stop(e))
except KeyboardInterrupt:
    done()
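As with json-to-nestedtext, this utility can be used as a filter. Assuming the NestedText shown above was saved to fumiko.nt, converting it back to JSON would look roughly like this; notice that every scalar, including the phone number, comes back as a string:

> nestedtext-to-json < fumiko.nt
{
    "treasurer": {
        "name": "Fumiko Purvis",
        "address": "3636 Buffalo Ave\nTopeka, Kansas 20692",
        "phone": "1-268-555-0280",
        "email": "fumiko.purvis@hotmail.com",
        "additional roles": [
            "accounting task force"
        ]
    }
}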
PyTest
This example highlights parametrize_from_file, a PyTest package that lets you neatly separate your test code from your test cases, with the test cases held in a NestedText file. Since test cases often contain code snippets, the ability of NestedText to hold arbitrary strings without the need for quoting or escaping results in very clean and simple test case specifications. Also, use of the eval function in the test code allows the fields in the test cases to be literal Python code.
The test cases:
# test_expr.nt

test_substitution:
    -
        given: first second
        search: ^\s*(\w+)\s*(\w+)\s*$
        replace: \2 \1
        expected: second first
    -
        given: 4 * 7
        search: ^\s*(\d+)\s*([-+*/])\s*(\d+)\s*$
        replace: \1 \3 \2
        expected: 4 7 *

test_expression:
    -
        given: 1 + 2
        expected: 3
    -
        given: "1" + "2"
        expected: "12"
    -
        given: pathlib.Path("/") / "tmp"
        expected: pathlib.Path("/tmp")
And the corresponding test code:
# test_misc.py

import parametrize_from_file
import re
import pathlib

@parametrize_from_file
def test_substitution(given, search, replace, expected):
    assert re.sub(search, replace, given) == expected

@parametrize_from_file
def test_expression(given, expected):
    assert eval(given) == eval(expected)
PostMortem
PostMortem is a program that generates a packet of information that is securely shared with your dependents in case of your death. Only the settings processing part of the package is shown here.
This example includes references, key normalization, and a different way to implement validation and conversion on a per-field basis with Voluptuous. References allow you to define some content once and insert that content in multiple places in the document. Key normalization allows the keys to be case insensitive and to contain white space even though the program that uses the data prefers the keys to be lower-case identifiers.
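For example, the key normalization used below converts a friendly key such as my GPG ids into the identifier my_gpg_ids; the following snippet simply restates the to_snake_case function that appears in the full code later:

def to_snake_case(key):
    # lower-case the key and join its words with underscores
    return '_'.join(key.strip().lower().split())

assert to_snake_case('my GPG ids') == 'my_gpg_ids'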
Here is a configuration file that Odin might use to generate packets for his wife and kids:
my GPG ids: odin@norse-gods.com
sign with: @ my gpg ids
name template: {name}-{now:YYMMDD}

estate docs:
    - ~/home/estate/trust.pdf
    - ~/home/estate/will.pdf
    - ~/home/estate/deed-valhalla.pdf

recipients:
    Frigg:
        email: frigg@norse-gods.com
        category: wife
        attach: @ estate docs
        networth: odin
    Thor:
        email: thor@norse-gods.com
        category: kids
        attach: @ estate docs
    Loki:
        email: loki@norse-gods.com
        category: kids
        attach: @ estate docs
Notice that estate docs is defined at the top level. It is not a PostMortem
setting; it simply defines a value that will be interpolated into settings
later. The interpolation is done by specifying @ along with the name of the
reference as a value. So for example, in recipients, attach is specified as
@ estate docs. This causes the list of estate documents to be used as
attachments. The same thing is done in sign with, which interpolates my gpg
ids.
Here is the code for validating and transforming the PostMortem settings. For more on report_voluptuous_errors, see Validate with Voluptuous.
#!/usr/bin/env python3

from inform import error, os_error, terminate
import nestedtext as nt
from pathlib import Path
from voluptuous import (
    Schema, Invalid, MultipleInvalid, Extra, Required, REMOVE_EXTRA
)
from voluptuous_errors import report_voluptuous_errors
from pprint import pprint

# Settings schema
# First define some functions that are used for validation and coercion
def to_str(arg):
    if isinstance(arg, str):
        return arg
    raise Invalid(f"expected text, found {arg.__class__.__name__}")

def to_ident(arg):
    arg = to_str(arg)
    if arg.isidentifier():
        return arg
    raise Invalid('expected simple identifier')

def to_list(arg):
    if isinstance(arg, str):
        return arg.split()
    if isinstance(arg, dict):
        raise Invalid(f"expected list, found {arg.__class__.__name__}")
    return arg

def to_paths(arg):
    return [Path(p).expanduser() for p in to_list(arg)]

def to_email(arg):
    email = to_str(arg)
    user, _, host = email.partition('@')
    if '.' in host and '@' not in host:
        return arg
    raise Invalid('expected email address')

def to_emails(arg):
    return [to_email(e) for e in to_list(arg)]

def to_gpg_id(arg):
    try:
        return to_email(arg)        # gpg ID may be an email address
    except Invalid:
        try:
            int(arg, base=16)       # if not an email, it must be a hex key
            assert len(arg) >= 8    # at least 8 characters long
            return arg
        except (ValueError, TypeError, AssertionError):
            raise Invalid('expected GPG id')

def to_gpg_ids(arg):
    return [to_gpg_id(i) for i in to_list(arg)]

def to_snake_case(key):
    return '_'.join(key.strip().lower().split())

# provide user-friendly error messages
voluptuous_error_msg_mappings = {
    "extra keys not allowed": ("unknown key", "key"),
    "expected a dictionary": ("expected key-value pairs", "value"),
}

# define the schema for the settings file
schema = Schema(
    {
        Required('my_gpg_ids'): to_gpg_ids,
        'sign_with': to_gpg_id,
        'avendesora_gpg_passphrase_account': to_str,
        'avendesora_gpg_passphrase_field': to_str,
        'name_template': to_str,
        Required('recipients'): {
            Extra: {
                Required('category'): to_ident,
                Required('email'): to_emails,
                'gpg_id': to_gpg_id,
                'attach': to_paths,
                'networth': to_ident,
            }
        },
    },
    extra = REMOVE_EXTRA
)

# this function implements references
def expand_settings(value):
    # allows macro values to be defined as a top-level setting.
    # allows macro reference to be found anywhere.
    if isinstance(value, str):
        value = value.strip()
        if value[:1] == '@':
            value = settings.get(to_snake_case(value[1:]))
        return value
    if isinstance(value, dict):
        return {k:expand_settings(v) for k, v in value.items()}
    if isinstance(value, list):
        return [expand_settings(v) for v in value]
    raise NotImplementedError(value)

def normalize_key(key, parent_keys):
    if parent_keys != ('recipients',):
        # normalize all keys except the recipient names
        return to_snake_case(key)
    return key

try:
    # Read settings
    config_filepath = Path('postmortem.nt')
    if config_filepath.exists():
        # load from file
        settings = nt.load(
            config_filepath,
            keymap = (keymap:={}),
            normalize_key = normalize_key
        )

        # expand references
        settings = expand_settings(settings)

        # check settings and transform to desired types
        settings = schema(settings)

        # show the resulting settings
        print(nt.dumps(settings, default=str))

except nt.NestedTextError as e:
    e.report()
except MultipleInvalid as e:
    report_voluptuous_errors(e, keymap, config_filepath)
except OSError as e:
    error(os_error(e))

terminate()
This code uses expand_settings to implement references, and it uses the
Voluptuous schema to clean and validate the settings and convert them to
convenient forms. For example, the user could specify attach as a string or
a list, and the members could use a leading ~ to signify a home directory.
Applying to_paths in the schema converts whatever is specified to a list and
converts each member to a pathlib path with the ~ properly expanded.
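For instance, both of the following calls would return the same list of Path objects (values are illustrative; the result of expanding ~ depends on the user running the program):

# to_paths is defined in the code above; these calls are shown for illustration
to_paths('~/home/estate/trust.pdf ~/home/estate/will.pdf')
to_paths(['~/home/estate/trust.pdf', '~/home/estate/will.pdf'])
# both return [Path('/home/odin/home/estate/trust.pdf'),
#              Path('/home/odin/home/estate/will.pdf')] for a user named odin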
Notice that the schema is defined in a different manner than in the Voluptuous example. In that example, you simply state which type you are expecting for the value and use the Coerce function to indicate that the value should be cast to that type if needed. In this example, simple functions are passed in that perform validation and coercion as needed. This is a more flexible approach that allows better control over the conversions and the error messages.
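The difference is easiest to see on a single hypothetical setting; the following sketch is not part of PostMortem, it simply contrasts the two styles:

from voluptuous import Coerce, Invalid, Schema

# style of the Voluptuous example: name the type, let Coerce cast to it
coerce_schema = Schema({'indent': Coerce(int)})

# style used here: a small function controls both the check and the message
def to_indent(arg):
    try:
        value = int(arg)
    except (ValueError, TypeError):
        raise Invalid('expected an integer')
    if value < 0:
        raise Invalid('expected a non-negative integer')
    return value

function_schema = Schema({'indent': to_indent})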
This code does not do anything useful; it just reads in and expands the information contained in the input file. It represents the beginnings of a program that would use the specified information to generate the postmortem reports. Here it simply prints the expanded information in the form of a NestedText document, which is easier to read than if it were pretty-printed as Python or JSON.
Here are the processed settings:
my_gpg_ids:
    - odin@norse-gods.com
sign_with: odin@norse-gods.com
name_template: {name}-{now:YYMMDD}
recipients:
    Frigg:
        email:
            - frigg@norse-gods.com
        category: wife
        attach:
            - ~/home/estate/trust.pdf
            - ~/home/estate/will.pdf
            - ~/home/estate/deed-valhalla.pdf
        networth: odin
    Thor:
        email:
            - thor@norse-gods.com
        category: kids
        attach:
            - ~/home/estate/trust.pdf
            - ~/home/estate/will.pdf
            - ~/home/estate/deed-valhalla.pdf
    Loki:
        email:
            - loki@norse-gods.com
        category: kids
        attach:
            - ~/home/estate/trust.pdf
            - ~/home/estate/will.pdf
            - ~/home/estate/deed-valhalla.pdf