Linear time composable parser for PEG grammars.
frisby is a parser library that can parse arbitrary PEG grammars in linear time. Unlike other parsers of PEG grammars, frisby need not be supplied with all possible rules up front, allowing composition of smaller parsers.
PEG parsers are never ambiguous and allow infinite lookahead with no backtracking penalty. Since PEG parsers can look ahead arbitrarily, they can easily express rules such as the maximal munch rule used in lexers, meaning no separate lexer is needed.
In addition to many standard combinators, frisby provides routines to translate standard regex syntax into frisby parsers.
PEG based parsers have a number of advantages over other parsing strategies: they run in linear time, are never ambiguous, allow unlimited lookahead with no backtracking penalty, and need no separate lexer.
Traditionally, PEG parsers have suffered from two major flaws: a global table of all rules must be supplied up front, preventing the composition of independently written parsers, and a packrat parser's memoization tables consume memory proportional to the size of the input.
frisby attempts to address both of these concerns.
frisby parsers achieve composability by having a compilation pass. Recursive parsers are specified using the recursive do notation 'mdo', which builds up a description of your parser in which the recursive calls that need memoized entries are explicit. runPeg then takes this description and compiles it into a form that can be applied; during this compilation step it examines your composed parser and collects the global table of rules needed for a packrat parser to work.
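As a sketch of this workflow (assuming the combinators documented below; the names 'depth' and 'parens' are illustrative, not part of the library), a small recursive grammar might be described and compiled like this:

```haskell
{-# LANGUAGE RecursiveDo #-}
import Text.Parsers.Frisby

-- count the nesting depth of a run of balanced parentheses, e.g. "((()))"
depth :: String -> Int
depth = runPeg $ mdo
    -- the recursive reference to 'parens' goes through newRule, so the
    -- compilation pass can create a memoized entry for it
    parens <- newRule $ char '(' ->> parens <<- char ')' ## (+ 1) // unit 0
    return parens
```

The ordered choice falls through to unit 0 when no further open parenthesis matches, so the recursion bottoms out without any backtracking penalty.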
Memory consumption is much less of an issue on modern machines; tests show it is not a major concern. However, frisby uses a couple of techniques to reduce its impact. First, it attempts to create parsers that are as lazy as possible -- this means that no more of the file is read into memory than is needed, and, more importantly, memory used by the parser can be reclaimed as you process its output.
frisby also attempts to optimize your parser, using specialized strategies where possible to reduce the number of entries in your memoization tables.
frisby is also lazy in reading the results of parsers. Parsers tend to work by sending out 'feeler' predicates to get an idea of what the rest of the file looks like before deciding which path to take. frisby attempts to optimize these feeler predicates via extra laziness, so that they do not force the actual computation of the results, but rather compute just enough to determine whether the predicate would have succeeded.
(It is interesting to note that the memory efficiency of frisby depends vitally on being as lazy as possible, in contrast to conventional wisdom about memory consumption.)
frisby is a work in progress. It has a darcs repo at http://repetae.net/repos/frisby which may be browsed at http://repetae.net/dw/darcsweb.cgi?r=frisby;a=summary
And its homepage is at http://repetae.net/computer/frisby
|The basic types|
|The type of primitive parsers|
|data P s a|
|The monad used to create recursive parsers via rules|
|data PM s a|
|newRule :: P s a -> PM s (P s a)|
create a new rule, which may be used recursively and caches its results.
This is intended to be used in an 'mdo' block, such as the following.
additive = mdo
    additive  <- newRule $ multitive <> char '+' ->> additive ## uncurry (+) // multitive
    multitive <- newRule $ primary <> char '*' ->> multitive ## uncurry (*) // primary
    primary   <- newRule $ char '(' ->> additive <<- char ')' // decimal
    decimal   <- newRule $ many1 (oneOf ['0' .. '9']) ## read
    return additive
all recursive calls must be bound via a rule. Left recursion should be avoided.
|runPeg :: (forall s . PM s (P s a)) -> String -> a|
run a PEG grammar. takes a rank-2 argument in order to ensure that a rule created in one PM session isn't returned and used in another PEG parser.
there is no need for special error handling, as it can be trivially implemented via
    -- parse complete file, returning 'Nothing' if parse fails
    fmap Just (myParser <<- eof) // unit Nothing
there is also no need for the parser to return its unused input, as that can be retrieved via rest
    -- now this returns (a, String) where String is the unconsumed input
    myParser <> rest
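Putting the idioms above together, a complete parse of an input might look like the following sketch ('parseNumber' is an illustrative name, not part of the library):

```haskell
import Text.Parsers.Frisby

-- parse a decimal number, yielding Nothing unless the entire input matches
parseNumber :: String -> Maybe Int
parseNumber = runPeg $ do
    number <- newRule $ many1 (oneOf ['0' .. '9']) ## read
    return $ fmap Just (number <<- eof) // unit Nothing
```

The (<<- eof) forces the match to cover the whole input, and the ordered choice with unit Nothing turns a failed parse into a total function.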
|(//) :: P s a -> P s a -> P s a|
|ordered choice: try the left argument; if it fails, try the right one. This does not introduce any backtracking penalty.|
|(<>) :: P s a -> P s b -> P s (a, b)|
|match first argument, then match the second, returning both in a tuple|
|(<++>) :: P s [a] -> P s [a] -> P s [a]|
|match a pair of lists and concatenate them|
|(->>) :: P s a -> P s b -> P s b|
match first argument, then match the second, returning only the value on the right
x ->> y = x <> y ## snd
|(<<-) :: P s a -> P s b -> P s a|
match first argument, then match the second, returning only the value on the left
x <<- y = x <> y ## fst
|(//>) :: P s a -> a -> P s a|
|ordered choice: try the left argument; if it fails, return the right argument|
|(##) :: P s a -> (a -> b) -> P s b|
|map a parser through a function. a fancy version of fmap|
|(##>) :: P s a -> b -> P s b|
|Parse left argument and return the right argument|
|anyChar :: P s Char|
|match any character, fails on EOF|
|bof :: P s ()|
|succeed only at the beginning of the input|
|eof :: P s ()|
|succeed only at the end of the input|
|getPos :: P s Int|
|get current position in file as number of characters since the beginning.|
|char :: Char -> P s Char|
|match a specified character|
|noneOf :: [Char] -> P s Char|
|match any character other than the ones in the list|
|oneOf :: [Char] -> P s Char|
|match one of the set of characters|
|text :: String -> P s String|
|match some text|
|unit :: a -> P s a|
|return a value, always succeeds|
|rest :: P s String|
|immediately consume and return the rest of the input; equivalent to (many anyChar), but more efficient.|
|discard :: P s a -> P s ()|
throw away the result of something
discard p = p ->> unit ()
|parseFailure :: P s a|
|fails, is identity of (//) and unit of (<>)|
These predicates are how a frisby parser decides which path to take. Whereas a backtracking parser might try a path, then backtrack if it got it wrong, a frisby parser can look at all possible paths before deciding which one to take. This is what allows much of the power of packrat parsing: a parser is free to evaluate every alternative fully before committing to a particular path.
packrat parsers have no past, but can 'see' arbitrarily far into all of their potential futures; traditional monadic parsers can accumulate a history, but cannot see more than a token or two into the future, and evaluating multiple futures to any degree imposes a significant run-time penalty due to backtracking.
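For example, a lexer-style rule built from the predicates below can look past the current token before committing (a sketch; 'keywordIf' is an illustrative name):

```haskell
import Text.Parsers.Frisby

-- match the keyword "if", but not when it merely starts a longer
-- identifier such as "iffy"; doesNotMatch sends out a feeler
-- without consuming any input
keywordIf :: P s String
keywordIf = text "if" <<- doesNotMatch (oneOf ['a' .. 'z'])
```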
|peek :: P s a -> P s a|
|parse something and return it, but do not advance the input stream.|
|doesNotMatch :: P s a -> P s ()|
|succeeds when the argument does not.|
|isMatch :: P s a -> P s Bool|
|always succeeds, returning true if it consumed something|
|onlyIf :: P s a -> (a -> Bool) -> P s a|
|succeed only if thing parsed passes a predicate|
|matches :: P s a -> P s ()|
|succeeds when the argument does, but consumes no input; equivalent to matches p = discard (peek p)|
|many :: P s a -> P s [a]|
|parse many of something. behaves like * in regexes. this eats as much as it possibly can; if you want a minimal munch rule, use manyUntil, which stops as soon as its terminating parser would match|
|many1 :: P s a -> P s [a]|
|match one or more of something via maximal munch rule|
|manyUntil :: P s b -> P s a -> PM s (P s [a])|
|parse many of something via the minimal munch rule. behaves like *? in perl regexes. the final item is not consumed.|
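As a sketch of minimal munch (illustrative names, assuming the signatures above), the body of a C-style block comment should stop at the first "*/" rather than the last one:

```haskell
import Text.Parsers.Frisby

-- parse "/* ... */", keeping the text between the delimiters;
-- manyUntil stops before the first "*/" it sees, and the
-- terminator itself is then consumed explicitly
blockComment :: PM s (P s String)
blockComment = do
    body <- manyUntil (text "*/") anyChar
    return $ text "/*" ->> body <<- text "*/"
```

Note that manyUntil returns its parser inside PM, so it is built inside the grammar description just like a newRule binding.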
|various utility combinators|
|choice :: [P s a] -> P s a|
|first matching parse wins, a simple iteration of (//)|
|option :: a -> P s a -> P s a|
parse something if you can, else return first value
option a p = p // unit a
|optional :: P s a -> P s ()|
parse something if you can, discarding it.
optional p = discard p // unit ()
|regular expression syntax|
take a string in extended regex format and return a frisby parser that has the same behavior.
the behavior is slightly different from POSIX regular expressions: frisby regular expressions always follow the true maximal or minimal munch rules, rather than the (unhelpful and inefficient) greedy rule of POSIX regexes.
what this means is that something like x*x will never match, because the first x* will munch every x available, so the second x won't match.
minimal munching can be expressed as in perl: .*?y will grab everything up to the next y, whereas in POSIX it would grab everything up to the last y in the file.
these are much more natural semantics and much more efficient to implement.
|newRegex :: Monad m => String -> m (PM s (P s String))|
|create a new regular expression matching parser. it returns something in a possibly failing monad to indicate an error in the regular expression itself.|
|regex :: String -> PM s (P s String)|
|make a new regex but abort on an error in the regex string itself|
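A sketch of the regex interface (the name 'upToColon' is illustrative): since regex runs in PM, it composes with newRule bindings like any other rule, and minimal munch works as described above:

```haskell
import Text.Parsers.Frisby

-- minimal munch: match everything up to and including the first ':'
upToColon :: PM s (P s String)
upToColon = regex ".*?:"
```

Run with runPeg upToColon, this should stop at the first colon of the input rather than the last, per the perl-style *? semantics described above.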
|showRegex :: String -> IO ()|
|show a representation of the parsed regex, mainly for debugging|
|Produced by Haddock version 0.8|