b. Can be either "real", in which case it maps symbols to values,
or "fake", in which case it maps symbols to themselves, with the env's ID recorded as what's needed for progress
c. Chains up to an upper environment that may be fake or real
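As a concrete illustration, here is a minimal sketch of such an environment in Python. All names (`Env`, `lookup`, the `('sym', ...)` tag) are illustrative assumptions, not the actual implementation:

```python
# Illustrative sketch of "real" vs "fake" environments.
# A fake env knows which symbols it binds but not their values, so a lookup
# there returns the symbol itself, tagged with the env ID needed for progress.
class Env:
    _next_id = 0

    def __init__(self, names, values=None, parent=None):
        Env._next_id += 1
        self.id = Env._next_id
        self.parent = parent              # chain up to an upper env (real or fake)
        if values is None:                # "fake": names known, values unknown
            self.real = False
            self.bindings = {n: n for n in names}   # symbols map to themselves
        else:                             # "real": symbols map to values
            self.real = True
            self.bindings = dict(zip(names, values))

    def lookup(self, sym):
        env = self
        while env is not None:
            if sym in env.bindings:
                if env.real:
                    return env.bindings[sym]
                # fake: unresolved symbol, tagged with the env ID that is
                # needed-for-progress on this lookup
                return ('sym', sym, env.id)
            env = env.parent
        raise NameError(sym)
```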
2. AST nodes that maintain on-node:
a. The IDs of environments that, if "real", can be used to make progress in this subtree
b. The hashes of infinite recursive calls that were detected and stopped - if this hash isn't in the current call chain, this subtree can make progress
c. Extra IDs of environments that are "real" but have "fake" environments in their chain - this is used to make return-value checking fast (O(1) or O(log n), depending)
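The per-node bookkeeping above might look something like the following sketch (field and function names are assumptions for illustration):

```python
# Illustrative sketch of the per-AST-node analysis bookkeeping.
from dataclasses import dataclass, field

@dataclass
class ASTNode:
    children: list = field(default_factory=list)
    # env IDs that, if instantiated as "real", allow progress in this subtree
    needed_for_progress: set = field(default_factory=set)
    # hashes of recursive calls that were detected and cut off in this subtree
    blocked_call_hashes: set = field(default_factory=set)
    # "real" envs with "fake" envs in their chain (for fast return checking)
    real_envs_with_fake_in_chain: set = field(default_factory=set)

def can_make_progress(node, real_env_ids, call_chain_hashes):
    """A subtree can make progress if some needed env is now real, or if a
    previously blocked recursive call is no longer in the current call chain."""
    return bool(node.needed_for_progress & real_env_ids) or \
           bool(node.blocked_call_hashes - call_chain_hashes)
```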
3. Combiners, both user-defined and built-in, that maintain a "wrap level" that:
a. Is a property of this function value, *not* the function itself
* meaning that if wrap_level > 1, you can evaluate each parameter and decrement wrap_level, even if you can't execute the call
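A small sketch of that partial step, under assumed names (`Combiner`, `step_call`, `eval_fn` are illustrative, not the real API):

```python
# Illustrative sketch: wrap_level is a property of the function *value*, so a
# call with wrap_level > 1 can make progress by evaluating its parameters once
# and decrementing wrap_level, even when the call itself can't run yet.
from copy import copy

class Combiner:
    def __init__(self, wrap_level, body):
        self.wrap_level = wrap_level  # 0 = no param eval, 1 = function, 2+ = extra wraps
        self.body = body

def step_call(comb, params, eval_fn):
    """Evaluate each parameter once; return a copy of the combiner with
    wrap_level reduced by one (the original value is left untouched)."""
    assert comb.wrap_level > 1
    stepped = copy(comb)
    stepped.wrap_level -= 1
    return stepped, [eval_fn(p) for p in params]
```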
4. The return value of a combiner is checked for:
a. Whether it is a value, in which case it is good to be returned as long as it doesn't contain a reference to the envID of the function it is being returned from
b. Whether it is (veval something env), where env doesn't contain a reference to the envID of the function it is being returned from
c. Whether it is a call to a function (func params...), where func doesn't take in a dynamic environment and params... are all good to be returned
This makes it so that combiner calls can return partially-evaluated code - any macro-like combiner would calculate the new code and return
(eval <constructed-code> dynamic_env), which would do what partial evaluation it could and either become a value or a call like case "b" above.
Case "b" allows this code, essentially "tagged" with the environment it should be evaluated in, to be returned out of "macro-like" combiners,
and this dovetails with the next point.
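The three return-value cases above can be sketched as follows. This is an assumed shape for combiner_return_ok (mentioned again in the invariants below), with a made-up value representation: calls as ('call', func, params...), tagged code as ('veval', code, env), and environments as dicts with 'id' and 'parent':

```python
# Hypothetical sketch of the return-value check; the representation is assumed.
def env_chain_contains(env, env_id):
    while env is not None:
        if env['id'] == env_id:
            return True
        env = env['parent']
    return False

def contains_env_ref(value, env_id):
    if isinstance(value, dict):               # an environment value
        return env_chain_contains(value, env_id)
    if isinstance(value, tuple):
        return any(contains_env_ref(v, env_id) for v in value)
    return False

def combiner_return_ok(result, env_id):
    # (a) a plain value: ok if it doesn't reference the function's env
    if not isinstance(result, tuple):
        return not contains_env_ref(result, env_id)
    # (b) (veval code env): ok if env doesn't reference the function's env
    if result[0] == 'veval':
        return not contains_env_ref(result[2], env_id)
    # (c) a call (func params...): ok if func takes no dynamic environment
    #     and every param is itself good to be returned
    if result[0] == 'call':
        func = result[1]
        return (not getattr(func, 'takes_dynamic_env', False)
                and all(combiner_return_ok(p, env_id) for p in result[2:]))
    return not contains_env_ref(result, env_id)
```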
5. The (veval something env) form essentially "tags" a piece of code with the environment it should be evaluated in. At each stage where
it is possible, the system checks for redundant constructions like these, where the env in (veval something env) is the currently active env.
In this case, it unwraps it to just "something" and continues on - this completes the second half of the macro-like combiner evaluation where
after being returned to the calling function the code is essentially spliced in.
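The unwrapping step is tiny; a sketch under the same assumed ('veval', code, env) representation:

```python
# Illustrative sketch: a (veval code env) whose env is the currently active
# environment is redundant, so it unwraps to just the code, which splices the
# macro-expanded result into the calling context.
def drop_redundant_veval(expr, active_env):
    if (isinstance(expr, tuple) and len(expr) == 3
            and expr[0] == 'veval' and expr[2] is active_env):
        return expr[1]    # unwrap to just "something" and continue
    return expr           # anything else passes through untouched
```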
6. The compiler can emit if/else branches on the wrap_level of combiners and in each branch further compile/partial eval if appropriate, allowing
dynamic calls to either functions or combiners with the overhead of a single branch
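The emitted branch can be pictured like this sketch (Comb and call_dynamic are illustrative stand-ins for whatever the compiler actually emits):

```python
# Illustrative sketch: a dynamic call site reduced to a single branch on
# wrap_level; each arm can then be further compiled / partially evaluated.
from collections import namedtuple

Comb = namedtuple('Comb', ['wrap_level', 'fn'])

def call_dynamic(comb, raw_params, eval_fn):
    if comb.wrap_level == 0:
        # wrap level 0: combiner receives its parameters unevaluated
        return comb.fn(raw_params)
    # wrap level 1 (a function): parameters are evaluated first
    return comb.fn([eval_fn(p) for p in raw_params])
```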
Note that points 4&5 make it so that any macro written as a combiner in "macro-style" will be expanded just like a macro would and cause no runtime overhead!
Additionally, point 6 makes it so that functions (wrap level 1 combiners) and non-parameter-evaluating (wrap level 0) combiners can be dynamically passed around and called with very minimal overhead.
Combine them together and you get a simpler but more flexible semantics than macro-based (pure functional) languages with little-to-no overhead.
1. If you don't do the needed-for-progress tracking, you have exponential runtime
2. If you aren't careful about storing analysis information on the AST node itself or memoizing, a naive tree traversal of the DAG has exponential runtime
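The second pitfall in miniature (nodes modeled as plain Python lists of children; sharing a sublist makes the tree a DAG, and the `seen` set is the stand-in for on-node/memoized analysis state):

```python
# Illustrative sketch: without the seen-set, a node shared N levels deep gets
# re-traversed 2^N times; with it, each node is visited exactly once.
def analyze(node, seen=None):
    if seen is None:
        seen = set()
    if id(node) in seen:      # already analyzed: reuse, don't re-walk
        return 0
    seen.add(id(node))
    return 1 + sum(analyze(c, seen) for c in node)
```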
3. Infinite recursion can hide in sneaky places, including the interplay between the partial evaluator and the compiler, and careful use of multiple recursion blockers / memoization is needed to prevent all cases
4. The invariants needed to prevent mis-evaluation are non-trivial to get right. Our invariants:
a. All calls to user-combiners have the parameters as total values, thus not moving something that needs a particular environment underneath a different environment
b. All return values from functions must not depend on the function's environment (there are a couple of interesting cases here, see combiner_return_ok(func_result, env_id))
c. All array values are made up of total values
d. Some primitive combiners don't obey "a", but they must be written with extreme care, and often partially evaluate only some of their parameters and have to keep track of which.
(comb params...) if comb.wrap_level != -1 -> map drop_redundant_veval over params, and if any changed: partial_eval( (comb new_params...), dynamic_env, env_stack, memostuff)
The other key is that array only takes in values; that is, an array value never hides something that isn't a total value and needs more partial-evaluation
(this makes a lot of things simpler in other places since we can treat array values as values no matter what and know things aren't hiding in sneaky places)
...The vcond is like cond but doesn't do any evaling (as it's already been done) (and wrap_level is set to -1 so the function-call machinery doesn't touch the params either)