Lisps generally provide two abstraction mechanisms: functions and macros. Functions operate at runtime and always
evaluate their parameters, while macros operate at compile time and do not evaluate their parameters. This
splits the language to a degree: macros cannot be used at runtime, though functions can usually be used
within macros, subject to various restrictions. Macro systems generally attempt to be hygienic, either preventing
manipulation of the environment of the code into which the macro invocation expands, or at least making such
manipulation difficult. Such manipulation is often needed, however, so various escape hatches can be implemented.
Creating a powerful, safe, and easy-to-use macro system is difficult, and the resulting systems are often
more complex than the base language in which they reside. Macros are also not first class: they cannot be passed
around as values and do not exist at all at runtime.
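To make the distinction concrete, consider a short-circuiting unless. The following is a minimal sketch in the
Kernel-style notation used for the remaining examples (the names unless-fn, done?, and expensive-recompute are
hypothetical). Because an ordinary function evaluates all of its parameters before its body runs, it cannot avoid
evaluating the body expression:
\begin{verbatim}
;; An ordinary function evaluates both operands before the call,
;; so (expensive-recompute x) runs even when (done? x) is true.
($define! unless-fn
  ($lambda (test body)
    ($if test #inert body)))

(unless-fn (done? x) (expensive-recompute x))
\end{verbatim}
A macro version avoids this by rewriting the call before it runs, but only by stepping outside the world of
runtime values.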
Vau and fexprs, as formulated by John Shutt \cite{shutt2010fexprs} (available at \url{https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/unrestricted/jshutt.pdf}),
provide a first-class and more powerful alternative to macros, unifying functions, macros, and built-in language forms
into a single concept called a combiner. A combiner may evaluate its arguments zero or more times,
and receives the calling environment as an additional parameter. There is also an eval function, which takes an expression to evaluate
and an environment in which to evaluate it. Functions, macros, and even built-in language constructs like if, cond, and let can be implemented
as either user-defined or built-in combiners, making both macros and what were previously Lisp special forms first class! They can be named,
passed to higher-order combiners, put into data structures, and so on.
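As a brief sketch of how this looks (using the notation of Shutt's Kernel report; Kraken's surface syntax may differ),
an operative created with \$vau receives its operands unevaluated together with the caller's dynamic environment, and
even \$lambda itself is derivable from \$vau, wrap, and eval:
\begin{verbatim}
;; $unless receives its operands unevaluated, plus the dynamic
;; environment env, and decides which operand to evaluate.
($define! $unless
  ($vau (test body) env
    ($if (eval test env)
         #inert
         (eval body env))))

;; $lambda itself is derivable: build an operative, then wrap it so
;; that its operands are evaluated before the call (following the
;; derivation in Shutt's Kernel report).
($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body) env))))
\end{verbatim}
Here \$unless and \$lambda are ordinary first-class values, so they can be rebound, passed to higher-order combiners,
or stored in data structures like any other object.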
On the other hand, naively executing a language that uses combiners instead of macros is exceedingly slow,
as the code of the fexpr (analogous to a macro invocation) is re-executed at runtime every time it is encountered.
Additionally, because it is unclear which code will be evaluated as a parameter to a function call and which code
must be passed unevaluated to a combiner, little optimization can be done. We address this problem with, to our knowledge,
the first partial evaluation system that can completely optimize away fexprs that are used and written in the style of macros,
as well as some other more naturally written combiners. Our language is more restricted than Shutt's Kernel language, being
purely functional and allowing no mutation, which makes the tracking of environments and the optimization of environment access tractable.
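To illustrate both the cost and the intended effect of partial evaluation, consider a call to the \$unless operative
defined earlier (an illustrative sketch only, not Kraken's literal output; ready? and log-retry are hypothetical names):
\begin{verbatim}
;; As written, every execution re-enters the body of $unless and
;; interprets its operands with eval in the caller's environment.
($unless (ready? x) (log-retry x))

;; A partial evaluator that can resolve $unless and the environments
;; involved can unfold the operative at compile time, leaving only
;; residual code with no fexpr call and no runtime eval:
($if (ready? x) #inert (log-retry x))
\end{verbatim}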
All code is available at \url{https://github.com/limvot/kraken}.
\item{} Axis of Eval list of 22 attempted implementations - \url{https://axisofeval.blogspot.com/2011/09/kernel-underground.html}\\
To our knowledge, none perform partial evaluation. We believe all have been abandoned or have link-rotted, with the seeming exception of \url{https://github.com/rocketnia/fexpress},
which takes a very different approach (Lisp-2, explicit apply form, etc.) in Racket.
\item{} Small Lambda the Ultimate discussion of partial evaluation for Vau/Kernel - \url{http://lambda-the-ultimate.org/node/4346}\\
\item{} Implementing a Vau-based Language With Multiple Evaluation Strategies - \cite{kearsleyimplementing}\\
Discusses how partial evaluation could make such a language efficient, but does not implement it.
\item{} Google Groups email thread by Andres Navarro - \url{https://groups.google.com/g/klisp/c/Dva-Le8Hr-g/m/pyl1Ufu-vksJ}\\
Andres Navarro describes his experimental fklisp, a ``very simple functional dialect of Kernel'' with no mutation or first-class continuations.
It does not compile anything, but prints out the partially evaluated expression. It was a work in progress, ran into performance problems, and appears to have been abandoned.