The post Building Fizzbuzz in Fractran from the Bottom Up appeared first on Macrology.

This post is broken up into three parts. The first part covers what Fractran is and a way of understanding what a Fractran program does. Part 2 will go over the foundation of the language we will build and how it will map to Fractran. Finally, in Part 3, we will keep adding new features to the language until it becomes easy to write Fizzbuzz in it.

Before we can start writing programs in Fractran, we have to first understand what Fractran is. A Fractran program is represented as just a list of fractions. To execute a Fractran program, you start with a variable N=2. You then go through the list of fractions until you find a fraction F, such that N*F is an integer. You then set N=N*F and go back to the beginning of the list of fractions. You keep repeating this process until there is no fraction F such that N*F is an integer.

Since there is no way to print anything with the regular Fractran rules, we are going to add one additional rule on top of the ordinary ones. In addition to the list of fractions, each program will have a mapping from numbers to characters representing the “alphabet” of the program. After multiplying N by F, whenever the new N is a multiple of one of the numbers in the alphabet, that will “print” the character that the number maps to. I have written a function, **run-fractran**, which implements this version of Fractran and included it here. It takes a list of fractions and an alphabet as an alist and executes the program.

Let’s walk through a simple example. Let’s say we have the following Fractran program:

```
9/2, 1/5, 5/3
```

with the alphabet 5->’a’. To run this program, we start with N=2. We then go through the list of fractions until we find a fraction F such that N*F is an integer. On this first step, F becomes 9/2, since N*F = 2 * 9/2 = 9, which is an integer. We then set N to N*F, so that N now becomes 9. Repeating this process again, we get F=5/3 and N=N*F=15. Since the number 5 is in the alphabet, and N is now a multiple of 5, we output the character that 5 maps to, ‘a’. If we keep repeating these steps, we eventually reach a point where N=1 and we have output the string “aa”. Since 1 times any of the fractions does not result in an integer, the program terminates with the output “aa”.
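The original **run-fractran** is written in Lisp, but the walkthrough above is easy to reproduce with a rough Python sketch of the same extended interpreter (this is an illustration, not the author's code; it assumes a dict for the alphabet in place of the alist):

```python
from fractions import Fraction

def run_fractran(fractions, alphabet, n=2):
    """Sketch of the extended Fractran interpreter described above:
    after each multiplication, if the new n is a multiple of a number
    in the alphabet, emit that number's character."""
    output = []
    while True:
        for f in fractions:
            if (n * f).denominator == 1:  # n * f is an integer
                n = int(n * f)
                for number, char in alphabet.items():
                    if n % number == 0:
                        output.append(char)
                break
        else:  # no fraction worked: the program halts
            return "".join(output)

print(run_fractran([Fraction(9, 2), Fraction(1, 5), Fraction(5, 3)],
                   {5: "a"}))  # prints "aa"
```

Running it on the example program traces exactly the sequence of N values described above and produces “aa”.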

At this point, you may be thinking that writing any program in Fractran is nearly impossible. The truth is that there is a simple trick that makes it much easier to program in Fractran. All you need to do is look at the prime factorization of all of the numbers. Let’s see what the above Fractran program looks like if we convert every number into a tuple (a,b,c), where a is how many times 2 divides the number, b is how many times 3 does, and c is how many times 5 does. The program then becomes:

```
(0, 2, 0) / (1, 0, 0)
(0, 0, 0) / (0, 0, 1)
(0, 0, 1) / (0, 1, 0)
```

We also have the tuple (0,0,1) mapping to ‘a’ for our alphabet. We start with N = (1,0,0). If you don’t know, multiplying two numbers is the same as adding the exponents of each prime factor, and division is the same as subtracting them. For example, 2 * 6 = (1,0,0) + (1,1,0) = (2,1,0) = 12. With this way of looking at the program, finding a fraction F such that N*F is an integer becomes finding a “fraction” F such that each element in the tuple N is greater than or equal to the corresponding element in the tuple in the denominator of F. Once we find such an F, instead of multiplying N by it, we subtract from each element of N the corresponding value in the denominator of F (equivalent to dividing by the denominator) and add the corresponding value in the numerator (equivalent to multiplying by the numerator). Executing the program with this interpretation proceeds as follows.

We start with N = (1,0,0). Since every value in N is greater than or equal to their corresponding values in the denominator of the first fraction, we subtract every value in the first denominator and then add every value in the numerator to get N = (1,0,0) – (1,0,0) + (0,2,0) = (0,2,0). Repeating this again, F becomes the third fraction. Subtracting the denominator and adding the numerator gets us N = (0,1,1). Then since every value in N is greater than or equal to their corresponding element in (0,0,1), we print ‘a’. The program continues, just like it did for the original Fractran program.
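A short Python sketch makes the tuple interpretation concrete (an illustration using the example program; the tuples are the ones listed above):

```python
# The example program as (numerator, denominator) exponent tuples
# over the primes (2, 3, 5).
program = [((0, 2, 0), (1, 0, 0)),   # 9/2
           ((0, 0, 0), (0, 0, 1)),   # 1/5
           ((0, 0, 1), (0, 1, 0))]   # 5/3

def step(n, program):
    """One Fractran step on an exponent tuple: find the first fraction
    whose denominator is covered by n, subtract the denominator, and
    add the numerator. Returns None when the program halts."""
    for numerator, denominator in program:
        if all(a >= b for a, b in zip(n, denominator)):
            return tuple(a - b + c
                         for a, b, c in zip(n, denominator, numerator))
    return None

n = (1, 0, 0)          # N = 2
n = step(n, program)   # (0, 2, 0), i.e. N = 9
n = step(n, program)   # (0, 1, 1), i.e. N = 15
print(n)
```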

Basically we can think of every prime number as having a “register” which can take on non-negative integer values. Each fraction is an instruction that operates on some of the registers. You can interpret a fraction as saying: if the current value of each register is greater than or equal to the value specified by the denominator (the number of times the prime for that register divides the denominator), you subtract from the registers all of the values in the denominator, add all the values specified in the numerator (the number of times the prime for each register divides the numerator), and then jump back to the first instruction. Otherwise, if any register is less than the value specified in the denominator, continue to the next fraction. For example, the fraction 9/2 can be translated into the following pseudocode:

```
;; If the register corresponding to the prime number 2
;; is greater or equal to 1
if reg[2] >= 1
  ;; Decrement it by 1 and increment the register
  ;; corresponding to 3 by 2.
  reg[2] = reg[2] - 1
  reg[3] = reg[3] + 2
  goto the beginning of the program
;; Otherwise continue with the rest of the program.
```

Although programming Fractran is still difficult, this technique suddenly makes writing Fizzbuzz in Fractran tractable.

For our compiler, we are going to need to generate a lot of primes. To do so, we will use a function, **new-prime**, which will generate a different prime each time it is called.^{1}

```lisp
(defun prime (n)
  "Is N a prime number?"
  (loop for i from 2 to (isqrt n)
        never (zerop (mod n i))))

(defparameter *next-new-prime* nil)

(defun new-prime ()
  "Returns a new prime we haven't used yet."
  (prog1 *next-new-prime*
    (setf *next-new-prime*
          (loop for i from (+ *next-new-prime* 1)
                if (prime i)
                  return i))))
```

So now that we’ve got **new-prime**, we can start figuring out how we are going to compile to Fractran. The first detail we will need to figure out is how to express control flow in Fractran. In other words, we need a way to specify which fraction executes after which. This is a problem because after a fraction executes, you always jump back to the first fraction.

Expressing control flow actually winds up being surprisingly easy. For each fraction we can designate a register. Then, we only execute a fraction if its register is set. It is easy to have a fraction conditionally execute depending on whether its register is set by using the trick we are using to interpret a Fractran program. All we need to do is multiply the denominator of each fraction by the prime for the register of that fraction. This way, we will pass over a fraction unless its register is set. Also, all we need to do to specify which fraction should execute after a given fraction is to multiply the numerator of the given fraction by the prime of the register for the next fraction. By doing this, after a fraction executes, it will set the register of the next fraction.
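To see the sequencing trick in action, here is a small Python check (an illustration with arbitrarily chosen primes, not the ones the compiler below will assign): two “add” fractions are chained together through instruction registers, so they fire exactly once each, in order.

```python
from fractions import Fraction

# Hypothetical register assignment: prime 2 for the first instruction
# (set for free, since N starts at 2), 3 for the second, 5 for
# "halted", and 7 for a data register.
inst1, inst2, halted, r = 2, 3, 5, 7

program = [Fraction(inst2 * r**1, inst1),   # inst1: add 1 to r, go to inst2
           Fraction(halted * r**2, inst2)]  # inst2: add 2 to r, then stop

n = 2
while True:
    for f in program:
        if (n * f).denominator == 1:
            n = int(n * f)
            break
    else:
        break

reg = 0
while n % r == 0:  # read register 7 back out of n
    n //= r
    reg += 1
print(reg)  # the two instructions ran once each: 1 + 2 = 3
```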

In order to keep track of the primes for the current fraction and for the next fraction, we will have two global variables. The first will be the prime number for the current instruction, and the second will be the prime number for the next instruction:

```lisp
(defparameter *cur-inst-prime* nil)
(defparameter *next-inst-prime* nil)
```

We will also need a function **advance** which will advance the values of the variables once we move on to the next instruction.

```lisp
(defun advance ()
  (setf *cur-inst-prime* *next-inst-prime*
        *next-inst-prime* (new-prime)))
```

Now that we’ve got a way of expressing control flow, we can start planning out what the language we will build will look like. From this point on, I am going to call the language we are building Lisptran. An easy way to represent a Lisptran program is as just a list of expressions. We can have several different kinds of expressions, each of which does something different.

The simplest kind of expression we will want is an inline fraction. If a Lisptran expression is just a fraction, we can just add that fraction to the Fractran program being generated.

Another kind of expression that will be useful is labels. Whenever a Lisptran expression is a Lisp symbol, we can interpret that as a label. Each label will be converted into a fraction: the prime of the instruction after the label divided by the prime of the label. This way we can jump to the instruction after the label by setting the register for the label. In order to make keeping track of the primes of labels easy, we are going to keep a hash-table, **lisptran-labels**, mapping from labels to the primes for those labels. We will also have a function **prime-for-label**, which will look up the prime for a label, or assign a new prime if one hasn’t been assigned yet:

```lisp
(defparameter *lisptran-labels* nil)

(defun prime-for-label (label)
  (or (gethash label *lisptran-labels*)
      (setf (gethash label *lisptran-labels*)
            (new-prime))))
```

One last kind of expression that will be useful is macro calls. A macro call will be a list whose first element is the name of a macro, followed by arbitrary Lisp expressions. (The expressions don’t have to be Lisptran expressions; they can be interpreted however the macro wants them to be.) In order to compile a macro call, we will look up the function associated with the macro and call it on the expressions in the rest of the macro call. That function should then return a list of Lisptran expressions which will be compiled in place of the macro call. After that, we just continue compiling the new code generated by the macro expansion.

To keep track of the definitions of macros, we will keep a hash-table **lisptran-macros**, which will map from the name of the macro to the function for that macro. In order to make defining Lisptran macros easy, we can create a Lisp macro **deftran**, that works in a similar way to **defmacro**. When defining a macro with **deftran**, you are really just defining a function which will take the expressions in the macro call, and return a list of Lisptran instructions to be compiled in its place. Here is the definition for **deftran**:

```lisp
(defparameter *lisptran-macros* (make-hash-table))

(defmacro deftran (name args &body body)
  "Define a Lisptran macro."
  `(setf (gethash ',name *lisptran-macros*)
         (lambda ,args ,@body)))
```

And that’s all of the different kinds of expressions we will need in Lisptran.

Although we now have all of the expressions we need, there are a few more pieces of the compiler we need to figure out. For example, we still haven’t figured out how we are going to represent variables yet. Ultimately this is trivial. We can just assign a register to every variable and keep a mapping from variable names to primes in the same way we have the mapping for labels:

```lisp
(defparameter *lisptran-vars* nil)

(defun prime-for-var (var)
  (or (gethash var *lisptran-vars*)
      (setf (gethash var *lisptran-vars*)
            (new-prime))))
```

One last piece of the compiler we need to figure out is how we are going to represent the alphabet of the program. One way we can do this is just represent the characters in our alphabet as variables. The alphabet of a program could just be all of the variables that have characters for names and the primes of the registers for those variables. By doing it this way, we can print a character by just incrementing and then immediately decrementing a variable! Here is code that can be used to obtain the alphabet from **lisptran-vars**:

```lisp
(defun alphabet (vars)
  "Given a hash-table of the Lisptran variables to primes,
   returns an alist representing the alphabet."
  (loop for var being the hash-keys in vars
          using (hash-value prime)
        if (characterp var)
          collect (cons var prime)))
```

Now that we can express control flow, variables, and macros, we have everything we need to write the actual Lisptran to Fractran compiler:

```lisp
(defun assemble (insts)
  "Compile the given Lisptran program into Fractran. Returns
   two values. The first is the Fractran program and the
   second is the alphabet of the program."
  (let* ((*next-new-prime* 2)
         (*cur-inst-prime* (new-prime))
         (*next-inst-prime* (new-prime))
         (*lisptran-labels* (make-hash-table))
         (*lisptran-vars* (make-hash-table)))
    (values (assemble-helper insts)
            (alphabet *lisptran-vars*))))

(defun assemble-helper (exprs)
  (if (null exprs)
      '()
      (let ((expr (car exprs))
            (rest (cdr exprs)))
        (cond
          ;; If it's a number, we just add it to the
          ;; Fractran program and compile the rest
          ;; of the Lisptran program.
          ((numberp expr)
           (cons expr (assemble-helper rest)))
          ;; If it's a symbol, we divide the prime for
          ;; the next instruction by the prime for the
          ;; label.
          ((symbolp expr)
           (cons (/ *cur-inst-prime* (prime-for-label expr))
                 (assemble-helper rest)))
          ;; Otherwise it's a macro call. We look up the
          ;; macro named by the first symbol in the
          ;; expression and call it on the rest of the
          ;; expressions in the macro call. We then append
          ;; all of the instructions returned by it to the
          ;; rest of the program and compile that.
          (:else
           (let ((macrofn (gethash (car expr) *lisptran-macros*)))
             (assemble-helper
              (append (apply macrofn (cdr expr)) rest))))))))
```

The function **assemble** takes a Lisptran program and returns two values. It returns the generated Fractran program and the alphabet of that program. **assemble** first initializes all of the global variables for the program and then goes to **assemble-helper** which recursively processes the Lisptran program according to the specification above. Using the function **run-fractran** that I mentioned above, we can write a function that will execute a given Lisptran program as follows:

```lisp
(defun run-lisptran (insts)
  "Run the given Lisptran program."
  (multiple-value-call #'run-fractran (assemble insts)))
```

Now that we’ve completed the core compiler, we can start adding actual features to it. From here on out, we will not touch the core compiler. All we are going to do is define a couple of Lisptran macros. Eventually we will have enough macros that programming in Lisptran feels like programming a high level assembly language.

The first operations we should define are basic arithmetic operations, starting with addition. In order to add addition to Lisptran, we can define a macro **addi**, which stands for add immediate. Immediate just means that we know what number we are adding at compile time. The macro **addi** will take a variable and a number, and will expand into a fraction which adds the given number to the register for the variable. In this case, the denominator of the fraction will just be the prime for the current instruction (execute this instruction when that register is set) and the numerator will be the prime for the next instruction (execute the next instruction after this one) times the prime for the variable raised to the power of the number we are adding (add the immediate to the register). Here is what the definition for **addi** looks like:

```lisp
(deftran addi (x y)
  (prog1 (list (/ (* *next-inst-prime*
                     (expt (prime-for-var x) y))
                  *cur-inst-prime*))
    (advance)))
```

We are also going to want an operation that performs subtraction. It’s a bit tricky, but we can implement a macro **subi** (subtract immediate) in terms of **addi**, since subtracting a number is the same as adding its negative:^{2}

```lisp
(deftran subi (x y)
  `((addi ,x ,(- y))))
```

Now that we’ve got some macros for performing basic arithmetic, we can start focusing on macros that allow us to express control flow. The first control flow macro we will implement is **>=i** (jump if greater than or equal to immediate). In order to implement **>=i**, we will have it expand into three fractions. The first fraction will test if the variable is greater or equal to the immediate. If the test succeeds, we will then advance to the second fraction which will restore the variable (since when a test succeeds, all of the values from the denominator are decremented from the corresponding registers), and then jump to the label passed in to **>=i**. If the test fails, we will fall through to the third fraction which will just continue onto the next fraction after that.

The denominator of the first fraction will be the prime for the current instruction (execute the instruction if that register is set) times the prime for the register raised to the power of the constant (how we test that the register is greater than or equal to the immediate), and the numerator will be the prime for the second instruction (so we go to the second instruction if the test succeeds). The second fraction is just the prime for the label passed into **>=i** (so we jump to wherever the label designates) divided by the prime for that instruction. Lastly, the denominator of the third fraction is the prime for the current instruction (so we fall through to it if the test in the first fraction fails), and the numerator is just the prime for the next instruction so that we continue to that if the test fails:

```lisp
(deftran >=i (var val label)
  (prog1 (let ((restore (new-prime)))
           (list (/ restore
                    (expt (prime-for-var var) val)
                    *cur-inst-prime*)
                 (/ (* (prime-for-label label)
                       (expt (prime-for-var var) val))
                    restore)
                 (/ *next-inst-prime* *cur-inst-prime*)))
    (advance)))
```
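As a sanity check on this three-fraction construction, the fractions can be built and run by hand in a Python sketch (an illustration with arbitrarily chosen primes, not the ones **new-prime** would produce):

```python
from fractions import Fraction

# Arbitrary primes for the sketch: 2 = current instruction, 3 = the
# "restore" step, 11 = the jump label, 5 = the next instruction, and
# 7 = the variable being tested. The immediate is 2.
cur, restore, label, nxt, v = 2, 3, 11, 5, 7
val = 2

geq = [Fraction(restore, v**val * cur),    # test: consume v^val and cur
       Fraction(label * v**val, restore),  # restore v and set the label
       Fraction(nxt, cur)]                 # test failed: fall through

def run(n, fractions):
    while True:
        for f in fractions:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return n

print(run(2 * 7**3, geq))  # reg 7 is 3 >= 2: jump taken, v intact (11 * 7**3)
print(run(2 * 7**1, geq))  # reg 7 is 1 <  2: fall through (5 * 7**1)
```

In both cases the variable's register ends up unchanged, and exactly one of the label or next-instruction registers is set.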

Believe it or not, after this point we won’t need to think about fractions ever again. Lisptran now has enough of a foundation that all of the further macros we will need can be expressed in terms of **addi**, **subi**, and **>=i**. The only two macros that actually need to be implemented in terms of raw Fractran are **addi** and **>=i**. That means no more thinking about Fractran. From here on out, all we have is Lisptran!

We can easily define unconditional goto in terms of **>=i**. Since all of the registers start at 0, we can implement **goto** as greater than or equal to zero. We use the Lisp function **gensym** to generate a variable without a name so that the variable doesn’t conflict with any other Lisptran variables:

```lisp
(deftran goto (label)
  `((>=i ,(gensym) 0 ,label)))
```

Then through a combination of **>=i** and **goto**, we can define **<=i**:

```lisp
(deftran <=i (var val label)
  (let ((gskip (gensym)))
    `((>=i ,var ,(+ val 1) ,gskip)
      (goto ,label)
      ,gskip)))
```

Now that we have several macros for doing control flow, we can start building some utilities for printing. As mentioned previously, printing a character is the same as incrementing the variable with the character as its name and then immediately decrementing it:

```lisp
(deftran print-char (char)
  `((addi ,char 1)
    (subi ,char 1)))
```

Then if we want to write a macro that prints a string, it can just expand into a series of calls to **print-char**, each of which prints a single character in the string:

```lisp
(deftran print-string (str)
  (loop for char across str
        collect `(print-char ,char)))
```

We are also going to need a function to print a number. Writing this with the current state of Lisptran is fairly difficult since we haven’t implemented several utilities such as mod yet, but we can start by implementing a macro **print-digit** that prints the value of a variable that is between 0 and 9. We can implement it by having it expand into a series of conditions. The first one will check if the variable is less than or equal to zero. If so, it will print the character zero and jump past the rest of the conditions. Otherwise it falls through to the next condition, which tests if the variable is less than or equal to one, and so on. We don’t have to manually write the code for **print-digit** because we can use Lisp to generate the code for us:

```lisp
(deftran print-digit (var)
  (loop with gend = (gensym)
        for i from 0 to 9
        for gprint = (gensym)
        for gskip = (gensym)
        append `((<=i ,var ,i ,gprint)
                 (goto ,gskip)
                 ,gprint
                 (print-char ,(digit-char i))
                 (goto ,gend)
                 ,gskip)
          into result
        finally (return `(,@result ,gend))))
```

At this point, now that we have macros for performing basic arithmetic, basic control flow, and printing, we can start writing some recognizable programs. For example here is a program that prints the numbers from zero to nine:

```lisp
(start
 (>=i x 10 end)
 (print-digit x)
 (print-char #\newline)
 (addi x 1)
 (goto start)
 end)
```

If you are curious I have included the Fractran program generated by this Lisptran program here. It’s hard to believe that the above Lisptran program and the Fractran program are equivalent. They look completely different!

Now that we have a bunch of low level operations, we can start building some higher level ones. You may not have thought of it, but instructions don’t need to just have flat structure. For example, now that we have **goto**, we can use it to define while loops (just like in Loops in Lisp):

```lisp
(deftran while (test &rest body)
  (let ((gstart (gensym))
        (gend (gensym)))
    `((goto ,gend)
      ,gstart
      ,@body
      ,gend
      (,@test ,gstart))))
```

In order to implement **while**, we are assuming that all predicates take labels as their last argument which is where they will jump to if the predicate succeeds. Now that we have while loops, we can start writing some much more powerful macros around manipulating variables. Here’s two useful ones, one that sets a variable to zero, and one that copies the value in one variable to another:

```lisp
(deftran zero (var)
  `((while (>=i ,var 1)
      (subi ,var 1))))

(deftran move (to from)
  (let ((gtemp (gensym)))
    `((zero ,to)
      (while (>=i ,from 1)
        (addi ,gtemp 1)
        (subi ,from 1))
      (while (>=i ,gtemp 1)
        (addi ,to 1)
        (addi ,from 1)
        (subi ,gtemp 1)))))
```

For **move**, we first have to decrement the variable we are moving from and increment a temporary variable. Then we restore both the original variable and the variable we are moving the value to at the same time.

With all of these macros, we can finally start focusing on macros that are actually relevant to Fizzbuzz. One operation that is absolutely going to be necessary for Fizzbuzz is mod. We can implement a macro **modi** by repeatedly subtracting the immediate until the variable is less than the immediate.

```lisp
(deftran modi (var val)
  `((while (>=i ,var ,val)
      (subi ,var ,val))))
```

We only need one more real feature before we can start writing Fizzbuzz. We are going to need a way of printing numbers. In order to print an arbitrary number, we are going to need a way of doing integer division. We can implement a macro **divi** by repeatedly subtracting the immediate until the variable is less than the immediate and keeping track of the number of times we’ve subtracted the immediate.

```lisp
(deftran divi (x y)
  (let ((gresult (gensym)))
    `((zero ,gresult)
      (while (>=i ,x ,y)
        (addi ,gresult 1)
        (subi ,x ,y))
      (move ,x ,gresult))))
```
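In ordinary Python, the two expansions amount to nothing more than repeated subtraction (an illustrative sketch, not part of the compiler):

```python
def modi(x, val):
    # modi: subtract val until x is less than val.
    while x >= val:
        x -= val
    return x

def divi(x, val):
    # divi: count how many times val can be subtracted.
    result = 0
    while x >= val:
        x -= val
        result += 1
    return result

print(modi(100, 15), divi(100, 15))  # -> 10 6
```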

Now for the final macro we will need. A macro for printing numbers. Actually, we are going to cheat a little. Printing numbers winds up being pretty difficult since you have to print the digits from left to right, but you can only look at the lowest digit at a time. To make things easier, we are only to write a macro that is able to print two digit numbers. We won’t need to print 100 since “buzz” will be printed instead.

```lisp
(deftran print-number (var)
  (let ((gtemp (gensym))
        (gskip (gensym)))
    `((move ,gtemp ,var)
      (divi ,gtemp 10)
      (<=i ,gtemp 0 ,gskip)
      (print-digit ,gtemp)
      ,gskip
      (move ,gtemp ,var)
      (modi ,gtemp 10)
      (print-digit ,gtemp)
      (print-char #\newline))))
```

Now our language is high-level enough that Fizzbuzz is going to be practically as easy as it will get. Here is an implementation of Fizzbuzz in Lisptran.

```lisp
((addi x 1)
 (while (<=i x 100)
   (move rem x)
   (modi rem 15)
   (<=i rem 0 fizzbuzz)
   (move rem x)
   (modi rem 3)
   (<=i rem 0 fizz)
   (move rem x)
   (modi rem 5)
   (<=i rem 0 buzz)
   (print-number x)
   (goto end)
   fizzbuzz
   (print-string "fizzbuzz")
   (goto end)
   fizz
   (print-string "fizz")
   (goto end)
   buzz
   (print-string "buzz")
   (goto end)
   end
   (addi x 1)))
```
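For reference, the same control flow reads like this in Python (an illustration of the logic, not generated code); note that the order of the tests — 15, then 3, then 5 — matters:

```python
def fizzbuzz_line(x):
    # Mirrors the branch order of the Lisptran program above.
    if x % 15 == 0:
        return "fizzbuzz"
    if x % 3 == 0:
        return "fizz"
    if x % 5 == 0:
        return "buzz"
    return str(x)

print("\n".join(fizzbuzz_line(x) for x in range(1, 101)))
```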

I’ve also included the generated Fractran program here and included all of the full source code for this blog post here.

I find it absolutely amazing that we were able to build a pretty decent language by repeatedly adding more and more features on top of what we already had. To recap, we implemented a basic arithmetic operation (**addi**) in terms of raw Fractran and then defined a second (**subi**) in terms of that. From there we defined three macros for doing control flow (**>=i**, **goto**, **<=i**), with the second two being defined in terms of the first. Then we were able to define macros for printing (**print-char**, **print-string**, **print-digit**). At this point we had all of the low level operations we needed, so we could implement while loops (**while**), a high level control flow construct. With while loops, we were able to define several macros for manipulating variables (**zero**, **move**). With these new utilities for manipulating variables we could define more advanced arithmetic operations (**modi**, **divi**). Then with these new operations we were able to define a way to print an arbitrary two digit number (**print-number**). Finally, using everything we had up to this point, we were able to write Fizzbuzz. It’s just incredible that we could make a language by always making slight abstractions on top of the operations we already had.


The post How to Generate Self-Referential Programs appeared first on Macrology.

Before we can start generating self-referential programs we are first going to need some techniques for generating programs in general. The first technique we need is a method of taking a given program and writing a second program that outputs the given program. As an example, given (+ 2 2), we would need to write a program that outputs (+ 2 2). In most languages this is easy. One way to do it in Lisp is to put a quote in front of the program:

```lisp
'(+ 2 2)
=> (+ 2 2)
```

We are also going to need a function that automates this process. Such a function would take a program as its argument and return a new program that, when run, outputs the program that was originally passed to the function. In most languages doing this is fairly tricky. In Lisp, we can write this function easily through backquote:

```lisp
(defun code-that-generates (program)
  `',program)

(code-that-generates '(+ 2 2))
=> '(+ 2 2)
```

If you don’t understand how backquote works, you can read this. Even though it’s for Emacs Lisp, everything there is still applicable to other Lisps. Just make sure that you understand that **code-that-generates** can be used to generate a program that outputs a given program.

Now that we have these two techniques, we can begin writing programs that are able to refer to themselves. The first self-referential program we will write will be an example of a *quine*. If you don’t know, a quine is a program that outputs its own source code. The quine we are going to write is made up of two parts, part A and part B, where part A is a function that is applied to part B:

(A B)

To describe how the quine works, it is easiest to start with part B. All that part B needs to do is return the source code of part A:

(A 'A)

Part A’s job is to take its own source code, and use it to obtain the source code of the entire quine. Since B is a program that outputs A, A can use **code-that-generates** on its own source code in order to obtain the source code of B. Once A has the source code of both A and B, it becomes trivial to combine the two to obtain the source code of the entire quine. Here is the complete quine, with the call to **code-that-generates** inlined:

```lisp
((lambda (a)
   (let ((b `',a))
     `(,a ,b)))
 '(lambda (a)
    (let ((b `',a))
      `(,a ,b))))
=> ((lambda (a)
      (let ((b `',a))
        `(,a ,b)))
    '(lambda (a)
       (let ((b `',a))
         `(,a ,b))))
```
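The same A/B split works outside of Lisp, too. In Python, for instance, `repr` can play the role of quote (an illustrative analogue, not a translation of the Lisp quine):

```python
# A two-line Python quine with the same structure: the string s is
# part B, the program's own text as data; the format expression
# "s % s" is part A, which splices the data into a copy of itself
# to rebuild the whole source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running these two lines prints exactly these two lines.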

Now this is where things start getting interesting. A quine can be thought of as a program that generates its own source code and immediately returns it. What if, instead of immediately returning its own source code, the quine applied a function to it first and returned the result? The steps for building such a program are almost exactly the same as the steps we took for building the quine. This time, there is a third part F, for the function we want to call. The structure of the program will look like the following:

(F AB)

Where AB has a similar structure to our quine. After breaking AB into the two parts, A and B, the program looks like the following:

(F (A B))

Part B in the above program has the same responsibilities as B in the quine, it returns the source code for A:

(F (A 'A))

Then once A has the source code for itself, it can use **code-that-generates** to obtain the source code for B. Now that it has the source of A and B, it is easy for it to construct AB. Once part A has the code for AB, it can easily generate the source of the entire program. Here is what the program becomes after filling in everything except F:

```lisp
(F ((lambda (a)
      (let ((b `',a))
        (let ((ab `(,a ,b)))
          `(F ,ab))))
    '(lambda (a)
       (let ((b `',a))
         (let ((ab `(,a ,b)))
           `(F ,ab))))))
```

What makes this so awesome is that F can be any function we want, and the above program will run F with the source code of the entire program! For example, replacing F with **identity** causes the program to become a quine:

```lisp
(identity ((lambda (a)
             (let ((b `',a))
               (let ((ab `(,a ,b)))
                 `(identity ,ab))))
           '(lambda (a)
              (let ((b `',a))
                (let ((ab `(,a ,b)))
                  `(identity ,ab))))))
=> (identity ((lambda (a)
                (let ((b `',a))
                  (let ((ab `(,a ,b)))
                    `(identity ,ab))))
              '(lambda (a)
                 (let ((b `',a))
                   (let ((ab `(,a ,b)))
                     `(identity ,ab))))))
```

But we can also do some much more impressive things. We can replace F with a function that lists its argument twice, and get a program that returns a list containing its own source code twice:

```lisp
((lambda (x) (list x x))
 ((lambda (a)
    (let ((b `',a))
      (let ((ab `(,a ,b)))
        `((lambda (x) (list x x)) ,ab))))
  '(lambda (a)
     (let ((b `',a))
       (let ((ab `(,a ,b)))
         `((lambda (x) (list x x)) ,ab))))))
=> (((lambda (x) (list x x))
     ((lambda (a)
        (let ((b `',a))
          (let ((ab `(,a ,b)))
            `((lambda (x) (list x x)) ,ab))))
      '(lambda (a)
         (let ((b `',a))
           (let ((ab `(,a ,b)))
             `((lambda (x) (list x x)) ,ab))))))
    ((lambda (x) (list x x))
     ((lambda (a)
        (let ((b `',a))
          (let ((ab `(,a ,b)))
            `((lambda (x) (list x x)) ,ab))))
      '(lambda (a)
         (let ((b `',a))
           (let ((ab `(,a ,b)))
             `((lambda (x) (list x x)) ,ab)))))))
```

To make writing these self-referential programs easier, we can define a function that fills in F for us. It just requires a little nested backquote trickery.^{1}

```lisp
(defun self-referential-version-of (f)
  `(,f ((lambda (a)
          (let ((b `',a))
            (let ((ab `(,a ,b)))
              `(,',f ,ab))))
        '(lambda (a)
           (let ((b `',a))
             (let ((ab `(,a ,b)))
               `(,',f ,ab)))))))

(self-referential-version-of '(lambda (x) (list x x)))
=> ((lambda (x) (list x x))
    ((lambda (a)
       (let ((b `',a))
         (let ((ab `(,a ,b)))
           `(,'(lambda (x) (list x x)) ,ab))))
     '(lambda (a)
        (let ((b `',a))
          (let ((ab `(,a ,b)))
            `(,'(lambda (x) (list x x)) ,ab))))))
```
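The same payload-carrying construction can be sketched in Python (a loose analogue for illustration, not a translation: it assumes the payload source is a one-argument function containing no '%' characters):

```python
def self_referential_version_of(f_src):
    # f_src: source text of a one-argument function, with no '%' in it.
    # The returned program binds it to f, stores its own template in t,
    # and calls f on the program's reconstructed source, t % t.
    t = 'f = ' + f_src + '\nt = %r\nf(t %% t)'
    return t % t

# Filling in print for f turns the program into a quine, just as
# identity did in the Lisp version:
program = self_referential_version_of('print')
exec(program)  # prints its own source
```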

Now that we’ve got a function that can generate self-referential programs for us, I am going to show you how to build something called a *quine-relay*. A quine-relay is like a normal quine, except it passes through multiple languages. The *quine-relay* we are going to write is a Lisp program that outputs a C program that outputs the original Lisp program. All we have to do is write a function that takes its argument and writes a C program that prints the argument it was given. Then we can pass that function to **self-referential-version-of** to get the quine-relay! That’s it! Here is a program that will generate the quine-relay:

```lisp
(self-referential-version-of
 '(lambda (self)
    (format t "#include <stdio.h>~%int main(){printf(\"%s\",~(~s~));}"
            (remove #\newline (prin1-to-string self)))))
```

I’ve omitted the actual quine-relay for brevity, but you can find it here if you are curious. There are a few idiosyncrasies in the above program and in the quine-relay because of the differences in behavior between Lisp and C. For example, in C you can’t have multi-line strings, so it is easier to remove all of the newlines from the Lisp program than it is to keep them.

And that’s all it takes to write self-referential programs. After seeing how easy it is to generate a quine-relay, it shouldn’t be hard to imagine how to write one with many more steps. You may even be able to get up to 100 if you work at it long enough.

The post How to Generate Self-Referential Programs appeared first on Macrology.


]]>One of the many advantages of programming in a functional style (by this, I mean manipulating your data through the operations map, fold, and filter) is that your program winds up being made up of a bunch of tiny, composable pieces. Since each piece is so small, usually only a few lines, it becomes trivial to unit test the entire program. Additionally, it is easy to express new features as just the composition of several existing functions. One disadvantage of programming through map and friends is that there is a fairly large time penalty for allocating the intermediate results. For example, every time filter is called on a list, a new list needs to be allocated. These costs add up pretty quickly and can make a functional program much slower than its imperative equivalent.
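To make the cost concrete, here is a small illustration of this style in Common Lisp; the function name is made up for this example. Each of **mapcar** and **remove-if-not** allocates a fresh intermediate list before **reduce** ever sees a single element:

```lisp
;; Summing the even squares of a list in a functional style.
;; Both intermediate lists below are allocated and then thrown away.
(defun sum-even-squares (xs)
  (reduce #'+                                    ; fold
          (remove-if-not #'evenp                 ; filter (new list)
                         (mapcar (lambda (x) (* x x)) ; map (new list)
                                 xs))))

(sum-even-squares '(1 2 3 4)) ; => 20
```

For a four-element list this hardly matters, but in a long pipeline over a large sequence, every stage pays for a full copy.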

One solution to this problem is laziness. Instead of allocating a new list every time an operation is performed on a list, you keep track of all of the transformations made on the list. Then, when you fold over the list, you perform all of the transformations as you are folding over it. By doing this, you don’t need to allocate intermediate lists. Although laziness doesn’t allocate any intermediate lists, there is still a small cost for keeping track of the laziness. An alternative solution that makes functional programming just as fast as imperative programming is provided by the Series library.^{1} Series lets you write your program in a functional style without *any* runtime penalty at all!

Personally, the Series library is my favorite example of the magic that can be pulled off with macros. In short, Series works by taking your functional code and compiling it down into a single loop. In this loop, there is one step per transformation performed on the original list. The loop iterates over the values of the original sequence one at a time. On each iteration, the loop takes a single element, applies all of the transformations to that element, and then accumulates the result into the final value according to the folding operation. This loop requires no additional memory allocation at runtime, and there is no time penalty either! As an example, here is a program that sums the first N squares, written using Series:

(defun integers ()
  "Returns a 'series' of all of the natural numbers."
  (declare (optimizable-series-function))
  (scan-range :from 1))

(defun squares ()
  "Returns a 'series' of all of the square numbers."
  (declare (optimizable-series-function))
  (map-fn t (lambda (x) (* x x)) (integers)))

(defun sum-squares (n)
  "Returns the sum of the first N square numbers."
  (collect-sum (subseries (squares) 0 n)))

(sum-squares 10)
=> 385

The above code certainly looks functional, there are no side effects in sight. Now let’s look at the code generated by Series. Here is what the macroexpansion of **collect-sum** looks like:

(common-lisp:let* ((#:out-969 n))
  (common-lisp:let ((#:numbers-966
                      (coerce-maybe-fold (- 1 1) 'number))
                    #:items-967
                    (#:index-965 -1)
                    (#:sum-959 0))
    (declare (type number #:numbers-966)
             (type (integer -1) #:index-965)
             (type number #:sum-959))
    (tagbody
     #:ll-970
       (setq #:numbers-966 (+ #:numbers-966 (coerce-maybe-fold 1 'number)))
       (setq #:items-967 ((lambda (x) (* x x)) #:numbers-966))
       (incf #:index-965)
       (locally (declare (type nonnegative-integer #:index-965))
         (if (>= #:index-965 #:out-969) (go end))
         (if (< #:index-965 0) (go #:ll-970)))
       (setq #:sum-959 (+ #:sum-959 #:items-967))
       (go #:ll-970)
     end)
    #:sum-959))

What Series does is look at the entire lifetime of the sequence, from its creation until it is folded. It uses this information to build the above loop, which simultaneously generates the original sequence, maps over it, filters elements out of it, and folds it into the final result. Here is the breakdown of the expansion. Lines 1-9 are just initialization. They define all of the variables the loop will be using and set them to their starting values. The important variables to keep track of are #:NUMBERS-966, #:ITEMS-967, and #:SUM-959. As the code “iterates” over the original sequence, #:NUMBERS-966 is the current value of the original sequence, #:ITEMS-967 is the square of that value, and #:SUM-959 is the sum of the squares so far. The rest of the code is the actual loop.

The loop first takes #:NUMBERS-966, the previous value of the sequence, and increments it to get the current value of the sequence (since the sequence is the range from 1 to infinity). Next, the loop squares #:NUMBERS-966 to get the ith square number and stores that in #:ITEMS-967. Then the loop checks if it has taken more than N elements out of the sequence, and if so, terminates. Finally, the loop takes the value in #:ITEMS-967 and accumulates it into #:SUM-959.

The imperative version is equivalent to the original functional code, but it is much faster than the functional code would be if it allocated intermediate results or used laziness. This idea of turning transformations on a list into a loop doesn’t just work for this simple example; it also works for much more complicated programs. I just find it incredible that Series is able to take such pretty code and compile it into code that is extremely fast.

The post Loops in Lisp Part 4: Series appeared first on Macrology.


]]>The Iterate library is pretty awesome. It provides a macro **iterate** (and an alias for it, **iter**) that is basically a Lispy version of **loop**. The most obvious consequence of this is that **iterate** uses a lot more parens than **loop** does:

;; Loop code
(loop for i from 1 to 10
      collect i)

;; Iterate code
(iter (for i from 1 to 10)
      (collect i))

Even though all of the extra parens make **iterate** much uglier than **loop**, they give **iterate** all of the advantages of Lisp syntax. One such advantage is the ability to embed **iterate** clauses within Lisp code and vice versa. While you can’t do this with **loop**, you can do it with **iterate** because the syntax of **iterate** is so similar to the syntax of ordinary Lisp code. Here is what happens when you try to embed a **collect** clause within Lisp code with **loop** and with **iterate**:^{1}

;; Not valid loop code.
(loop for i from 1 to 10
      do (when (evenp i)
           (collect i)))

;; Valid iterate code
(iter (for i from 1 to 10)
      (when (evenp i)
        (collect i)))

Although the ability to seamlessly go between Lisp code and **iterate** is pretty awesome, the greatest feature provided by **iterate** is also the entire reason why Lisp syntax has so many parens in the first place. Lisp syntax (and by extension **iterate**) makes it easy to write macros! Because of this, you can add pretty much any feature you want to **iterate**. As a simple example, here’s how you could define an **iterate** clause specifically for looping over the digits of a number:^{2}

(defun digits (n)
  "Returns a list of the digits of N."
  (map 'list #'digit-char-p (princ-to-string n)))

(defmacro-clause (for var in-digits-of n)
  `(for ,var in (digits ,n)))

And here is how you would use it:

(iter (for i in-digits-of 123) (sum i)) => 6

I cannot express how awesome this is. If you want an **iterate** clause for iterating over SQL queries, you can add it. If you want an **iterate** clause for looping over your own custom data structure, you can add it. You can add any feature you want all because **iterate** allows for the use of macros!
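As another sketch of a custom data-structure clause, here is what a clause for looping over the entries of a plist might look like, assuming the Iterate library is loaded. The helper **plist-pairs** and the clause name **in-plist** are made up for this example; **defmacro-clause** itself comes from Iterate:

```lisp
;; A helper that turns a plist into a list of (key . value) conses.
(defun plist-pairs (plist)
  (loop for (k v) on plist by #'cddr
        collect (cons k v)))

;; A hypothetical Iterate clause built on that helper.
(defmacro-clause (for var in-plist plist)
  `(for ,var in (plist-pairs ,plist)))

(iter (for pair in-plist '(:a 1 :b 2))
      (collect pair))
;; => ((:A . 1) (:B . 2))
```

The pattern is always the same: write an ordinary function or macro that produces something Iterate already knows how to drive, then wrap it in a clause with **defmacro-clause**.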

Personally, I prefer to use **iterate** over **loop**. Even though it is uglier, it is much more extensible than **loop** because it uses a Lisp-like syntax.

The post Loops in Lisp Part 3: Iterate appeared first on Macrology.


]]>The **loop** macro is probably the most well known of all macros. It provides a DSL for performing any kind of iteration imaginable. To give you an idea of just how powerful **loop** is, here are the first two Project Euler problems, solved using just **loop**:

;; Solution for problem #1.
(loop for i from 1 below 1000
      if (or (= 0 (mod i 3)) (= 0 (mod i 5)))
        sum i)

;; Solution for problem #2.
(loop for a = 1 then (+ a b)
      and b = 0 then a
      while (< a 4000000)
      if (evenp a)
        sum a)

The coolest part of **loop** is that it is just a macro! That means it would be possible to build **loop** in Common Lisp, even if it wasn’t provided as a builtin (here is one such implementation). That also means any **loop** code is eventually compiled down to goto! For example, here is the expansion of the solution to the first Project Euler problem:

(block nil
  (let ((i 1))
    (declare (type (and real number) i))
    (let ((#:loop-sum-2482 0))
      (declare (type number #:loop-sum-2482))
      (tagbody
       sb-loop::next-loop
         (if (>= i '1000)
             (progn (go sb-loop::end-loop))
             nil)
         (if (let ((#:g2483 (= 0 (mod i 3))))
               (if #:g2483
                   #:g2483
                   (the t (= 0 (mod i 5)))))
             (setq #:loop-sum-2482 (+ #:loop-sum-2482 i)))
         (setq i (1+ i))
         (go sb-loop::next-loop)
       sb-loop::end-loop
         (return-from nil #:loop-sum-2482)))))

If you look carefully, the expansion is nothing more than a mix of a few gotos and conditionals. Also, even though the generated code is a complete mess, you are able to work with it through the interface provided by **loop**. Even though **loop** is fairly complex, it is still much simpler than raw gotos. If you think about it, **loop** is really just a convenient way of specifying a combination of patterns of gotos and conditionals.

I don’t have much to add about **loop** that others haven’t already said. If you are looking for a basic introduction to **loop** you should read Peter Seibel’s guide which can be found here. If you are looking for a more complete reference, check out the **loop** chapter in Common Lisp the Language which can be found here.

While all of the features of **loop** compose well with each other, they do not compose well with the rest of Common Lisp. You cannot embed a **loop** clause (e.g. collect) within ordinary Lisp code. That brings us to what will be next week’s topic, **iterate**. **Iterate** is basically a more lispy version of **loop**. It allows you to seamlessly go between **iterate** clauses and regular Lisp code. More importantly, **iterate** allows you to define macros that then become part of the **iterate** DSL!

The post Loops in Lisp Part 2: Loop appeared first on Macrology.


]]>Goto is extremely powerful. It lets you manipulate the control flow of your program in any way you can think of. This freedom to do whatever you want is also what makes goto so dangerous. In any given piece of code that uses goto, it is difficult to tell what the purpose of the goto is because it could be used for so many different reasons. Because of this, most languages provide various kinds of builtin loops instead of providing raw goto. Even though loops aren’t as general as goto, they express the intention of the code much more clearly.

As an example, let’s say you want to print all of the characters in a file. If your language provided while loops, you could do this by printing characters from the file one at a time while there are more characters left. If Common Lisp had while loops,^{1} the code for this procedure would look like this:

(while (peek-char file nil nil) (write-char (read-char file)))

If your language only had goto, it becomes much more difficult to implement the procedure. In the end, you have to, in some way, simulate a while loop. One way to code the procedure with just goto is the following. First check if there are any characters left in the file. If there aren’t any, goto the end. Otherwise print the next character and go back to the start. Here is Common Lisp code that implements this:^{2}

(tagbody
 start
   (if (not (peek-char file nil nil))
       (go end))
   (write-char (read-char file))
   (go start)
 end)

Not only is the version with goto much more verbose, it is also much harder to understand. The code lacks clarity because goto is so general. It gives you no context into how it is being used. The reader of the code will have to think about the positioning of all of the gotos before they can think about the overall flow of the program. On the other hand, in the version with the while loop, merely the fact that a while loop is being used gives whoever is reading the code a decent idea of the control flow.

In reality all loops are eventually compiled down to gotos. Whenever the compiler for a language that provides loops sees a loop, it generates code that simulates the loop through goto. You can do the same thing with Lisp macros!

If you don’t know, Lisp macros are compile time functions which take code as their input and return code as their output. When Lisp code is being compiled, all of the macros in the code are called and each one is replaced with its result. This means you can write a macro that looks like a while loop when you use it, but at compile time generates code to simulate a while loop through goto. You are in effect adding while loops to the Lisp compiler! Here is code that defines such a macro:

(defmacro while (test &body body)
  (let ((gtop (gensym))
        (gend (gensym)))
    `(tagbody
      ,gtop
        (if (not ,test) (go ,gend))
        ,@body
        (go ,gtop)
      ,gend)))

With this macro, the first code example is now valid lisp code! The **while** macro takes as arguments a test and a body. It then generates code that uses the method used in the second example to simulate a while loop with goto. You can actually see what the first example looks like after expanding the macro by using the function **macroexpand**. Here is what the generated code looks like:

(tagbody
 #:g729
   (if (not (peek-char file nil nil))
       (go #:g730))
   (write-char (read-char file))
   (go #:g729)
 #:g730)

The generated code is the exact same as the code in the second example except for the names of the labels. This means the two examples are the same functionally! The only real difference between them is that the first one is expressed in terms of loops, and the second one is expressed in terms of goto. Since it is so much easier to think in terms of loops than goto, there is no reason why you wouldn’t use the first example over the second.

Macros allow you to build any feature you want as long as it is possible to simulate that feature through lower level features. With respect to goto, this means you can build any kind of control flow construct you want by simulating it with goto and then putting a macro on top. In Common Lisp, all of the looping constructs (**do**, **do***, **dotimes**, **dolist**, **loop**) are really just macros that expand into goto. This is what Alan Kay meant when he said “Lisp isn’t a language, it’s a building material”. It bears repeating. In Lisp, you can build any feature you want as long as it is possible to simulate that feature in terms of lower level features.
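As a taste of how those builtin constructs could be rebuilt, here is a sketch of a **dotimes**-style macro written directly on top of **tagbody** and **go**, in the same spirit as the **while** macro from earlier in this post. The name **dotimes%** is used to avoid shadowing the builtin:

```lisp
;; A toy DOTIMES built from goto. COUNT is evaluated once and bound
;; to a gensym; VAR counts up from 0 until it reaches that bound.
(defmacro dotimes% ((var count) &body body)
  (let ((gtop (gensym)) (gend (gensym)) (gcount (gensym)))
    `(let ((,gcount ,count)
           (,var 0))
       (tagbody
          ,gtop
          (if (>= ,var ,gcount) (go ,gend))
          ,@body
          (incf ,var)
          (go ,gtop)
          ,gend))))

(let ((total 0))
  (dotimes% (i 5)
    (incf total i))
  total)
;; => 10
```

Expanding a **dotimes%** call with **macroexpand** shows the same kind of tagbody/goto skeleton as the **while** expansion above.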

The post Loops in Lisp Part 1: Goto appeared first on Macrology.

]]>(let ((table (make-hash-table)))
  (defun fib (n)
    (or (gethash n table)
        (setf (gethash n table)
              (if (<= 0 n 1)
                  n
                  (+ (fib (- n 1)) (fib (- n 2))))))))

There are a couple of problems with the above code. One problem is the boilerplate. If you wanted ten different memoized functions, you would have to copy lines 1, 3, and 4 for every single memoized function. Some people like to call programmers who engage in this kind of needless duplication “human compilers”, since they are writing code that the compiler should be writing for them.

Another issue with the above code is the lack of abstraction. If you wanted to change the caching mechanism to say, only cache the last hundred values, you would have to change the definition of every single function! Ideally you would only need to modify the code in one place in order to change how the caching is implemented.

**Defmemo** is one way to solve both of these problems. Here is what the above code would look like if it were to use **defmemo**:

(defmemo fib (n)
  (if (<= 0 n 1)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

**Defmemo** solves both of the problems extremely well. It removes all of the differences between the memoized version and the regular version except for having to use “defmemo” instead of “defun”. **Defmemo** also solves the abstraction problem by moving all of the code relevant to memoization into the body of **defmemo**. If you want to change how memoization works, all you have to do is change the code for **defmemo**.

Now for the implementation of **defmemo**. The implementation is made up of two separate parts. First, a higher order function, **memo**, which takes a function as an argument, and returns a memoized version of that function. The second part is the actual macro, **defmemo**. Instead of just defining the function like **defun**, **defmemo** first builds a lambda expression for the body. Then it generates code that calls **memo** on that lambda function. Finally **defmemo** uses the result of **memo** as the implementation of the function being defined.^{2}

Here is the code for **memo**:^{3}^{4}

(defun memo (f)
  (let ((cache (make-hash-table :test #'equalp)))
    (lambda (&rest args)
      (or (gethash args cache)
          (setf (gethash args cache)
                (apply f args))))))

**Memo** works by returning a function that has an internal hash-table. When that function is called, it first checks its hash-table to see if it has been called with the same arguments before. If so, it returns the value it had calculated the first time it was called.^{5} If it hasn’t been called with the same arguments before, the function will instead call the function that was passed in to **memo**, and then store the result of that inside the table. This way, if the memoized function is called with the same arguments a second time, it can just look up the result in the table.
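To make the abstraction argument from earlier concrete, here is a sketch of how a variant of **memo** with the bounded cache mentioned above could look. **Memo-bounded** is not part of the post’s code, and its eviction policy is deliberately crude (it simply clears the whole table once it fills up); the point is that the caching strategy changes in one place, without touching any of the memoized functions:

```lisp
;; A hypothetical MEMO variant that caps the number of cached results.
(defun memo-bounded (f &key (limit 100))
  (let ((cache (make-hash-table :test #'equalp)))
    (lambda (&rest args)
      (multiple-value-bind (value present-p) (gethash args cache)
        (if present-p
            value
            (progn
              ;; Crude eviction: wipe the table when it is full.
              (when (>= (hash-table-count cache) limit)
                (clrhash cache))
              (setf (gethash args cache) (apply f args))))))))

(defparameter *square* (memo-bounded (lambda (x) (* x x)) :limit 3))
(funcall *square* 5) ; => 25
```

As a side benefit, using the second value of **gethash** here means a cached nil result counts as a hit, an issue that comes up again later with **or=**.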

Next, for **defmemo** itself, we need to generate code that takes the body as a lambda expression, passes that lambda function through **memo**, and uses that as the implementation of the function. One way to set the implementation of a function to be a lambda function is to use **setf** with **symbol-function**.^{6} For example, here is how you could set the implementation of **square** to be a lambda function that squares its argument:

(setf (symbol-function 'square)
      (lambda (x) (* x x)))

(square 5)
=> 25

Based on the paragraph above, here is the code for **defmemo**:

(defmacro defmemo (name args &body body)
  `(setf (symbol-function ',name)
         (memo (lambda ,args ,@body))))

Now instead of defining a function with **defun**, we can define it with **defmemo** and it will automatically be memoized! **Defmemo** is a great example of how you can define your own ways to define functions. Many libraries provide similar features in which you use the same syntax as **defun**, only with a bit of magic thrown in.

There are many cases where caching the results of a function (also called memoization) makes the function much more efficient. For example, a function that calculates the Fibonacci numbers:

(defun fib (n)
  (if (<= 0 n 1)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))

If you try running **fib** on different values, you will notice that around 35 or so, it starts to take quite a long time to run. The problem is that **fib** calculates the smaller Fibonacci numbers many more times than it needs to. When calculating the 35th Fibonacci number, the second Fibonacci number is calculated a total of 5702887 times.^{1}

This is where memoization comes in. If the above function were memoized, it would only need to calculate each Fibonacci number once. Then, whenever **fib** is asked to calculate a number it has already calculated, it can just look up the result in the table. Here is what the above code would look like if it were to take advantage of memoization:

(let ((table (make-hash-table)))
  (defun fib (n)
    (or (gethash n table)
        (setf (gethash n table)
              (if (<= 0 n 1)
                  n
                  (+ (fib (- n 1)) (fib (- n 2))))))))

With the memoized version, you will hit a stack overflow before you find a value that takes more than a moment to calculate. The problem with the above implementation is that it has some duplicate code. There are two calls made to **gethash**. The first call checks to see if the value has already been calculated. If not, **fib** calculates the value manually, and then uses the second call to store it into the table. The fact that the **gethash** call is repeated may not seem like a problem, but when the expression for the place is more complicated, it can become a much bigger deal.

**Or=** is a macro that fixes this problem. It does so by first checking whether its first argument, which should be a place, has a non-nil value.^{2} If it does, **or=** will just return that value. Otherwise it evaluates its remaining arguments until one of them evaluates to a non-nil value. **Or=** will then write the value of that expression into the place designated by the first argument. Here is the above code rewritten to use **or=**:

(let ((table (make-hash-table)))
  (defun fib (n)
    (or= (gethash n table)
         (if (<= 0 n 1)
             n
             (+ (fib (- n 1)) (fib (- n 2)))))))

The implementation of **or=** looks very similar to the **incf** ‘template’^{3} that is used when writing a macro that works with places. Here is the implementation of **or=**:

(defmacro or= (place &rest args)
  (multiple-value-bind (temps exprs stores store-expr access-expr)
      (get-setf-expansion place)
    `(let* (,@(mapcar #'list temps exprs)
            (,(car stores) (or ,access-expr ,@args)))
       ,store-expr)))

This time, the value being stored in the place is the **or** of the place and whatever other arguments were passed in. Since **or** evaluates its arguments lazily, we get the desired behavior of **or=**: evaluate the expression (and store the result) only if the place doesn’t already have a value. One problem with **or=** is that it determines whether a value has already been stored in the place by testing if the value is non-nil. This can lead to a problem if the value stored in the place is actually nil! As an exercise, try writing a version of **or=** that takes advantage of the multiple values returned by **gethash** in order to properly handle nil.
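For comparison, here is one possible shape such a solution could take. Unlike **or=**, this version is specific to hash tables rather than working on any place, and the name **or-gethash** is made up for this sketch (the Alexandria library provides a similar utility under the name **ensure-gethash**). It uses the second value of **gethash**, which says whether the key was present at all, so a cached nil counts as a hit:

```lisp
;; A GETHASH-specific OR= that correctly handles cached NILs.
(defmacro or-gethash (key table &body body)
  (let ((gkey (gensym)) (gtable (gensym))
        (gvalue (gensym)) (gpresent (gensym)))
    `(let ((,gkey ,key)
           (,gtable ,table))
       (multiple-value-bind (,gvalue ,gpresent)
           (gethash ,gkey ,gtable)
         (if ,gpresent
             ,gvalue
             (setf (gethash ,gkey ,gtable) (progn ,@body)))))))

;; A cached NIL short-circuits the body instead of re-running it:
(let ((table (make-hash-table)))
  (setf (gethash :a table) nil)
  (or-gethash :a table (error "not reached"))) ; => NIL, body never runs
```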

In my next post, I am going to continue with the memoization example and demonstrate how to write a macro **defmemo**, which makes it easy to define memoized functions.

Many languages provide syntactic sugar for evaluating an expression involving a variable and assigning the result of that expression to the variable at the same time. In these languages you can do something such as the following:

x += 5

The above expression both adds five to the value of *x* and writes that new value back to *x*. In this post, I’m going to show you how you can write a macro **zap** that is a generalized version of this technique. With **zap** the above example would look like the following:

(zap #'+ x 5)

There are a couple things that make **zap** really cool. First of all, it can be used with any function. For example, if you wanted to cube the value in *x*, you could use the following:

(zap #'expt x 3)

The other thing that makes **zap** so awesome is that it can be used on any place. If you want to use **zap** on the value stored in a hash table with key 5, you can do that:

(zap #'+ (gethash 5 table) 5)

Now that you’ve seen how **zap** is used, here is how it can be implemented:

(defmacro zap (fn place &rest args)
  (multiple-value-bind (temps exprs stores store-expr access-expr)
      (get-setf-expansion place)
    `(let* (,@(mapcar #'list temps exprs)
            (,(car stores) (funcall ,fn ,access-expr ,@args)))
       ,store-expr)))

You should be able to see that the code for **zap** is eerily similar to that of **incf** (from Getting Places). They are exactly the same, except that instead of binding the gensym that will hold the new value to one plus the value already in the place:

(,(car stores) (+ 1 ,access-expr))

The gensym is bound to the result of calling the function with the value in the place and all of the other arguments passed to **zap**:

(,(car stores) (funcall ,fn ,access-expr ,@args))

Although **zap** is just a nice syntactic shortcut, it is a great example of the crazy things you can do with places.

The post Getting Places appeared first on Macrology.

]]>Places are an incredible part of Common Lisp. In short, a “place” is any location that can hold a value. The obvious example of a place is a variable. Less obvious examples include the elements of an array, or the slots of an object. What makes the concept of places special is that Common Lisp provides a standard interface for reading and writing to them. You can write macros on top of this interface that work for every kind of place. As an example, look at the macro **incf**. It takes a place as an argument, adds one to its value, and stores the new value back into the place. If you want to increment a variable *x*, you would use:

(incf x)

And if you wanted to increment the element at index *x* of a sequence, you would use:

(incf (elt seq x))

They use the exact same syntax even though a variable is very different from an element of a sequence. Because it takes advantage of the interface for places, **incf** will work on any place, be it a variable, the slot of an object, or a user defined place.

So at this point you are probably wondering how **incf** works and, more generally, how you can write macros that use places. To write such a macro, you need to use the function **get-setf-expansion**.^{1} **Get-setf-expansion** takes an expression representing a place and returns a total of five values (if you are unfamiliar with multiple values, see my post on multiple-value-bind). Altogether, these five values tell you everything you need to know about the place in order to read and write to it.

To show you how you are supposed to use **get-setf-expansion**, I’m first going to demonstrate how you could use it to write the expansion of **incf** by hand. After that, I will show code that will automate this, which winds up being an implementation of **incf**. Let’s start by writing the expansion of the example above, the one where the element of a sequence is being incremented. To write the expansion of that by hand, you would first call **get-setf-expansion** to obtain all of the information:^{2}

(get-setf-expansion '(elt seq x))

In SBCL this call will return the following values:

;; (1) temps
(#:seq1017 #:x1016)

;; (2) exprs
(seq x)

;; (3) stores
(#:new1015)

;; (4) store-expr
(sb-kernel:%setelt #:seq1017 #:x1016 #:new1015)

;; (5) access-expr
(elt #:seq1017 #:x1016)

From now on, I will refer to each value returned by **get-setf-expansion** by the name in the comment before it (e.g. *temps* refers to the first value).

In order to uniquely identify the element of a sequence (the place we are working with in this example), you need two things. You need the sequence itself and the index into the sequence. That is exactly what the two expressions in *exprs* evaluate to! Since **incf** needs to use these values multiple times, the two values have to be bound to gensyms in order to prevent multiple evaluation (see my post on once-only for why multiple evaluation is a problem). You are supposed to bind the values of the expressions to the gensyms in *temps* so that the other expressions returned by **get-setf-expansion** can use those gensyms to easily determine the place being worked with. The bindings need to be made with **let*** because it is possible for an expression in *exprs* to refer to the value of a previous expression in *exprs*. So the first part of the expansion will bind all of the symbols in *temps* to values of the expressions in *exprs* with **let***:

(let* ((#:seq1017 seq) (#:x1016 x)) ...)

Now the gensyms in *temps* can be used to uniquely identify the place. As I mentioned previously, the other expressions can now easily determine the place through the gensyms. For example, *access-expr* can be used to retrieve the value currently in the place. Since the place we are dealing with is the element of a sequence, *access-expr* is just a call to **elt** using the gensyms in *temps* as the arguments. We are going to use *access-expr* in a moment, but first I have to talk about how to write to the place.

In order to write to the place, you need to use *stores* and *store-expr*. *Stores* is a list of gensyms that need to be bound to the values that are to be stored in the place (it is possible for a single place to hold multiple values). In this case we want to bind the gensym in *stores* to one plus the value already in the place. We can easily obtain the value in the place through *access-expr*. Once the gensyms have been bound, you can use *store-expr* to actually write the values in *stores* to the place. Notice how *store-expr* is a call to an internal SBCL function **sb-kernel:%setelt** that uses the gensyms in *temps* and *stores* as arguments. Presumably **sb-kernel:%setelt** sets the element of a sequence. After adding the binding for the gensym in *stores* and *store-expr*, we wind up with the final expansion, which looks like:^{3}

(let* ((#:seq1017 seq)
       (#:x1016 x)
       (#:new1015 (+ 1 (elt #:seq1017 #:x1016))))
  (sb-kernel:%setelt #:seq1017 #:x1016 #:new1015))

To review, the above code first binds the gensyms in *temps* to the values of the expressions in *exprs*. This allows *access-expr* and *store-expr* to use the gensyms in *temps* in order to determine the place being worked with. Then the code uses *access-expr* to retrieve the value, adds one to that, and binds that value to the gensym in *stores*. This is because the value of the gensym in *stores* is ultimately going to be the one written to the place. Finally the code evaluates *store-expr* in order to actually store the value in the gensym into the place.

Now here is one possible implementation of **incf**,^{4} which is code for everything we just did by hand. I called it **incf%** so that it doesn’t have the same name as the builtin version.

(defmacro incf% (place)
  (multiple-value-bind (temps exprs stores store-expr access-expr)
      (get-setf-expansion place)
    `(let* (,@(mapcar #'list temps exprs)
            (,(car stores) (+ 1 ,access-expr)))
       ,store-expr)))

The above code first binds the five values returned by **get-setf-expansion** to variables. It then generates a **let*** binding which binds the symbols in *temps* to the expressions in *exprs* and also binds the gensym in *stores* to one plus the result of evaluating *access-expr*. Finally the above code splices in *store-expr* to actually write the value. And that is everything there is to **incf**.
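To see the payoff of going through **get-setf-expansion**, here is a quick demonstration of **incf%** on a hash-table place rather than a variable. The macro definition is repeated so the snippet stands alone, and the variable name ***counts*** is made up for this example; the call relies on **gethash**’s optional default argument so that missing keys start at 0:

```lisp
;; INCF% as defined above, repeated so this snippet is self-contained.
(defmacro incf% (place)
  (multiple-value-bind (temps exprs stores store-expr access-expr)
      (get-setf-expansion place)
    `(let* (,@(mapcar #'list temps exprs)
            (,(car stores) (+ 1 ,access-expr)))
       ,store-expr)))

;; Counting occurrences in a hash table with the same macro that
;; increments plain variables.
(defparameter *counts* (make-hash-table))

(incf% (gethash 'a *counts* 0))
(incf% (gethash 'a *counts* 0))

(gethash 'a *counts*) ; => 2
```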

**Incf** is but a single example of what can be done with places. In the next couple of posts, I plan to cover some really cool macros that encapsulate a bunch of common patterns related to places.

The post Getting Places appeared first on Macrology.

]]>