As anyone who has looked into functional reactive programming (FRP) knows, there are lots of competing approaches to it, and not a lot of conceptual clarity about how they relate to each other. In this post, I’ll try to shed some light, and in particular give you some guideposts for understanding how the different versions of FRP fit together. Plus, I’ll show some connections to a similar technique called self-adjusting computation (SAC).
The analysis here should mostly be credited to Evan Czaplicki, who gave a talk at Jane Street a week ago. Any confusions and mistakes are, of course, my own. Also, thanks to Jake McArthur for filling me in on a few key details.
In all of this I’m basically going to talk only about discrete FRP. A lot of the people involved in FRP think of continuous-time semantics as really important, but that’s not the focus here. (I am curious, however, if anyone can explain why people think continuous-time semantics are important when writing programs for a largely discrete computer.)
First, some basics. Roughly speaking, FRP systems are meant to make it easier to write programs that react to external events by providing a natural way to program with and reason about time-varying signals.
Now, time to oversimplify.
An FRP program effectively ties signals together in a dependency graph, where each signal is either an external input or a derived signal that feeds off of other signals that have already been defined. A key aspect of a system like this is that the language runtime uses the dependency graph to minimize the amount of work that needs to be done in response to an external event.
Here are some properties that you might want from your FRP system:
- History-sensitivity, or the ability to construct calculations that react not just to the current state of the world, but also to what has happened in the past.
- Efficiency. This comes in two forms: space efficiency, mostly meaning that you want to minimize the amount of your past that you need to remember; and computational efficiency, meaning that you want to minimize the amount of computation that must be rerun when inputs change.
- Dynamism, or the ability to reconfigure the computation over time, as the inputs to your system change.
- Ease of reasoning. You’d like the resulting system to have a clean semantics that’s easy to reason about.
It turns out you can’t have all of these at the same time, and you can roughly categorize different approaches to FRP by which subset they aim to get. Let’s walk through them one by one.
Pure Monadic FRP
This approach gives you dynamism, history-sensitivity and ease of reasoning, but has unacceptable space efficiency.
As you might expect, the signal combinators in pure monadic FRP can be described as a monad. That means we have access to the usual monadic operators, the simplest of which is return, which creates a constant signal.
val return : 'a -> 'a signal
You also have map, which lets you transform a signal by applying a function to it at every point in time.
val map : 'a signal -> ('a -> 'b) -> 'b signal
Operators like map2 let you take multiple signals and combine them together, again by applying a function to the input signals at every point in time to produce the output signal.
val map2 : 'a signal -> 'b signal -> ('a -> 'b -> 'c) -> 'c signal
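To make the static fragment concrete, here’s a toy derived signal, assuming width and height are hypothetical pre-existing int signals:

(* area is recomputed whenever width or height changes *)
let area : int signal = map2 width height (fun w h -> w * h)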
Note that all of the above essentially correspond to building up a static set of dependencies between signals. To finish our monadic interface and to add dynamism, we need one more operator, called join.
val join : 'a signal signal -> 'a signal
Nested signals are tricky. In a nested signal, you can think of the outer signal as choosing between different inner signals. When these are collapsed with join, you essentially get a signal that can change its definition, and therefore its dependencies, in response to changing inputs.
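For instance, here’s a sketch of the kind of reconfiguration join enables, assuming hypothetical signals use_fahrenheit, fahrenheit, and celsius:

(* the outer signal picks which inner signal we depend on;
   join flattens that choice into an ordinary signal *)
let displayed_temp : float signal =
  join (map use_fahrenheit (fun fahr ->
    if fahr then fahrenheit else celsius))

When use_fahrenheit changes, the dependency graph itself is rewired.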
We’re still missing history sensitivity. All of the operators thus far work on contemporaneous values. We don’t have any operators that let us use information from the past. foldp is an operator that does just that, by folding forward from the past.
val foldp : 'a signal -> init:'acc -> f:('acc -> 'a -> 'acc) -> 'acc signal
With foldp, history is at our disposal. For example, we can write a function that takes a signal containing an x/y position, and returns the largest distance that position has ever been from the origin. Here’s the code.
let max_dist_to_origin (pos : (float * float) signal) : float signal =
  foldp pos ~init:0. ~f:(fun max_so_far (x,y) ->
    Float.(max max_so_far (sqrt (x * x + y * y))))
Here, max_so_far acts as a kind of state variable that efficiently summarizes the necessary information about the past.
foldp seems like it should be implementable efficiently, but we run into trouble when we try to combine history sensitivity and dynamism. In particular, consider what happens when you compute max_dist_to_origin on the signal representing the position of the mouse, and what happens if we only decide to run this computation at some point in the middle of the execution of our program. We then have two choices: either (max_dist_to_origin mouse_pos) always has the same meaning, or its meaning depends on when it was called.
In pure monadic FRP, we make the choice to always give such an expression the same meaning, and thus preserve equational reasoning. We also end up with something that’s impossible to implement efficiently. In particular, this choice forces us to remember every value generated by every input forever.
There are various ways out of this performance trap. In the following, I’ll describe the different escape paths chosen by different styles of FRP systems.
Pure Applicative FRP
The idea behind applicative FRP is simple enough: just drop the join operator, thus giving up dynamism. This means that you end up constructing static dependency graphs. Without the ability to reconfigure, you don’t run into the question of what happens when an expression like (max_dist_to_origin mouse_pos) is evaluated at multiple points in time.

This is the approach that Elm takes, and it seems to be the primary approach taken by practical systems concerned with describing UI interactions.
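In interface terms, the applicative fragment is just the operators above minus join:

val return : 'a -> 'a signal
val map : 'a signal -> ('a -> 'b) -> 'b signal
val map2 : 'a signal -> 'b signal -> ('a -> 'b -> 'c) -> 'c signal
val foldp : 'a signal -> init:'acc -> f:('acc -> 'a -> 'acc) -> 'acc signal
(* no join: once built, the dependency graph never changes shape *)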
There’s a variant on pure applicative FRP called, confusingly, Arrowized FRP. Effectively, Arrowized FRP lets you create a finite collection of static graphs that you can switch between. If those static graphs contain history-dependent computations, then all the graphs have to be kept running at all times, which means that, while it can be more efficient than applicative FRP, it’s not materially more expressive.
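One way to picture the extra flexibility is a switching operator along these lines (a hypothetical signature, not any particular library’s):

(* expose one of two already-running graphs, chosen at each point in time *)
val switch_between : bool signal -> 'a signal -> 'a signal -> 'a signal

If either branch is history-dependent, both have to keep running behind the scenes, which is where the efficiency limit comes from.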
Impure Monadic FRP
Impure monadic FRP basically gives up on equational reasoning. In other words, the meaning of (max_dist_to_origin mouse_pos) depends on when you call it. Essentially, evaluating an expression that computes a history-sensitive signal should be thought of as an effect, in that it returns different results depending on when you evaluate it.
Losing equational reasoning is not necessarily the end of the world, but my experience programming in this style makes me think that it really is problematic. In particular, reasoning about when a computation was called in a dynamic dependency graph is quite tricky and non-local, which can lead to programs whose semantics is difficult to predict.
Self-Adjusting Computations
Self-adjusting computations are what you get when you give up on history-sensitivity. In particular, SAC has no foldp operator. The full set of monadic operators, however, including join, is in place, which means that you do have dynamism. This dynamism turns out to be quite valuable. Among other things, it allows you to build a highly configurable computation that can respond to reconfiguration efficiently.
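As a sketch of what that reconfigurability buys you, here’s a computation, written in the signal vocabulary from above, whose set of inputs is itself determined by a signal:

(* the set of inputs being aggregated is time-varying; when selected
   changes, the graph is rewired rather than rebuilt from scratch *)
let total (selected : float signal list signal) : float signal =
  join (map selected (fun signals ->
    List.fold_left
      (fun acc s -> map2 acc s (fun a x -> a +. x))
      (return 0.) signals))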
As you might expect, the lack of history-sensitivity makes SAC less suitable for writing user interfaces. Indeed, SAC was never intended for such applications; its original purpose was for building efficient on-line algorithms, i.e., algorithms that could be updated efficiently when the problem changes in a small way.
SAC is also easy to reason about, in that all an SAC computation is doing is incrementalizing an otherwise ordinary functional program. You get full equational reasoning as long as you avoid effects within your SAC computation.
History sensitivity for SAC
At Jane Street, we have our own SAC library called Incremental, which is used for a variety of different applications. In practice, however, a lot of our SAC applications do require some amount of history-sensitivity. The simplest and easiest approach to dealing with history within SAC is to create inputs that keep track of whatever history is important to your application. Then, the SAC computation can use that input without any complications.
Thus, if there’s an input whose minimum and maximum value you want to be able to depend on in your calculation, you simply set up calculations outside of the system that create new inputs injecting that historical information into your computation.
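Here’s a minimal sketch of that pattern, assuming a hypothetical Input module for external inputs and an on_event hook that fires on raw events arriving from outside the SAC system:

(* the running maximum is maintained outside the SAC computation... *)
let max_input : float Input.t = Input.create Float.neg_infinity
let () =
  on_event (fun x ->
    Input.set max_input (Float.max x (Input.get max_input)))

(* ...and inside it, the history is just another ordinary signal *)
let max_signal : float signal = Input.read max_input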
You can keep track of history in a more ad-hoc way by doing side-effects within the functions that are used to compute a given node. Thus, we could write a node that computes the maximum value of a given input using a reference, as follows.
let max_of_signal i =
  (* stash the running maximum in a ref that the mapped function mutates *)
  let max = ref None in
  map i (fun x ->
    match !max with
    | None -> max := Some x; x
    | Some y ->
      let new_max = Float.max x y in
      max := Some new_max;
      new_max)
But this kind of trick is dangerous, particularly because of the optimizations that are implemented in Incremental and other SAC implementations. In particular, Incremental tries to avoid computing nodes whose values are not presently in use, and as such, signals that are deemed unnecessary are not kept up-to-date. Thus, if you create a signal by calling (max_of_signal s), and then keep it around but don’t hook it into your final output, the computation will stop running and will thus stop receiving updates. Then, if you pull it back into your computation, it will have a value that reflects only part of the true history.
There are some tricks for dealing with this in Incremental. In particular, we have an operator called necessary_if_alive, which forces the node in question to remain alive even if it’s not necessary at the moment. That helps, but there are still complicated corner cases. Our preferred approach to dealing with such cases is to statically declare the set of history-sensitive signals, and make sure that those are alive and necessary at all times.
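As a sketch of that static style, using max_of_signal from above plus a hypothetical min_of_signal counterpart, where price is an assumed input signal:

(* declare every history-sensitive node up front and pin it, so it keeps
   accumulating history even while no output happens to need it *)
let max_price = necessary_if_alive (max_of_signal price)
let min_price = necessary_if_alive (min_of_signal price)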
Broader lessons
This is, I think, a theme in FRP systems: history is made tractable by limiting dynamism. From the work I’ve done with SAC systems, I think the usual approach in the FRP world is backwards: rather than start with a static system that supports history and extend it with increased dynamism, I suspect it’s better to start with a highly dynamic system like SAC and carefully extend it with ways of statically adding history-sensitive computations. That said, it’s hard to be confident about any of this, since this is a rather subtle area where no one has all the answers.