10 September 2017

ICFP 2017

I already blogged last year on ICFP, the awesome conference on Functional Programming, to share a neat trick.

This year I would like to do a small brain dump on some of the sessions which I really liked.

Workshop on higher-order programming with effects

Programming a web server with effects

This is one of several talks showing the benefits of programming in a language which natively supports the description of effects in the type system: Koka. Not only is Daan Leijen a great presenter whose enthusiasm is contagious, but with Koka it feels entirely natural to write asynchronous code with interleaving, cancellation, resource clean-up and so on.

The talk also hints at something which got me really interested: the yield operator used to create iterators can be seen as an effect. Coupled with the async support above, this makes me think that a whole streaming library could be created as a set of effects!

Workshop on type-driven development

Type-directed diffing of structured data

A fantastic idea: can we do better than line-based diffing of code and compute AST diffs instead? The presenter introduced a language for representing AST patches (not as easy as it seems), then different possibilities for diffing algorithms (clearly a hard problem if you want to find the minimum diff).


Super 8 languages for making movies

I’m not a big fan of dynamic languages but the power of Racket is impressive. The presenter showed how she created a “Tower of DSLs” to build a video-editing tool, complete with parser, GUI, documentation and integration with video libraries, in far fewer lines of code than anything else I know.

Faster coroutine pipelines

Coroutine pipelines are the functional way to create streaming libraries. However they can be made faster. How? By using an encoding based on continuations rather than the “direct” encoding. This trick seems to be pervasive. For example, for algebraic effects, scala-effekt uses such an encoding and gets much better performance than eff.

A unified approach for solving seven programming problems

Racket is at it again, but with a secret weapon: miniKanren. If you view a computation as a “logical” relationship between a program and some outputs, it is very reasonable to ask your solver to find the programs which give you the right outputs. And to find, for example, quines: programs which output themselves!

Generic functional parallel algorithms: parallel scan

Scans, computing the prefix sums of a list of ints, are an important ingredient in some algorithms, like regular expression search. If they can be performed in parallel, even better! This talk shows that a scan algorithm can be defined for generic data structures, defined just by sums, products and functor composition. Then, by adjusting the decomposition, you get different trade-offs between parallelism and the work needed to compose results.
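As a rough illustration of the combine step (all code here is my own Scala sketch, not the talk’s Haskell, and `scanChunks` is a name I made up): a scan can be computed over chunks independently, then recomposed by offsetting each chunk’s scan with the totals of the chunks before it.

```scala
// Sequential prefix sums: scanLeft over a list.
val xs = List(1, 2, 3, 4, 5, 6)
val seqScan = xs.scanLeft(0)(_ + _).tail

// Divide-and-conquer sketch: scan each chunk independently (the parallelisable
// part), then shift every chunk by the total of the chunks before it.
def scanChunks(chunks: List[List[Int]]): List[Int] = {
  val scanned = chunks.map(c => c.scanLeft(0)(_ + _).tail)                  // independent work
  val offsets = scanned.map(_.lastOption.getOrElse(0)).scanLeft(0)(_ + _)   // combine step
  scanned.zip(offsets).flatMap { case (c, off) => c.map(_ + off) }
}

assert(scanChunks(List(List(1, 2, 3), List(4, 5, 6))) == seqScan)
```

Changing how the input is chunked changes how much work is independent versus spent combining, which is exactly the trade-off the talk explores generically.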

Effect-driven quickchecking of compilers

A very entertaining presentation showing that it is entirely possible to generate well-typed terms to test a compiler.

Compiling to categories

My favourite talk this year. What if you consider lambda and apply in the lambda calculus as 2 operations which could be interpreted differently? Conal Elliott developed a GHC plugin to transform Haskell expressions into morphisms in a category, so that function composition becomes composition of morphisms. What does that give us? Instead of applying functions and getting a result, we can generate circuits computing them, differentiate them, or do interval computations where the results are intervals instead of plain values.
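A hand-written Scala analogue of the idea (the plugin performs this transformation automatically on GHC Core; the `Cat` trait and both instances below are my own illustrative names): one program written against an abstract category, given two different meanings.

```scala
// A minimal category interface: identity and composition of morphisms.
trait Cat[Arr[_, _]] {
  def id[A]: Arr[A, A]
  def compose[A, B, C](g: Arr[B, C], f: Arr[A, B]): Arr[A, C]
}

// Interpretation 1: ordinary Scala functions.
implicit val funCat: Cat[Function1] = new Cat[Function1] {
  def id[A]: A => A = a => a
  def compose[A, B, C](g: B => C, f: A => B): A => C = g.compose(f)
}

// Interpretation 2: a "circuit description" that only records the wiring.
case class Circuit[A, B](doc: String)
implicit val circuitCat: Cat[Circuit] = new Cat[Circuit] {
  def id[A] = Circuit[A, A]("wire")
  def compose[A, B, C](g: Circuit[B, C], f: Circuit[A, B]) =
    Circuit[A, C](s"${f.doc} >>> ${g.doc}")
}

// One program, written against the abstract category...
def incThenDouble[Arr[_, _]](inc: Arr[Int, Int], double: Arr[Int, Int])(
    implicit C: Cat[Arr]): Arr[Int, Int] =
  C.compose(double, inc)

// ...interpreted either as a runnable function or as a circuit diagram.
val asFunction = incThenDouble[Function1](_ + 1, _ * 2)
val asCircuit  = incThenDouble(Circuit[Int, Int]("inc"), Circuit[Int, Int]("double"))
```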

Staged Generic Programming

Generic programming is cool. Write a few lines of code and let the compiler figure out the right data structure and/or program. Unfortunately the performance can be very bad compared to hand-written code. With staging we can let the compiler be smarter and generate “unrolled” code which performs as well as the hand-written version.

I was also glad to get to talk to Jeremy Yallop, the presenter, who is a very nice guy, now working at Docker on unikernels, something to keep an eye on!

Whip: higher-order contracts for modern services

This looks more like a CUFP talk, with real applications in industry :-). The idea is to install some “decorators” on webservices to check where contracts are being broken. They have a nice website and I am tempted to try this at work.

Haskell Symposium

Algebraic graphs with Class

A simple but very powerful way to describe graphs “algebraically”. 2 atoms, the empty graph and the singleton graph, and 2 operations: overlay (a sort of addition) and connect (a sort of multiplication). With those you can represent any graph and encode any algorithm against that representation, which can then be optimised for different purposes.
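A minimal Scala transliteration of the representation (my own sketch of the Haskell API): the four constructors, plus one algorithm written against them. `Connect` draws an edge from every vertex on its left to every vertex on its right.

```scala
// The four constructors of algebraic graphs.
sealed trait Graph[+A]
case object Empty extends Graph[Nothing]
case class Vertex[A](a: A) extends Graph[A]
case class Overlay[A](g1: Graph[A], g2: Graph[A]) extends Graph[A] // "addition"
case class Connect[A](g1: Graph[A], g2: Graph[A]) extends Graph[A] // "multiplication"

def vertices[A](g: Graph[A]): Set[A] = g match {
  case Empty         => Set.empty
  case Vertex(a)     => Set(a)
  case Overlay(l, r) => vertices(l) ++ vertices(r)
  case Connect(l, r) => vertices(l) ++ vertices(r)
}

// Connect adds an edge from each left vertex to each right vertex.
def edges[A](g: Graph[A]): Set[(A, A)] = g match {
  case Empty         => Set.empty
  case Vertex(_)     => Set.empty
  case Overlay(l, r) => edges(l) ++ edges(r)
  case Connect(l, r) =>
    edges(l) ++ edges(r) ++ (for (x <- vertices(l); y <- vertices(r)) yield (x, y))
}

// The path 1 -> 2 -> 3, built from the two primitives:
val path = Overlay(Connect(Vertex(1), Vertex(2)), Connect(Vertex(2), Vertex(3)))
```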

Quickspec / Speculate

Those 2 tools will generate the equations that your API is supposed to satisfy. For example, given [], ++ and reverse they should be able to generate reverse ([] ++ xs) == reverse xs. But there’s more: they can now generate implications and inequalities!
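To be clear about the direction: Quickspec/Speculate *discover* such laws; checking a discovered law on random inputs is then ordinary property testing. A hand-rolled illustration in Scala (this is not what the tools do internally):

```scala
// The law discovered by the tool, as a property over arbitrary lists.
val candidateLaw: List[Int] => Boolean =
  xs => (Nil ++ xs).reverse == xs.reverse

// Check it against 100 random lists (a tiny stand-in for QuickCheck).
val rng = new scala.util.Random(42)
val holds = (1 to 100).forall { _ =>
  val xs = List.fill(rng.nextInt(10))(rng.nextInt(100))
  candidateLaw(xs)
}
```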

Composable networks and remote monads

The remote monad explores different ways of “grouping” operations when addressing remote services. After trying to do something similar for eff I appreciate the subject a lot more :-).

Testing with crowbar

An impressive example of finding bugs in a Unicode library in 10 minutes which QuickCheck could not find in a week. The secret? Fuzzing and code coverage. Generate random sequences, turn them into test cases, and mutate the sequences until you reach some “interesting” part of the code. Those parts are likely to contain bugs.

ML Family workshop

State machines all the way down

My periodic reminder that I need to find some time to use Idris. Even more so after Edwin Brady said that state machine programming, with the type system tracking before/after states, subsumes the effects library! He also told me that he was implementing the Idris compiler in Idris and that this was a good example for him of “when not to go too far with types”.

Effects without monads: non determinism

Oleg Kiselyov is back, implementing non-determinism with TTFI (typed tagless final interpreters). This example is particularly interesting because he shows that monads are not necessary to interpret this mini-language. Not only that, but they cannot even be used if your interpretation generates code, because you cannot write code which will itself write code!
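The tagless-final style makes the point concrete: the same program can be given a “run it” interpreter and a “generate code” interpreter. A Scala sketch (my own names and mini-language, not Oleg’s OCaml):

```scala
// A tagless-final mini-language with non-deterministic choice.
trait NonDet[Repr[_]] {
  def lit(n: Int): Repr[Int]
  def add(x: Repr[Int], y: Repr[Int]): Repr[Int]
  def amb(x: Repr[Int], y: Repr[Int]): Repr[Int] // non-deterministic choice
}

// One program, abstract over its interpretation.
def prog[Repr[_]](implicit L: NonDet[Repr]): Repr[Int] =
  L.add(L.amb(L.lit(1), L.lit(2)), L.lit(10))

// Interpreter 1: actually run the non-determinism, collecting all outcomes.
type AllResults[A] = List[A]
implicit val runAll: NonDet[AllResults] = new NonDet[AllResults] {
  def lit(n: Int) = List(n)
  def add(x: List[Int], y: List[Int]) = for (a <- x; b <- y) yield a + b
  def amb(x: List[Int], y: List[Int]) = x ++ y
}

// Interpreter 2: run nothing, generate (a string of) target code instead.
type Code[A] = String
implicit val genCode: NonDet[Code] = new NonDet[Code] {
  def lit(n: Int) = n.toString
  def add(x: String, y: String) = s"($x + $y)"
  def amb(x: String, y: String) = s"choose($x, $y)"
}
```

The second interpreter is the one a monadic encoding struggles with: it never has a runtime value to bind, only a representation of the computation.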

Bioinformatics: the typed tagless final way

A concrete example of people using TTFI in real life for large and extensible DSLs.


Bonsai: DSL for serverless decisioning

This DSL is pretty simple, it is just decision trees (if / then / else) but compiled for maximum throughput for the ad industry.

Gens N’ Roses: appetite for reduction

Jacob Stanley’s talk on Hedgehog, where shrinks come for free and failures are beautiful. A “must-use” for Haskell users.

Haskell implementors workshop

Syntactic musings

This was a lightning talk but it made me laugh. These are proposals to improve Haskell syntax with almost zero chance of being adopted :-):

How to write bracket without parentheses:

Instead of

  (acquire some resources)
  (do your stuff which might be
   pretty long)
  (and clean up)

you write (yes, bullet points!)

  . acquire some resources
  . do your stuff which might be
     pretty long
  . and clean up

Then, setting a value for a bunch of functions:

context fixed (value) where
  add a = value + a
  mul a = value * a

to avoid repeating value and to avoid accidentally messing with it. This screams “module/functor envy” to me.

The last one is crazy but brilliant: do away with grouping parentheses by removing whitespace where you want to indicate higher precedence

a+b * c+d

Why not :-)?

An experiment in fragment-based code distribution

I thought this idea could really not work but now I am intrigued. Following Joe Armstrong’s “Why do we need modules at all?”, Philipp Schuster implemented a build system for Haskell where each program is decomposed into “slices” containing just one function and its dependencies, everything identified by numbers. Advantages: compile and package only what you need, never care about version numbers. But the job of curating all of that to get a coherent snapshot, oh boy! As a data point, running anything on Scotty requires 12,000 slices.


A very intense week, as in the previous editions I have been to. I don’t know yet what will stick with me and influence my future work, but there’s plenty to think about. I also very much enjoyed the discussions I had with Jonathan and Philipp, the authors of scala-effekt.

20 August 2017

Specs2 4.x

It has now been almost 3 years since the last major version of specs2. But why would a new major version of a testing library even be needed? Because the world doesn’t stop revolving!

In particular, Scala can now be compiled to JavaScript and even to native code. It is then only natural that specs2 users want to use specs2 in their Scala.js projects.

How hard can this be? First of all, specs2 3.x relies on scalaz-stream, which doesn’t have a Scala.js build, and more broadly on scalaz (which does have a Scala.js build).

This is where an issue becomes an opportunity! If I need to find an alternative to scalaz-stream, why not remove scalaz altogether? Indeed it is very inconvenient for a test library to rely on third-party libraries which might conflict with users’ libraries. Because of this conflict I have to publish several versions of specs2 for different versions of scalaz (7.0.x, 7.1.x, 7.2.x and so on). This is pretty time-consuming, not only because I have to wait for more than 30 minutes for all the jars to be published, but also because I have to adapt to small differences between each scalaz version.

Removing scalaz and scalaz-stream is easier said than done:

  1. specs2 relies heavily on functional programming so I need the usual FP suspects: Functor, Applicative, Monad and so on

  2. specs2 interacts with the file system, writes out to the console, executes specifications concurrently so it needs a way to control effects

  3. specs2 is structured around the idea of a “stream of specification fragments” so it needs a streaming abstraction

All of this before I can even consider moving to a Scala.js build!

specs2’s own FP / streaming infrastructure

Fortunately I had already developed, on the side, many of the pieces needed above:

  1. eff is a library to handle effects, including concurrency and resource management

  2. producer is a simple streaming library designed to work with any Monad, including the Eff monad

  3. origami implements a notion of composable “folds” which can be applied to any stream, a pattern I (re)discovered in earlier versions of specs2: reporting the results of an executed specification is just “folding” the specification, a bit like using foldLeft on a List
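The core of the composable-folds idea can be sketched in a few lines of Scala (a hypothetical, much simplified version of what origami offers; `Fold` and `zip` are my names): folds that carry their own initial state compose pairwise and traverse the input only once.

```scala
// A self-contained left fold: an initial value plus a step function.
case class Fold[A, B](init: B, step: (B, A) => B) {
  def run(as: List[A]): B = as.foldLeft(init)(step)
  // Run two folds in a single traversal of the input.
  def zip[C](other: Fold[A, C]): Fold[A, (B, C)] =
    Fold((init, other.init), { case ((b, c), a) => (step(b, a), other.step(c, a)) })
}

// Reporting an executed specification is "just folding" it:
sealed trait Result
case object Success extends Result
case object Failure extends Result

val countSuccesses = Fold[Result, Int](0, (n, r) => if (r == Success) n + 1 else n)
val countAll       = Fold[Result, Int](0, (n, _) => n + 1)

// A composite report, still a single pass over the results.
val report = countSuccesses.zip(countAll)
```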

A copy of those libraries is now included in the specs2-common module.

The only things missing were the FP abstractions. It turns out that it is not too hard to implement the whole menagerie:

  • Functor, Applicative, Monad, Traverse, Foldable, Monoid. Those typeclasses are closely tied together
  • Tree and TreeLoc for manipulating trees

I am greatly indebted to the scalaz and cats projects for this code, which I reduced to only the parts I needed for specs2. Those abstractions and datatypes are now available in the specs2-fp module (not intended for external use).

Execution model

At the heart of specs2 is a Fragment, which is a Description and an Execution yielding a Result. The Description holds an informal description of the specified system and the Execution runs some Scala code to verify that behaviour.

Using Scala.js imposes a big change on specs2: we can never await to get the result of executing a Fragment. Also, since scalaz.concurrent.Task is out of the equation, in specs2 4.x an Execution is implemented using a scala.concurrent.Future, like this:

case class Execution(
 run:       Option[Env => Future[() => Result]],
 executing: Option[Throwable Either Future[Result]])

This is a lot more complicated than just Future[Result]. Why is that?

run takes an action to execute, () => Result, possibly run concurrently, Future[() => Result], and which might depend on the environment: Env => Future[() => Result]. You might wonder why we don’t simply use Env => Future[Result]. This is to allow the user to add some behaviour around the result, for example with the AroundEach trait and its def around[R : AsResult](r: =>R): Result method.
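A stripped-down sketch of why the thunk matters (using String in place of Result, and a hypothetical around function; this is not the actual specs2 code): because the Future yields a () => thunk, user logic can wrap the evaluation itself rather than merely observe a finished value.

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical user-supplied "around" behaviour.
def around(r: => String): String = s"before / $r / after"

// The deferred execution, as in Future[() => Result].
val run: Future[() => String] = Future(() => "result")

// The wrapping happens around the deferred call; with a plain
// Future[String] the body would already have executed.
val wrapped: Future[() => String] = run.map(thunk => () => around(thunk()))
```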

We also want to “stream” results: execute them concurrently and display them as soon as they are available. This is done by having executing hold either a Throwable, if we could not even start the execution, or the Result currently being computed.

Note that we can never call await in a Scala.js program so when we are executing a Fragment the best we can get is the equivalent of Future[Result] (in reality an Action[Result] which is a type possibly having more effects than just concurrency).

This triggers major changes in the implementation of specs2, with ripple effects up to the Runners used to run a specification. However most users of specs2 should not be concerned by these changes, as the “user” DSL for specs2 isn’t modified. If you are on the JVM and really need to await the result of a fragment execution, you can do so by using one of the methods in the org.specs2.control.ExecuteActions object. For example result.runAction(executionEnv) will return an Error Either Result. Running the same method on Scala.js will throw an exception.

Scala.js modules

“specs2 runs on Scala.js” is always going to be a partial truth. There are many limitations in Scala.js itself, which means that, while specs2-core can be used in any Scala.js project, some specs2 features and modules cannot be used with Scala.js. Here is a list (to the best of my current knowledge):

  1. the isolate argument to run each example in its own copy of the specification cannot be used (because this uses reflection to instantiate classes)

  2. no custom Selector/Executor/Printer/Notifier can be passed from the command line (for the same reason)

  3. the RandomSequentialExecution trait cannot be used because the implementation uses a TrieMap, which is not available on Scala.js. This could be fixed in the future.

  4. running only the examples which previously failed (with was x on the command line) is not possible because this accesses the file system

  5. all the modules where scala-xml is used don’t have a Scala.js version because there’s no scala-xml for Scala.js: specs2-form, specs2-html

  6. modules accessing the file system cannot be used: specs2-html, specs2-markdown

  7. the specs2-gwt module defines fragments based on the results of previous fragments, so it currently needs to await. This could potentially be fixed with a smart enough implementation in the future.

Breaking changes

A major version number is also an opportunity to fix some API flaws. I tried to limit the changes in the low-level API to the strict minimum, but I also wanted to clean up one important aspect of the high-level API.

When specs2 3.x came out, I envisioned that there could be ways to create a specification from arguments passed on the command line, or from an ExecutionContext (to execute futures). With the addition of traits like CommandLineArguments you could declare:

class MySpec extends Specification with CommandLineArguments {
  def is(args: CommandLine) = s2"""
    try this $ok
    """
}

class MySpec extends Specification { def is = s2"""
  try this $ex1
  """

  def ex1 = { implicit ec: ExecutionContext => Future(1) must be_==(1).await }
}

I later realized that it would be a lot more convenient to directly “inject” the command line arguments or the execution context into the specification, like so:

class MySpec(args: CommandLine) extends Specification { def is = s2"""
  try this $ex1
  """

  def ex1 = { (1 + 1) must be_==(2).when(args.contains("universe-is-sane")) }
}

class MySpec(implicit ec: ExecutionContext) extends Specification { def is = s2"""
  try this $ex1
  """

  def ex1 = { Future(1) must be_==(1).await }
}

Now there were 2 ways to do the same thing, the second one being a lot easier than the first. In specs2 4.x the traits supporting the first behaviour, org.specs2.specification.Environment, org.specs2.specification.CommandLineArguments and org.specs2.specification.ExecutionEnvironment, have been removed. The same goes for the implicit conversions creating examples from functions like implicit ee: ExecutionEnv => Result.

This will require some migration effort from some users, but the result will be more concise specifications.

Another word on injected execution environments. Since specs2 3.8.9, 2 distinct execution environments have been used in specs2: one for the execution of the specification itself, the other for the execution of the user’s examples (to map futures or await results, for example). This makes sure that incorrect user code doesn’t prevent the full specification from being executed.

With specs2 4.x we can go one step further and allow each specification to get its own dedicated execution environment and make sure that one specification execution doesn’t impact another one:

import org.specs2.specification.core.{Env, OwnExecutionEnv}

class MySpec(val env: Env) extends Specification with OwnExecutionEnv { def is = s2"""
  try this $ok
  """
}

The injected env is used to pass command-line arguments to a copy of the user execution environment, implicitly available as a member of the OwnExecutionEnv trait. That trait also takes care of shutting down the execution environment once the specification has finished executing, so as not to leak resources.

Feedback is essential

In conclusion, specs2 4.x should be a smaller jump from specs2 3.x than 3.x was from 2.x. And with a lot more to offer!

  1. no more dependencies for specs2-core

  2. a Scala.js version for specs2-core

  3. better control on execution environments

You can find all of this in version 4.0.0-RC1 (4.0.0-RC4 is there already!).

Please, please, please ask on gitter, on the mailing list or on twitter if you have any issue. And a big thank you for reporting any mistake, big or small, that I may have made in this release!