24 December 2014


This post presents the next major version of specs2, specs2 3.0:

  • what are the motivations for a major version?
  • what are the main benefits and changes?
  • when will it be available?

The motivations

I started working on this new version a bit more than one year ago now. I had lots of different reasons for giving specs2 a good face-lift.

The Open Source reason

specs2 has largely been the effort of a single person. This probably has some advantages, like the possibility to maintain some kind of vision for the project, but also lots of drawbacks, and quality is one of them.

As a programmer I have all sorts of shortcomings. I have always been amazed by other people taking a look at my code and spotting obvious deficiencies either big or small (for example @jedws introduced named threads to me, much easier for debugging!). I want to maximize the possibility that other people will jump in to fix and extend the library as necessary (and be able to go on holidays for 3 weeks without a laptop :-)). Improving the code base can only help other people review my implementation.

The Design reason

In specs2 I’ve had this vision of a flow of “specification fragments” that would get created, executed, and then reported, possibly with different reporters for various output formats. This is not as easy as it seems:

  • I want fragments to be executed concurrently while being printed in sequence
  • the fragments should also be displayed as soon as executed
  • I want to be able to recreate a view of the sequence of fragments as a tree to be displayed in IDEs like Eclipse and Intellij

This is all done and working in specs2 < 3.0 but in a very clumsy way, subverting Scalaz Reducers to maintain state and trying to compose reporters.

One of these reporters is an HTML reporter and I’ve always wanted to improve it, but given the state of the code one year ago this was not something I was eager to attempt. Luckily scalaz-stream version 0.2 came out in December 2013 and allowed me to try out new ideas.

The Functional Programming reason

The major difference between specs and specs2 was the use of immutable data structures and the avoidance of exceptions for control flow. Yet there were still lots of side-effects!

I hadn’t fully grasped how to use the IO monad to structure my program. Fortunately I happened to work with the terrific @markhibberd and he showed me how to use a proper monad stack not only to track IO effects but also to thread in configuration data and track errors.

The main benefits and changes

First of all, a happy maintainer! That goes without saying but my ability to fix bugs and add features will be improved a lot if I can better reason about the code :-).

Now for users…

For casual users there should be no changes! If you just use org.specs2.Specification or org.specs2.mutable.Specification with no other traits, you should not see any change (except in the User Guide, see below). For “advanced” users there are new benefits and API changes (in no particular order).

Refactored user guide

The existing User Guide has been divided into a lot more pages (around 60) and follows a pedagogical progression:

  • a Quick Start presenting a simple specification (and the mandatory link to the installation page)

  • some links from the Quick Start to the most common concepts: what is the structure of a Specification? Which matchers are available? How to run a Specification?

  • then, each other page focuses on one topic and provides additional links: "Now learn how to..." (the next thing you will probably need) and "If you want to know more" (a more advanced topic related to this one)

In addition to this refactoring there are some “tools” to help users find what they are looking for faster:

  • a search box

  • reference pages to summarize in one place some topics (matchers and run arguments for example)

  • a Troubleshooting page with the most common issues

You can have a first look at it here.

Generalized Reader pattern

One consequence of the “functional” re-engineering is that the environment is now available at different levels. By “environment”, I mean the org.specs2.specification.core.Env class which gives you access to all the components necessary to execute a specification, among which:

  • the command line arguments
  • the lineLogger used to log results to the console (from Sbt)
  • the systemLogger used to log issues when instantiating the Specification for example
  • the execution environment, containing a reference to the thread pool used to execute the examples
  • the statsRepository to get and store execution statistics
  • the fileSystem which mediates all interactions with the file system (to read and write files)

I doubt that you will ever need all of this, but parts of the environment can be useful. For example, you can define the structure of your Specification based on command line arguments:

class MySpec extends Specification with CommandLineArguments { def is(args: CommandLine) = s2"""
  Do something here with a command line parameter ${args.valueOr("parameter1", "not found")}
"""
}

The CommandLineArguments trait uses your definition of the def is(args: CommandLine): Fragments method to build a more general method Env => Fragments which is the internal representation of a Specification (fragments that depend on the environment). This means that now you don’t have to skip examples based on a condition (isDatabaseAvailable for example), you can simply remove them!
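To make the difference concrete, here is a minimal, specs2-free sketch of the idea (the CommandLine and Fragment types below are simplified stand-ins for illustration, not the real specs2 API): because the specification structure is a function of the command line, an example can simply not be created, instead of being created and then reported as skipped.

```scala
// Simplified stand-ins for the real specs2 types (illustration only,
// not the actual specs2 API)
case class CommandLine(values: Map[String, String]) {
  def boolOr(name: String, default: Boolean): Boolean =
    values.get(name).map(_ == "true").getOrElse(default)
}
case class Fragment(description: String)

// the specification structure is a *function* of the command line, so an
// example can simply not be created instead of being created then skipped
def fragments(args: CommandLine): List[Fragment] =
  List(Fragment("always present")) ++
    (if (args.boolOr("database", false)) List(Fragment("query the database")) else Nil)
```

Running without the database argument produces one fragment; with it, two.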

You can also use the environment, or part of it, to define examples:

class MySpec extends Specification { def is = s2"""

 Here are some examples using the environment.
 You can access
   the full environment                            $e1
   the command line arguments                      $e2
   the execution context to create a Scala future  $e3
   the executor service to create a Scalaz future  $e4
"""

  def e1 = { env: Env =>
    env.statisticsRepository.getStatistics(getClass.getName).runOption.flatten.foreach { stats =>
      println("the previous results for this specification are "+stats)
    }
    ok
  }

  def e2 = { args: CommandLine =>
    if (args.boolOr("doit", false)) success
    else skipped
  }

  def e3 = { implicit executionContext: ExecutionContext =>
    scala.concurrent.Future(1); ok
  }

  def e4 = { implicit executorService: ExecutorService =>
    scalaz.concurrent.Future(1); ok
  }
}

Better reporting framework

This paragraph is mostly relevant to people who want to extend specs2 with additional outputs. The reporting framework has been refactored around 4 concepts:

A Runner (for example the SbtRunner)

  • instantiates the specification class to execute
  • creates the execution environment (arguments, thread pool)
  • instantiates a Reporter
  • instantiates Printers and starts the execution

A Reporter

  • reads the previous execution statistics if necessary
  • selects the fragments to execute
  • executes the specification fragments
  • calls the printers for printing out the results
  • saves the execution statistics

A Printer

  • prepares the environment for printing
  • uses a Fold to print or to gather execution data. For example the TextPrinter prints results to the console as soon as they are available whereas the HtmlPrinter accumulates the execution data and writes the HTML pages at the end

A Fold

  • has a Sink[Task, (T, S)] (see scalaz-stream for the definition of a Sink) to perform side-effects (like writing to a file)
  • has a fold: (T, S) => S method to accumulate some state (to compute statistics for example, or create an index)
  • has an init: S element to initialize the state
  • has a last(s: S): Task[Unit] method to perform one last side-effect with the final state once all the fragments have been executed

It is unlikely that you will create a new Runner (unless you are building an Eclipse plugin, for example) but you can create custom reporters and printers by passing the reporter <classname> and printer <classname> options as arguments. Note also that Folds are composable, so if you need 2 outputs you can create a Printer that composes 2 folds into 1.
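A Fold can be sketched in plain Scala as follows. This is a simplified model of the concept described above, not the actual specs2 trait: the real one uses a scalaz-stream Sink[Task, (T, S)] and Task[Unit] for effects, while here plain functions stand in for them.

```scala
// A simplified model of the Fold concept; the real specs2 Fold uses a
// scalaz-stream Sink[Task, (T, S)] and Task[Unit] instead of plain functions
trait Fold[T, S] {
  def init: S               // initial state
  def fold: (T, S) => S     // accumulate state for each element
  def sink: (T, S) => Unit  // side-effect per element (stand-in for the Sink)
  def last(s: S): Unit      // final side-effect with the final state
}

// running a fold over a sequence: perform the side-effect with the current
// state, thread the new state through, then run the last action
def runFold[T, S](f: Fold[T, S], ts: Seq[T]): S = {
  val s = ts.foldLeft(f.init) { (state, t) => f.sink(t, state); f.fold(t, state) }
  f.last(s)
  s
}
```

Composing 2 folds into 1 then amounts to pairing their states and running both sinks for each element.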

The Html printer


The Html printer has been reworked to use Pandoc as a templating system and Markdown engine. I decided to move to Pandoc for several reasons:

  • Pandoc is one of the libraries officially endorsing the CommonMark format
  • I’ve had less corner cases with rendering mixed html/markdown with Pandoc than previously
  • Pandoc opens the possibility to render other markup languages than CommonMark, LaTeX for example

However this comes with a huge drawback: you need to have Pandoc installed as a command-line tool on your machine. If Pandoc is not installed, specs2 will use a default template renderer, but won’t render CommonMark.


I’ve extracted a specs2.html template (and a corresponding specs2.css stylesheet) and you can substitute another template (with the html.template option) if you want your html files to be displayed differently. This template uses the Pandoc template system, so it is pretty primitive, but it should still cover most cases.

Better API

The specs2 API has been split into a lot more traits to support various objectives:

  • support the new execution model with scalaz-stream

  • make it possible to separate the DSL methods from the core ones (see Lightweight spec)

  • offer a better Fragment API

Let’s start with the heart of specs2, the Fragment.

FragmentFactory methods

Advanced specs2 users need to tweak the creation of fragments. For example, when using a “template specification”:

abstract class DatabaseSpec extends Specification {
  override def map(fs: => Fragments): Fragments =
    step(startDb) ^ fs ^ step(closeDb)
}

In the DatabaseSpec you are using different methods to work with Fragments: the ^ method to append them and the step method to create a Step fragment. Those 2 methods are part of the Fragment API. Here is a list of the main changes compared to specs2 < 3.0:

  • first of all there is only one Fragment type (instead of Text, Step, Example,…). This type contains a Description and an Execution. By combining different types of Descriptions and Executions it is possible to recreate all the previous specs2 < 3.0 types

  • however you don’t need to create a Fragment by yourself, what you do is invoke the FragmentFactory methods: example, step, text,… This now unifies the notation between immutable and mutable specifications because in specs2 < 3.0 you would write step in a mutable specification and Step in an immutable one (Step is now deprecated)

  • there is no ExampleFactory trait anymore since it has been subsumed by methods on the FragmentFactory trait (so this will break code for people who were intercepting Example creation to inject additional behaviour)
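The unified model described above can be sketched like this. These are simplified stand-ins, not the real specs2 classes (the actual Description and Execution are much richer), but they show how the former Text/Example/Step types become mere combinations produced by factory methods:

```scala
// A simplified sketch of the unified Fragment model; the real specs2
// Description and Execution classes are much richer than these stand-ins
sealed trait Description
case class Text(text: String) extends Description
case object NoText extends Description

// None means "not executable" (pure text), Some(...) is a deferred result
case class Execution(run: Option[() => Boolean])

case class Fragment(description: Description, execution: Execution)

// the former Text/Example/Step types become factory methods combining
// different kinds of descriptions and executions
object FragmentFactory {
  def text(t: String): Fragment =
    Fragment(Text(t), Execution(None))
  def example(t: String)(r: => Boolean): Fragment =
    Fragment(Text(t), Execution(Some(() => r)))
  def step(r: => Boolean): Fragment =
    Fragment(NoText, Execution(Some(() => r)))
}
```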

Finally those “core” objects have been moved under the org.specs2.specification.core package, in order to restructure the org.specs2.specification package into:

  • core: Fragment, Description, SpecificationStructure

  • dsl: all the syntactic sugar FragmentsDsl, ExampleDsl, ActionDsl

  • process: the “processing” classes Selector, Executor, StatisticsRepository

  • create: traits to create the specification FragmentFactory, AutoExamples, S2StringContext (for s2 string interpolation)…

FragmentsDsl methods

When you want to assemble Fragments together you will need the FragmentsDsl trait (it is mixed into the Specification trait, so you don’t have to add it yourself).

The result of appending 2 Fragments is a Fragments object. The Fragments class has changed in specs2 3.0: it doesn’t hold a reference to the specification title and the specification arguments anymore; this is now the role of the SpecStructure. So in summary:

  • a Specification is a function Env => SpecStructure

  • a SpecStructure contains: a SpecHeader, some Arguments and Fragments

  • Fragments is a sequence of Fragment values (actually a scalaz-stream Process[Task, Fragment])
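This decomposition can be summarized with a few stand-in types (illustrative only; in specs2 the Fragments class wraps a scalaz-stream Process[Task, Fragment], and Env carries much more than the command line):

```scala
// Minimal stand-ins showing how the pieces fit together (illustration only)
case class SpecHeader(title: String)
case class Arguments(sequential: Boolean = false)
case class Fragment(description: String)
case class Fragments(fragments: List[Fragment])
case class SpecStructure(header: SpecHeader, arguments: Arguments, fragments: Fragments)
case class Env(commandLine: Map[String, String])

// a Specification is, conceptually, a function of the environment
type Specification = Env => SpecStructure

// for instance, the arguments can depend on the command line
val spec: Specification = env =>
  SpecStructure(
    SpecHeader("MySpec"),
    Arguments(sequential = env.commandLine.contains("sequential")),
    Fragments(List(Fragment("an example"))))
```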

The FragmentsDsl api allows you to combine almost everything into Fragments with the ^ operator:

  • a String and a Seq[Fragment]
  • 2 Fragments
  • a Fragments object and a String

One advantage of this fine-grained decomposition of the fragments API is that there is now a Spec lightweight trait.

Lightweight Spec trait

Compilation times can be a problem with Scala, and specs2 makes it worse by providing lots of implicit methods in a standard Specification to support various DSLs. In specs2 3.0 there is a Spec trait which contains a reduced number of implicits to:

  • create an s2 string for an “Acceptance Specification”
  • create should and in blocks in a “Unit Specification”
  • create expectations with must
  • add arguments to the specification (like sequential)

If you use that trait and you find yourself missing an implicit you will have to either:

  • use the Specification class instead

  • search specs2 for the trait or object providing the missing implicit. There is no magic recipe for this but the MustMatchers trait and the S2StringContext trait should bring most of the missing implicits into scope

It is possible that this trait will be adjusted to strike the right balance between expressivity and compile times but I hope it will remain pretty stable.


Durations

When specs2 started, the package scala.concurrent.duration didn’t exist. This is why there was a Duration type in specs2 < 3.0 and a TimeConversions trait. Of course this introduced annoying collisions with the implicits coming from scala.concurrent.duration when that package came around.

There is no reason to go on using specs2 Durations anymore, so you can now use the standard Scala durations everywhere specs2 expects a Duration.
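For example, the standard conversions from scala.concurrent.duration now work directly:

```scala
import scala.concurrent.duration._

// standard Scala durations can be passed wherever specs2 expects a Duration
val timeout: FiniteDuration = 100.millis
val total:   FiniteDuration = 2.seconds + 500.millis
```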


Contexts

Context management has been slowly evolving in specs2. In specs2 3.0 we end up with the following traits:

  • BeforeAll do something before all the examples (you had to use a Step in specs2 < 3.0)
  • BeforeEach do something before each example (was BeforeExample in specs2 < 3.0)
  • AfterEach do something after each example (was AfterExample in specs2 < 3.0)
  • BeforeAfterEach do something before/after each example (was BeforeAfterExample in specs2 < 3.0)
  • ForEach[T] provide an element of type T (a “fixture”) to each example (was FixtureExample[T] in specs2 < 3.0)
  • AfterAll do something after all the examples (you had to use a Step in specs2 < 3.0)
  • BeforeAfterAll do something before/after all the examples (you had to use a Step in specs2 < 3.0)

There are some other cool things you can do. For example, you can set a time-out for all examples based on a command line parameter:

trait ExamplesTimeout extends EachContext with MustMatchers with TerminationMatchers {

  def context: Env => Context = { env: Env =>
    val timeout = env.arguments.commandLine.intOr("timeout", 1000 * 60).millis
    upTo(timeout)(env.executorService)
  }

  def upTo(to: Duration)(implicit es: ExecutorService) = new Around {
    def around[T : AsResult](t: =>T) = {
      lazy val result = t

      val termination =
        result must terminate(retries = 10,
                              sleep = (to.toMillis / 10).millis).orSkip((ko: String) => "TIMEOUT: "+to)

      if (!termination.toResult.isSkipped) AsResult(result)
      else termination.toResult
    }
  }
}
The ExamplesTimeout trait extends EachContext which is a generalization of the xxxEach traits. With the EachContext trait you get access to the environment to define the behaviour used to “decorate” each example. So, in that case, we use a timeout command line parameter to create an Around context that will time out each example if necessary. You can also note that this Around context uses the executorService passed by the environment, so you don’t have to worry about resource management for your Specification.

Included specifications

As I was reworking the implementation of specs2 I also looked for ways to simplify its internal model. In specs2 < 3.0 you can nest a specification inside another one. This adds some significant complexity because a nested specification has its own arguments and its own title. For example, during the execution of the inner specification we need to be careful to override the outer specification arguments with the inner ones.

I decided to let go of this functionality in favor of a view of specifications as “referencing” each other, with 2 types of references:

  • “link” reference
  • “see” reference

The idea is to model dependency relationships with “link” and weaker relationships with “see” (when you just want to mention that some information is present in another specification).

Then there are 2 modes of execution:

  • the default one
  • the “all” mode

By default when a specification is executed, the Runner will try to display the status of “linked” specifications but not “see” specifications. If you use the all argument then we collect all the “linked” specifications transitively and run them respecting dependencies (if s1 has a link to s2, then s2 is executed first).

This is particularly important for HTML reporting when the structure of “link” references is used to produce a table of contents and “see” references are merely used to display HTML links.
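The ordering in the "all" mode can be sketched as a depth-first traversal of the "link" graph. This is a hypothetical model, not the actual specs2 implementation: it just shows that every linked specification is emitted before the specification referencing it.

```scala
// A hypothetical sketch (not the actual specs2 code) of the "all" mode
// ordering: links maps a specification name to the names it links to
def executionOrder(start: String, links: Map[String, List[String]]): List[String] = {
  def go(s: String, seen: List[String]): List[String] =
    if (seen.contains(s)) seen  // already scheduled (also guards against cycles)
    else links.getOrElse(s, Nil).foldLeft(seen)((acc, dep) => go(dep, acc)) :+ s
  go(start, Nil)
}
```

So if s1 links to s2 and s2 links to s3, the execution order is s3, s2, s1.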

Online specifications

I find this exciting even if I don’t know if I will ever use this feature! (it has been requested in the past though).

In specs2 < 3.0 there is a clear distinction between the “creation” time and the “execution” time of a specification. Once you have defined your examples you cannot add new ones based on the execution results. But wait! This is more or less the defining property of a Monad: “produce an action based on the value returned by another action”. Since specs2 3.0 uses a scalaz-stream Process under the covers, which is a Monad, it is now possible to do the following:

class WikipediaBddSpec extends Specification with Online { def is = s2"""

  All the pages mentioning the term BDD must contain a reference to specs2 $e1
"""

  def e1 = {
    val pages = Wikipedia.getPages("BDD")

    // if the pages are about specs2, add more examples to check the links
    (pages must contain((_: Page) must mention("specs2"))) continueWith pagesSpec(pages)
  }

  /** create one example per linked page */
  def pagesSpec(pages: Seq[Page]): Fragments = {
    val specs2Links = pages.flatMap(_.getLinks).filter(_.contains("specs2"))

    s2"""
    The specs2 links must be active
    ${Fragments.foreach(specs2Links)(active)}
    """
  }

  def active(link: HtmlLink) =
    s2"""The page at ${link.url} must be active ${ link must beActive }"""
}

The specification above is “dynamic” in the sense that it creates more examples based on the tested data. All Wikipedia pages for BDD must mention “specs2” and for each linked page (which we can’t know in advance) we create a new example specifying that the link must be active.


ScalaCheck

The ScalaCheck trait has been reworked and extended to provide the following features:

  • you can specify Arbitrary[T], Gen[T], Shrink[T], T => Pretty instances at the property level (for any or all of the arguments)
  • you can easily collect argument values by appending .collectXXX to the property (XXX depends on the argument you want to collect: collect1 for the first, collectAll for all)
  • you can override default parameters from the command line. For example pass scalacheck.mintestsok 10000
  • you can set individual before and after actions to be executed around each property execution, to do some setup/teardown

Also, specs2 was previously doing some message reformatting on top of ScalaCheck but now the original ScalaCheck messages are preserved, to keep the consistency between the 2 libraries.

Note: the ScalaCheck trait stays in the org.specs2 package but all the traits it depends on now live in the org.specs2.scalacheck package.

Bits and pieces

This section is about various small things which have changed with specs2 3.0:

Implicit context

There is no more implicit context when you use the .await method to match futures. This means that you have to either import the context or use a function ExecutionContext => Result to define your examples:

An example using an ExecutionContext $e1

  def e1 = { implicit ec: ExecutionContext =>
    // use the execution context here
    ok
  }

Foreach methods

It is now possible to create several examples or results with a foreach method which does not return Unit:

// create several examples
Fragment.foreach(1 to 10)(i => "example "+i ! ok)

// create several examples with breaks in between
Fragments.foreach(1 to 10)(i => ("example "+i ! ok) ^ br)

// create several results for a sequence of numbers
Result.foreach(1 to 10)(i => i must_== i)

Removed syntax

  • (action: Any).before to create a “before” context is removed (same thing for after)
  • function.forAll to create a Prop from a function is removed

Dependencies

  • specs2 3.0 uses ScalaCheck 1.12.1
  • you need to use a recent version of sbt, like 0.13.7
  • you need to upgrade to scalaz-specs2 0.4.0-SNAPSHOT for compatibility

Can I use it?

specs2 3.0 is now available as specs2-core-3.0-M2 on Sonatype. I am making it available for early testing and feedback. Please use the mailing-list or the github issues to ask questions and tell me if anything goes wrong with this new version. I will incorporate your comments in this blog post, which will then serve as a migration guide.

Special thanks

  • to Clinton Freeman who started the re-design of the specs2 home page more than one year ago and sparked this whole refactoring
  • to Pavel Chlupacek and Frank Thomas for patiently answering many of my questions about scalaz-stream
  • to Paul Chiusano for starting scalaz-stream in the first place!
  • to Mark Hibberd for his guidance with functional programming

06 March 2014

Streaming with previous and next

The scalaz-stream library is very attractive but it might feel unfamiliar because it is not your standard collection library.

This short post shows how to produce a stream of elements from another stream so that we get a triplet with: the previous element, the current element, the next element.

With Scala collections

With regular Scala collections, this is not too hard. We first create a list of all the previous elements. We create them as options because there will not be a previous element for the first element of the list. Then we create a list of next elements (also a list of options) and we zip everything with the input list:

def withPreviousAndNext[T] = (list: List[T]) => {
  val previousElements = None +: list.dropRight(1).map(Some(_))
  val nextElements     = list.drop(1).map(Some(_)) :+ None

  // plus some flattening of the triplet
  (previousElements zip list zip nextElements) map { case ((a, b), c) => (a, b, c) }
}
withPreviousAndNext(List(1, 2, 3))

> List((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

And streams

The code above can be translated pretty straightforwardly to scalaz processes:

def withPreviousAndNext[F[_], T] = (p: Process[F, T]) => {
  val previousElements = emit(None) fby p.map(Some(_))
  val nextElements     = p.drop(1).map(Some(_)) fby emit(None)

  (previousElements zip p zip nextElements).map { case ((a, b), c) => (a, b, c) }
}

val p1 = emitAll((1 to 3).toSeq).toSource
withPreviousAndNext(p1).runLog.run

> Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

However what we generally want with streams are combinators which we can pipe onto a given Process. We want to write:

def withPreviousAndNext[T]: Process1[T, T] = ???

val p1 = emitAll((1 to 3).toSeq).toSource
// produces the stream of (previous, current, next)
p1 |> withPreviousAndNext

How can we write this?

As a combinator

The trick is to use recursion to keep state, and this is actually how many of the process1 combinators in the library are written. Let's see how this works on a simpler example. What happens if we just want a stream where elements are zipped with their previous value? Here is what we can write:

def withPrevious[T]: Process1[T, (Option[T], T)] = {

  def go(previous: Option[T]): Process1[T, (Option[T], T)] =
    await1[T].flatMap { current =>
      emit((previous, current)) fby go(Some(current))
    }

  go(None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withPrevious).runLog.run

> Vector((None,1), (Some(1),2), (Some(2),3))

Inside the withPrevious method we recursively call go with the state we need to track. In this case we want to keep track of each previous element (and the first call is with None because there is no previous element for the first element of the stream). Then go awaits a new element. Each time there is a new element, we emit the pair (previous, current), then recursively call go, which is again going to wait for the next element, knowing that the previous element is now current.

We can do something similar, but a bit more complex for withNext:

def withNext[T]: Process1[T, (T, Option[T])] = {
  def go(current: Option[T]): Process1[T, (T, Option[T])] =
    await1[T].flatMap { next =>
      current match {
        // accumulate the first element
        case None    => go(Some(next))
        // if we have a current element, emit it with the next
        // but when there's no more next, emit it with None
        case Some(c) => (emit((c, Some(next))) fby go(Some(next))).orElse(emit((c, None)))
      }
    }
  go(None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withNext).runLog.run

> Vector((1,Some(2)), (2,Some(3)), (3,None))

Here, we start by accumulating the first element of the stream, and then, when we get the next one, we emit both of them. And we make a recursive call remembering what is now the current element. But the process we return in flatMap has an orElse clause. It says "by the way, if you don't have any more elements (no more next), just emit current and None".

Now with both withPrevious and withNext we can create a withPreviousAndNext process:

def withPreviousAndNext[T]: Process1[T, (Option[T], T, Option[T])] = {
  def go(previous: Option[T], current: Option[T]): Process1[T, (Option[T], T, Option[T])] =
    await1[T].flatMap { next =>
      (current match {
        // accumulate the first element
        case None    => go(previous, Some(next))
        // emit the triplet and make current the new previous
        case Some(c) => emit((previous, c, Some(next))) fby go(Some(c), Some(next))
      }).orElse(emit((current, next, None)))
    }
  go(None, None)
}

val p1 = emitAll((1 to 3).toSeq).toSource
(p1 |> withPreviousAndNext).runLog.run

> Vector((None,1,Some(2)), (Some(1),2,Some(3)), (Some(2),3,None))

The code is pretty similar but this time we keep track of both the "previous" element and the "current" one.

emit(last paragraph)

I hope this will help beginners like me get started with scalaz-stream, and I'd be happy if scalaz-stream experts out there leave comments if there's anything which can be improved (is there an effective way to combine withPrevious and withNext to get withPreviousAndNext?).

I finally need to add that, in order to get proper performance/side-effect control for the withNext and withPreviousAndNext processes, you need to use the lazy branch of scalaz-stream. It contains a fix for orElse which prevents it from being evaluated more than necessary.