The speaker drew an important distinction among testing tools: most of them are "mechanical tools".
- Some are just a way to store data and support a process. They are nothing more than a specialized database.
- Others deal with executing test cases on different machines, in different environments. They are nothing more than a specialized batch file.
- The last ones specialize in interacting with GUI components to insert and retrieve values. They are nothing more than a specialized keyboard-and-mouse simulator.
- Test management tools must deal effectively with concurrent modifications of shared data, provide good editing and search capabilities, and support every possible workflow.
- Distributed test execution tools must cope with different hardware and operating systems: re-executing the same C script remotely on two different systems, for instance, is not straightforward.
- Graphical test tools must allow test cases to be created independently of the application under development, with a flexible, evolving mapping between the test cases and the graphical components.
But none of these tools helps you to:
- get objective information about your actual coverage
- refactor test sequences and extract the common parts
- propagate modifications of the specifications to the test plan
- analyse test dependencies
- generate test cases from the specifications
- propose test strategies from the system typology
- combine components test cases into system test cases
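To make the last point concrete, here is a minimal sketch of the simplest thing such a tool could do: combine component test cases exhaustively into system test cases by picking one case per component. The component names and case names below are invented for illustration; a real tool would of course be smarter than a plain cartesian product.

```python
from itertools import product

# Hypothetical component test cases: each component has a few
# independently written cases; a system-level case exercises one
# case per component.
component_cases = {
    "auth":    ["valid_login", "expired_password"],
    "billing": ["new_invoice", "refund"],
    "export":  ["csv", "pdf"],
}

def system_cases(components):
    """Combine component test cases exhaustively: every way of
    picking one case per component becomes a system-level case."""
    names = sorted(components)
    for combo in product(*(components[n] for n in names)):
        yield dict(zip(names, combo))

suite = list(system_cases(component_cases))
# 2 * 2 * 2 = 8 system-level cases
```

Even this toy version makes the scale problem obvious: the suite size is the product of the per-component counts, which is exactly the kind of exponential growth a human cannot manage by hand.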
All these tools would be called "analytical tools". They support the human mind where it reaches its limits:
- exploration of an exponential number of possibilities
- analysis of complex relationships
- metrics-based decisions
- exhaustive application of systematic rules
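The first point, exploring an exponential number of possibilities, is exactly where such a tool earns its keep. As one possible illustration (not something from the talk itself), here is a greedy pairwise-covering sketch: instead of running the full cartesian product of parameter values, it selects a small suite that still covers every pair of values across any two parameters. The parameter domains are invented.

```python
from itertools import combinations, product

def pairwise_suite(domains):
    """Greedy heuristic: pick tests from the full cartesian product
    until every pair of values (across any two parameters) is covered.
    Not minimal, but usually far smaller than the full product."""
    candidates = list(product(*domains))
    # All (param_i, value_a, param_j, value_b) pairs that must appear
    # in at least one selected test.
    uncovered = {(i, a, j, b)
                 for i, j in combinations(range(len(domains)), 2)
                 for a in domains[i] for b in domains[j]}
    suite = []
    while uncovered:
        def gain(test):
            # Number of still-uncovered pairs this test would cover.
            return sum((i, test[i], j, test[j]) in uncovered
                       for i, j in combinations(range(len(test)), 2))
        best = max(candidates, key=gain)
        suite.append(best)
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

# Three parameters, two values each: 2**3 = 8 exhaustive tests,
# but pairwise coverage needs fewer.
domains = [["win", "mac"], ["firefox", "chrome"], ["en", "fr"]]
suite = pairwise_suite(domains)
```

This is the "metrics-based decision" made mechanical: the tool, not the tester, decides which combinations are redundant, and the reduction grows dramatically with the number of parameters.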
The "low-hanging fruit" rule
Why do tool vendors focus on mechanical tools rather than analytical ones? The answer is the "low-hanging fruit" rule.
Most of the time, mechanical tools are easy to implement and bring an immediate return on investment. This is why big companies such as Mercury or Borland focus on "process" tools and their integration. Moreover, those tools appeal to the average manager: they tend to give the impression that everything is under control.
You can use TestDirector and feel that you are managing a fine test plan covering your requirements. But how do you know you have written the right test cases? How do you know you have the minimum number of them for the best coverage?
By the way, there is a similar argument about CMM process improvement: you may hold reviews and yet run them terribly, you may have source control configured and yet create unnecessary branches, and so on.
In the end, one of the talk's conclusions was that innovative companies should be supported, because the greater benefit lies in these specialized analytical tasks. The answers won't come from the big corporations.
Mercury and co. will provide mechanical tools to support your mechanics; innovative companies will provide analytical tools to support your brain!