Frequently Asked Questions about TestWeaver

Many users ask us to compare TestWeaver with various existing testing methods and tools. Here we address some of these questions. This is not a comprehensive analysis; the subject is very broad and very deep. If you have comments or insights related to the discussed topics, please send us an email at:

  1. What analysis tasks can be solved with TestWeaver?

    TestWeaver is a tool for the testing and validation of systems. The reporting component alone can be used for measurement evaluation. However, the applicability of TestWeaver goes significantly beyond that of traditional testing tools. Further supported tasks include tolerance analysis, failure analysis, and safety and robustness analysis. Read more...

    TestWeaver does more than just check for the violation of certain properties in simulated scenarios: the incorporated reactive scenario generation heuristics help to automate state reachability studies. A versatile report generator helps to visualize, with different abstractions and perspectives, the qualitative states reached by a system. As such, the applicability of TestWeaver goes far beyond that of traditional testing tools.

    TestWeaver can be used to search for and find system weaknesses: for instance, scenarios where certain quality criteria fall below acceptable limits, or scenarios with high energy losses, vibrations, spurious control sequences, or reduced comfort. For this purpose, the pure functional simulation of the system has to be augmented with quantitative or qualitative observers of the supervised quality criteria. TestWeaver is able to document both the coverage of the source code and the coverage of the system states that have been reached.

    TestWeaver can also be used to analyze the effects of parameter tolerance deviations. It can be used to assess the effects of failures, such as failing sensors and actuators. This in turn can be used to assess the safety and robustness of systems.
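    Tolerance analysis of this kind can be pictured with a small, self-contained sketch (independent of TestWeaver's actual interface and instruments; the plant, gain and limits below are invented for illustration): a controller gain is varied across its tolerance band, and a quality criterion - the overshoot of the step response - is evaluated for each sampled deviation.

```python
import random

def step_overshoot(kp, damping=0.5, dt=0.01, steps=2000):
    """Toy closed-loop model: P-controlled second-order plant
    x'' = kp*(1 - x) - damping*x'. Returns the peak of the unit-step
    response (a peak above 1.0 means overshoot)."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        v += dt * (kp * (1.0 - x) - damping * v)   # velocity update
        x += dt * v                                # position update
        peak = max(peak, x)
    return peak

# Tolerance analysis: sweep the gain across a +/-20% tolerance band
# and record the worst-case value of the quality criterion.
random.seed(0)
nominal_kp = 1.5
worst = 0.0
for _ in range(50):
    kp = nominal_kp * random.uniform(0.8, 1.2)
    worst = max(worst, step_overshoot(kp))
print(f"worst-case overshoot over the tolerance band: {worst:.2f}")
```

    The same sweep structure applies to failure analysis: instead of perturbing a parameter, one would inject a failure mode (e.g. a stuck sensor value) into each simulated run.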

  2. Is TestWeaver a tool restricted to embedded software / systems?

    No.

    TestWeaver is actually not restricted in any way to embedded systems. Almost anything that can be "simulated" can be connected to TestWeaver - we offer, for instance, interfacing libraries for C, Python, Modelica/Dymola, MATLAB/Simulink. However, currently we do not have any specific tool support for testing e.g. GUIs, Web-Applications, installers, and other aspects that are not common in the embedded software domain. If you can integrate our C or Python interface in your test frame, you are set.

  3. How does the method of TestWeaver compare with model-checking and other formal verification methods (e.g. as used by EmbeddedValidator, SPIN and others)?

    Verification techniques can mathematically prove that a model does not have certain undesired properties. Testing tools like TestWeaver can be used to find problems, but not to prove their absence. On the other hand, verification is limited to discrete models of limited complexity. TestWeaver does not have these limitations. There are also other severe limitations of formal verification that do not apply to TestWeaver. Read more...

    If a certain property can be proved using model-checking or other formal verification methods, then verification delivers a more complete answer than testing. However, many significant properties of systems that include physical components and processes cannot be checked by formal verification at all, and one should be aware of the limitations of formal verification. Some of them are listed below:

    • Scope of analysis: narrow vs. broad
      This is the most serious problem faced by today's formal analysis methods, and it results from the other limitations discussed below, e.g. limited expressiveness and limited complexity. The effect is that only small parts of the complete system can be formally analyzed, so the analysis covers neither the goals of the complete system nor its interaction with the physical world. Certain formal properties of individual module models can indeed be proved, but it cannot be proved that the system meets its goals. Usually it cannot be proven that the system is safe, much less that it has good quality or meets the complete specification. TestWeaver and other simulation-based methods do not have this limitation; simulations can address much larger systems, as well as complex interactions with the physical world.
    • Specification language: limited expressiveness
      There are significant limitations on what can be expressed and verified with formal methods: usually only discrete models such as state machines, or static model properties. No tool is able to analyze a mixture of complex discrete and continuous models, i.e. models including (partial) differential equations. For testing methods this is usually not an issue: if it can be simulated, it can be tested.
    • Specification language: one language vs. mix of languages
      Formal techniques apply to models and model properties that have to be specified in a certain restricted language. The complete model has to be specified in that language in order to be analyzed. If the scope of the analysis is a single module (or a few modules) isolated from their environment, this might not be a problem. For complete systems it is impossible or impracticable. TestWeaver is not bound to any specification language - it controls and evaluates simulation runs. Modules can be specified and implemented in a variety of languages: C, Python, MATLAB/Simulink, Modelica. If an executable can be produced and linked to TestWeaver, there is virtually no limitation imposed by TestWeaver. Modules specified in different languages can be mixed in one simulation.
    • Specification language: check model vs. check implementation
      Formal analysis usually applies to models, not to implementations, and one should consider the gap between the two. Being based on simulation, including software-in-the-loop or even hardware-in-the-loop simulation, TestWeaver is much closer to checking properties of implementations.
    • Size and complexity of the model
      The existing verification methods and tools are competing to continuously push the limits on the size of the models that can be formally analyzed. Still, model size remains a serious limitation. TestWeaver imposes no hard limit on the size of the module sources that can be tested; for example, customers use TestWeaver to check systems with more than 1 million lines of C code.
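    The expressiveness limitation above is easy to see on even a tiny hybrid model: a two-state hysteresis controller (discrete) coupled to a thermal plant described by a differential equation (continuous). Purely discrete formal models cannot represent the plant equation, while a simulation handles both parts naturally. The sketch below is purely illustrative and not tied to any particular tool; all names and constants are invented.

```python
def simulate_thermostat(steps=600, dt=0.1):
    """Hybrid model: discrete two-state controller (HEATING / IDLE)
    coupled with a continuous thermal plant
    dT/dt = power - 0.2 * (T - T_ambient)."""
    temp, mode = 18.0, "HEATING"
    switches = 0
    for _ in range(steps):
        # discrete part: hysteresis controller with thresholds 20 / 22 C
        if mode == "HEATING" and temp >= 22.0:
            mode, switches = "IDLE", switches + 1
        elif mode == "IDLE" and temp <= 20.0:
            mode, switches = "HEATING", switches + 1
        # continuous part: explicit Euler step of the plant ODE
        power = 2.0 if mode == "HEATING" else 0.0
        temp += dt * (power - 0.2 * (temp - 15.0))
    return temp, switches

temp, switches = simulate_thermostat()
print(f"final temperature {temp:.1f} C after {switches} mode switches")
```

    A simulation-driven tool can exercise and observe both the discrete mode switches and the continuous temperature trajectory, which is exactly the mixture that purely discrete verification models leave out.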
  4. How does TestWeaver compare to tools doing static code analysis, such as PolySpace, Coverity and others?

    As with model-checking and other formal methods: the methods have differing strengths, weaknesses and application scopes. No method can completely replace the others. In complex projects TestWeaver can find further bugs even after static code analysis has been performed. Moreover, TestWeaver can help find bugs that cannot be found at all with static code analysis. Read more...

    Static code analysis and symbolic code evaluation can find certain software problems, such as divisions by zero, access violations, array bound violations and others. Theoretically, these methods can guarantee that the corrected code will not contain any such problems afterwards.

    In practice this is not always the case. One reason, which we will not discuss further, is the limited size of the source code that can be analyzed. Another reason is more subtle and quite critical for embedded software: the physical system that is controlled by the software is not part of the analysis. The sensors, actuators and bus communication are seen as "independent" inputs and outputs. Because it is nearly impossible to specify good correlations among the values of these signals, they often provoke many "border cases" (e.g. those marked "orange" in PolySpace), i.e. cases where the analysis tool cannot decide whether they cause a real problem in operation or not. A human has to check these cases, and, due to the complexity, the analyzed classes of bugs are often not completely eliminated. Similar effects can be caused by configuration and calibration parameters that are not part of the analyzed code, but are flashed at some later stage from configuration databases. TestWeaver can help to find the bugs that have not been found by other methods. Conversely, TestWeaver cannot guarantee that it will always discover all of the bugs that static code analysis can identify.
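    The "uncorrelated inputs" effect can be illustrated with a toy function (the function, signal names and physical coupling below are invented for illustration): an analyzer that treats both sensor values as independent full-range inputs must warn about the division, while in closed-loop operation the plant keeps the denominator away from zero.

```python
def pressure_ratio(p_up, p_down):
    """Toy ECU function. Treating p_up and p_down as independent inputs,
    p_up - p_down can be zero, so a static analyzer must flag a possible
    division by zero here. Whether it can really occur in operation
    depends on the physical coupling between the two sensors."""
    return p_up / (p_up - p_down)

# In a closed-loop simulation, the plant supplies correlated values:
# here, downstream pressure is always a fixed fraction of upstream
# pressure, so the denominator never vanishes for p_up > 0.
for p_up in (1.0, 2.5, 5.0):
    p_down = 0.8 * p_up          # coupling the static analyzer never sees
    print(pressure_ratio(p_up, p_down))
```

    Simulating against a plant model resolves the "border case" one way or the other, whereas the isolated code analysis can only report it for manual review.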

    However, TestWeaver can find bugs that cannot be found at all with static code analysis. All the problems that manifest themselves in the interaction between the control system and its environment - such as non-converging controllers, oscillations, bad state estimation, overheating of physical components, or bad fault diagnosis and failure reaction - cannot be addressed by static code analysis, but can be detected by TestWeaver.

  5. How does TestWeaver compare to model-based test generation tools, such as Embedded Tester, Conformiq Qtronic, Rhapsody Automated TestGenerator and others?

    Like formal verification, model-based test generation (MBTG) is based on a model of the system formulated in a restricted specification language, usually restricted to discrete models. TestWeaver does not need a complete model, since it relies on a simulation of the system. Moreover, TestWeaver is not restricted to Simulink, or to any particular specification language, nor is it restricted to the analysis of discrete phenomena. MBTG is useful for analyzing the properties of discrete models (and of software in isolation), but it is less useful for analyzing the properties of systems that include physical components and processes characterized by differential equations. Read more...

    Similar to formal verification methods, a model of the system (or of its usage) has to be developed in a specific language. Often the specification languages are variants of state automata. Almost always they describe discrete models, i.e. models that do not include (partial) differential equations. As such, the tests focus almost exclusively on properties of the controllers and neglect the interaction with the physical world and the hidden states of the hydraulic or mechanical systems that are controlled. The test generation is usually done off-line with certain coverage goals, such as state or transition reachability in state machines.

    TestWeaver does not require the development of (formal) models: modules that are only available in compiled form can also be analyzed, or included in the analysis. In particular, the complex interaction of the software with the controlled physical subsystems can also be analyzed by simulation.

    By having access to the source of the models, MBTG has good information about the way "decisions are taken" inside the model. In TestWeaver this information is not directly available; it is compensated by the reactive component of TestWeaver, which analyzes the results of past simulations in order to guide the scenarios that are simulated next. Additional knowledge about the model, if available, can be supplied to TestWeaver in the instrumentation and in the experiment specification.
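    The reactive idea can be sketched in a few lines. The following is a greatly simplified novelty-search heuristic, not TestWeaver's actual algorithm, and the toy system under test is invented: scenarios whose simulation reaches a previously unseen qualitative state are kept and mutated further.

```python
import random

def run_scenario(inputs):
    """Toy SUT: integrate a saturated accumulator driven by the inputs
    and report a coarse qualitative state (the 'reporter' abstraction)."""
    level = 0.0
    for u in inputs:
        level = max(0.0, min(10.0, level + u))
    return round(level)          # qualitative state: an integer bucket

random.seed(1)
seen_states = set()
frontier = [[random.choice((-1.0, 1.0)) for _ in range(5)]]
for _ in range(200):
    base = random.choice(frontier)
    # mutate a promising past scenario instead of starting from scratch
    candidate = [u if random.random() < 0.7 else random.choice((-1.0, 1.0))
                 for u in base]
    state = run_scenario(candidate)
    if state not in seen_states:     # reward scenarios reaching new states
        seen_states.add(state)
        frontier.append(candidate)

print(f"reached {len(seen_states)} distinct qualitative states")
```

    The feedback loop - simulate, observe the reached state, prefer inputs that reach something new - is what replaces the white-box "decision" information that MBTG reads directly from the model source.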

  6. How does TestWeaver compare to test pattern / test vector generation tools?

    TestWeaver is not restricted to combinatorial functions. The scenarios generated by TestWeaver comprise time-varying signals and are influenced by the dynamic reaction of the system.

  7. Is TestWeaver a tool for module test or one for system test?

    TestWeaver can be used for both module and system tests. While many good testing methods compete in the area of module tests, for large and complex systems no other test and validation method seems to achieve a cost-benefit ratio nearly comparable to that achieved by TestWeaver. Read more...

    Until now, most of our customers have used TestWeaver for system tests. One reason is that for module tests there are many other competing test automation tools that can be - and have been - used with good results, for instance tools that use test scripts. Starting with version 2.0, TestWeaver also integrates classical test automation methods, for instance test scripts and/or interactive recording and replay of scenarios. Classical test automation methods fail to scale for complex systems: the overhead for test specification, implementation and maintenance becomes overwhelming. With TestWeaver you can mix the classical test automation methods with the intelligent test generation methods, thus maximizing the advantages offered by both according to your particular application domain.

  8. How does TestWeaver compare to test automation tools? For instance, tools based on TTCN-3, Python, other scripting languages, or state automata, such as TPT?

    Irrespective of which specification language is used, for all classical test automation tools the test scenarios have to be specified manually. The big advantage of TestWeaver is that it can work without manually specified test scenarios. Read more...

    Test automation tools allow the user to specify a set of tests using a test specification language, such as Python, state automata or TTCN-3. From this specification, test implementations for different platforms can be generated, for instance for HiL or for MATLAB/Simulink. Each such test describes a unique control sequence for the input signals and one or several test objectives that have to be evaluated to decide whether the test passed or failed. This testing method, based on manually specified test scenarios, does not scale well with increasing system complexity: there are too many scenarios and cases that have to be tested in a short time with limited project resources.

    The distinguishing feature of TestWeaver is that it can work without manually specified test scenarios. TestWeaver can generate thousands of high-quality test scenarios on its own, without human interaction. For this, TestWeaver requires only a compact specification of the domains of the input signals and of the most relevant signals that characterize the system state.

    The more complex a system is, the better the systematic generative approach of TestWeaver compares, in terms of cost-benefit ratio, with manually specified scenarios. For complex systems it becomes nearly impossible to formulate all relevant tests under all operating conditions by hand.

  9. What kind of problems can be found with TestWeaver? How much specification effort is required to find these problems?

    No specification effort: divisions by zero, access violations, infinite loops, non-determinism, and others. Low effort: expected min-max boundary violations, false fault detection, unintended signal oscillations, discrepancies between model and implementation, and others. More effort: system-specific quality observers. Read more...

    There are many kinds of problems that can be found with TestWeaver. Some are clearly located in the software or in a specific module; others show only at the system level and cannot easily be associated with any single function or module. Some problems require more specification effort in order to be detected, such as the implementation of quality observers that assess system-specific quality aspects. Here is a list of the most common problems that can be found with TestWeaver, ordered by the associated specification effort.

    • No specification effort: infinite loops, non-determinism, divisions by zero, access violations, as well as other faults that cause a crash of the simulation process.
    • Low effort:
      • expected min-max boundary violations for controller and plant model signals, for instance: too high temperatures, too high or too low pressures, engine overspeed or engine stall, violation of the min-max boundaries declared in the ECU A2L databases
      • false fault detection, e.g. when simulating nominal behavior
      • discrepancies between models (MIL) and implementations (SIL), discrepancies between differing versions of the same module
      • discrepancies between the estimated plant state in the controller and the plant state in the simulation, etc.
      • too long transitions between certain system states, for instance long shifts in an automatic transmission
      • certain spurious control sequences: controllers oscillating between operating conditions, repeated closing and opening of valves, repeated up and down shifts in an automatic transmission, etc.
    • More effort: complex system specific quality observers, for instance bad shift quality for automatic transmissions.
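    Low-effort observers like those listed above often amount to simple checks evaluated alongside the simulation. A minimal, tool-neutral sketch - the signal names, limits and traces are invented for illustration:

```python
def check_min_max(trace, lo, hi):
    """Min-max boundary observer: report every sample outside [lo, hi]."""
    return [(i, v) for i, v in enumerate(trace) if not lo <= v <= hi]

def count_toggles(trace):
    """Spurious-control observer: count state flips, e.g. a valve
    repeatedly opening and closing within one scenario."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a != b)

pressure = [1.2, 1.8, 2.6, 3.1, 2.9, 0.4]   # bar, invented trace
valve    = [0, 1, 0, 1, 1, 0]                # open/closed, invented trace

violations = check_min_max(pressure, lo=0.5, hi=3.0)
toggles = count_toggles(valve)
print(f"{len(violations)} boundary violations, {toggles} valve toggles")
```

    System-specific quality observers (the "more effort" category) follow the same pattern, but encode domain knowledge, e.g. a model of acceptable shift quality for an automatic transmission.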
  10. How can one measure the coverage of the tests generated by TestWeaver?

    TestWeaver can display the system states reached by the tests in various overview tables and at different levels of abstraction. When connected to source coverage measurement tools, such as Testwell CTC++ or gcov, one can also inspect the source code coverage of software modules.

  11. Can the user influence the focus of an experiment?

    Yes. The user can specify constraints and coverage goals to set the focus of an experiment.

  12. Can TestWeaver be used to generate tests for MIL, SIL and HIL?

    TestWeaver can be connected to MIL, SIL and HIL simulation platforms.

  13. How many TestWeaver instruments (specifying the connection with the SUT) are used in a typical complex application?

    An example from complex automatic transmission controllers: about 10-15 choosers for input signals and about 40-60 reporters for system states and quality observers.

  14. How fast should the simulation be in order to be useful for the analysis with TestWeaver?

    Often, having the tests run overnight or over the weekend is considered reasonable. For such time frames, it is recommended to have simulations that run close to real time on average. Slower simulations can be compensated to some degree by running the tests on several PCs in parallel.

  15. What is the running time relation between interpreted models and compiled models, for instance, interpreted MATLAB/Simulink vs. models compiled with Real Time Workshop?

    Interpreted models often run very slowly, e.g. 100 times slower than real time, depending on the complexity of the model. We have seen speed-ups by a factor of 300 and more due to model compilation alone.

  16. How can we decide how long to run an experiment with TestWeaver?

    This depends on the project. Once a first experiment with a system has been conducted, for instance overnight or over the weekend, the coverage of system states and the coverage of the source code can be analyzed in order to estimate and recommend future running times.

  17. What simulation tools can be connected to TestWeaver?

    Direct connection to: C, Python, Dymola, MATLAB/Simulink, Real Time Workshop, Silver, dSpace Real-Time Testing Library. Further connections via Silver: AMESim, SimulationX, SIMPACK. Easy connection to other tools (C-library).

  18. Can TestWeaver be used in projects that use TargetLink?

    Yes. For an example see:

  19. What advantages do we get by running the simulation under Silver if we use MATLAB/Simulink, since TestWeaver can directly connect to MATLAB/Simulink?

    ASAM MCD-1 and MCD-2 automotive interfaces (XCP, ASAP2/A2L); Python scripting; parameter flashing; automatic monitoring of A2L min-/max-boundaries; failure simulation; comparison between differing module and SUT versions; comfortable GUI for control and visualization of simulations; reading and writing of scenarios and measurements in MDF or CSV format; connection to Visual Studio Debugger, and others.

  20. What methods and strategies are used by the TestWeaver scenario generation?

    The scenario generation uses complex heuristics from genetic algorithms, optimization and game theory, as well as experience with model-checking and model-based diagnosis previously developed and applied by the authors to the analysis of technical systems.