Test::FAQ

NAME

Test::FAQ - Frequently Asked Questions about testing with Perl


DESCRIPTION

Frequently Asked Questions about testing in general and specific issues with Perl.

Is there any tutorial on testing?

Test::Tutorial

Are there any modules for testing?

A whole bunch. Start with Test::Simple, then move on to Test::More.

How do I use Test::More without depending on it?

Are there any modules for testing web pages/CGI programs?

Several. They include CGI::Test, HTTP::WebTest, and WWW::Automate (which has been superseded by WWW::Mechanize, which in turn has a testing subclass, Test::WWW::Mechanize).

CGI::Test provides extensions to the usual ok() functions for testing CGI scripts either in a real web environment or by running the CGI from the command line.

HTTP::WebTest is another module that provides web testing capabilities without copying the standard Perl "ok" style. This one works very closely with Apache, doing things like running a local version of Apache on a high port, checking error logs, etc. It is quite a heavyweight testing infrastructure, with lots of features.

WWW::Automate is not a testing module per se, but provides an API which can easily be interspersed with normal Test::More ok() tests. The author (Skud) specifically wrote it to work nicely with the standard Perl QA/testing tools. Test::WWW::Mechanize is WWW::Mechanize with some wrappers for standard testing calls.
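
For instance, a minimal Test::WWW::Mechanize script might look like this (the URL, page title, and content checked here are made up):

    use Test::More tests => 3;
    use Test::WWW::Mechanize;

    my $mech = Test::WWW::Mechanize->new;

    # Each of these is an ordinary Test::Builder-based test.
    $mech->get_ok( "http://localhost/cgi-bin/myapp.cgi", "fetched the front page" );
    $mech->title_is( "My Application", "page has the expected title" );
    $mech->content_contains( "Log in", "login link is present" );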


Are there any modules for testing external programs?

Test::Cmd?

Can you do XUnit/JUnit style testing in Perl?

Yep! There's Test::Class, which merges xUnit style with Perl's standard Test::Builder / Test::Harness based testing framework.

There is also the older Test::Unit (see TestUnit), which no longer appears to be maintained. Test::Unit is a pretty straight Perl port of JUnit and doesn't use Test::Builder (although it can output TAP, so it can be integrated with Test::Harness based code).
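
As a rough sketch of the Test::Class style (MyModule and its methods are invented for illustration):

    package MyModule::Test;
    use base 'Test::Class';
    use Test::More;
    use MyModule;

    # Methods marked with the Test attribute are run as test methods.
    sub creation : Test(2) {
        my $obj = MyModule->new;
        isa_ok( $obj, 'MyModule' );
        ok( $obj->is_empty, "starts out empty" );
    }

    package main;
    MyModule::Test->runtests;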

How do I test my module is backwards/forwards compatible?

First, install a bunch of perls of commonly used versions. At the moment, you could try these:

   5.7.2
   5.6.1
   5.005_03
   5.004_05

If you're feeling brave, you might also want to have these on hand:

   bleadperl
   5.6.0
   5.004_04
   5.004

Going back beyond 5.003 is probably beyond the call of duty.

You can then add something like this to your Makefile.PL. It overrides the MakeMaker test_via_harness() method to run the tests against several different versions of Perl.

   # If PERL_TEST_ALL is set, run "make test" against 
   # other perls as well as the current perl.
   {
       package MY;
       sub test_via_harness {
           my($self, $orig_perl, $tests) = @_;
           # names of your other perl binaries.
           my @other_perls = qw(perl5.004_05 perl5.005_03 perl5.7.2);
           my @perls = ($orig_perl);
           push @perls, @other_perls if $ENV{PERL_TEST_ALL};
           my $out = '';   # start empty to avoid "uninitialized" warnings
           foreach my $perl (@perls) {
               $out .= $self->SUPER::test_via_harness($perl, $tests);
           }
           return $out;
       }
   }

and re-run your Makefile.PL with the PERL_TEST_ALL environment variable set:

   PERL_TEST_ALL=1 perl Makefile.PL

now "make test" will run against each of your other perls.


If I'm testing Foo::Bar, where do I put tests for Foo::Bar::Baz?

How do I know when my tests are good enough?

Answer 1: Use tools for measuring the code coverage of your tests, e.g. how many of your source code lines/subs/expressions/paths are executed (aka covered) by the test suite. The more, the better, of course, although you may not be able to achieve 100%. Whatever your test suite doesn't cover is, basically, untested code, which means it may work in surprising ways (e.g. not do things as intended or documented), have bugs (e.g. return wrong results), or not work at all.

Answer 2: "Test Everything That Could Possibly Break". This means ignore the above answer and use your judgement. Why? Because you can have good line coverage and not actually cover enough cases, or you could cover enough cases but not have 100% line coverage.

Answer 3: Use property-based testing. Test::LectroTest does this for you; you specify the properties your code should satisfy, and Test::LectroTest generates test cases for you, looking for possible failures and reporting any it finds.

How do I measure the coverage of my test suite?

Devel::Cover
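
One way to run it, assuming a MakeMaker-based distribution with Devel::Cover installed, is to load it while the test suite runs and then generate a report with the cover command:

    cover -delete
    HARNESS_PERL_SWITCHES=-MDevel::Cover make test
    cover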

How do I get tests to run in a certain order?

Use prove (part of recent Test::Harness distributions), or use Test::Manifest.
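
prove runs test files in the order you list them on the command line, so one low-tech approach is simply to spell the order out (the file names here are invented):

    prove t/00_setup.t t/10_basic.t t/20_advanced.t t/99_teardown.t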

What should I name my tests?

How do I deal with tests that sometimes pass and sometimes fail?

[TODO: Should we reword that question to something like this?: "I have a test that sometimes passes, and sometimes fails. How do I figure out what's going wrong?"]

I have a test that's causing my tests to die. How do I handle this?

If this is because your platform lacks a particular feature, use a SKIP test. However, if it's because some feature is broken and needs fixing, use a special case of a TODO test that eval()s the dying code.

Normally, TODO tests get executed to see if they accidentally pass. Using eval() gets around this problem.

   TODO: {
       local $TODO = "rrule() on BYMONTH => [-10] causes tests to die";
       eval {
           $b = $a->rrule( BYMONTH => [ -10 ] );
       };
       is( $@, '', "rrule() with a negative BYMONTH runs without dying" );
       is( "$b", '[19970201Z..19970301Z)',
           "rrule() interprets BYMONTH with negative numbers correctly" );
   }


How do I test with a database/network/server that the user may or may not have?

Even if you do have access to such things, you might want to consider using Mock Objects instead of a real database/server/etc. (See: http://c2.com/cgi/wiki?MockObject) If you mock the database, you're testing your code in isolation, not your code *plus* the database it uses. When each test only tests one unit, you do three things:

  * give validity to the term "Unit Test"
  * increase the test accuracy
  * increase the test predictability

Hopefully we'll get more Mock DBI classes on CPAN.
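
One that already exists is DBD::Mock. A rough sketch of using it (the query and canned data are invented):

    use Test::More tests => 1;
    use DBI;

    # dbi:Mock: gives a database handle that records queries and
    # returns whatever results you feed it.
    my $dbh = DBI->connect( 'dbi:Mock:', '', '', { RaiseError => 1 } );
    $dbh->{mock_add_resultset} = [
        [ 'id', 'name'  ],
        [ 1,    'alice' ],
    ];

    my $row = $dbh->selectrow_hashref('SELECT id, name FROM users WHERE id = 1');
    is( $row->{name}, 'alice', "got the canned row back" );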

[REVIEWME] [Also, that *-list needs converting to POD list]

What's a good way to test lists?

[Huh?]
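
If the question is about comparing the contents of lists or nested data structures, Test::More's is_deeply() is the usual tool; a tiny example:

    use Test::More tests => 1;

    my @got      = sort qw(banana apple cherry);
    my @expected = qw(apple banana cherry);

    # Compares the structures element by element and prints a useful
    # diagnostic showing where they first differ.
    is_deeply( \@got, \@expected, "sort puts the fruit in order" );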

Is there such a thing as untestable code?

There are always compile/export checks.

Code must be written with testability in mind: separation of form and functionality.

What do I do when I can't make the code do the same thing twice?

Force it to do the same thing twice.

Even a random number generator can be tested.
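
For example, if the code uses rand(), seeding the generator with srand() makes its output reproducible (roll_dice() here is an imaginary function under test):

    use Test::More tests => 1;

    sub roll_dice { 1 + int rand 6 }

    # With a fixed seed, rand() produces the same sequence every run.
    srand(42);
    my @first_run = map { roll_dice() } 1 .. 5;

    srand(42);
    my @second_run = map { roll_dice() } 1 .. 5;

    is_deeply( \@first_run, \@second_run, "same seed gives the same rolls" );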

How do I test a GUI?

How do I test an image generator?

How do I test that my code handles failures gracefully?

By forcing resource failures; see "How can I simulate failures..." below.

How do I check the right warnings are issued?

The Test::Warn module may be used for this purpose:

   warning_is { $myobj->add(0) } 'illegal value 0 received',
       "complain about zero";

You supply the code, what you expect the warning to be (other routines allow warnings to be matched against regular expressions), and a string to name the test.

How do I test code that prints?
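
One option, assuming you can install Test::Output from CPAN, is to capture and compare standard output (greet() here is a made-up function under test):

    use Test::More tests => 1;
    use Test::Output;

    sub greet { print "Hello, $_[0]!\n" }

    stdout_is { greet("world") } "Hello, world!\n", "greet() prints a greeting";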

I want to test that my code dies when I do X

Wrap it in eval { ... } and then check $@.
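
A small sketch (divide() and its error message are invented):

    use Test::More tests => 2;

    sub divide { $_[1] == 0 ? die "division by zero\n" : $_[0] / $_[1] }

    eval { divide( 1, 0 ) };
    ok( $@, "divide() dies on division by zero" );
    like( $@, qr/division by zero/, "... with the expected message" );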

I want to print out more diagnostic info on failure.

ok(...) || diag("...");

How can I simulate failures to make sure that my code does the Right Thing in the face of them?

You can use Mock Objects (http://c2.com/cgi/wiki?MockObject) and fake the failures.

If you're a glutton for complexity, you can try to actually create the failure conditions (such as out-of-memory or no network), but this is kind of sketchy.
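
As a sketch of the mock approach, you can temporarily replace a low-level routine with one that fails; everything here (save_report(), _write_file()) is invented for illustration:

    use Test::More tests => 2;

    # Imaginary code under test: save_report() delegates the actual
    # I/O to _write_file(), so that's the seam we can fake.
    sub _write_file { return 1 }
    sub save_report { _write_file(@_) ? 1 : 0 }

    is( save_report("data"), 1, "save_report() succeeds normally" );

    {
        # Replace the low-level routine for the duration of this block.
        local *main::_write_file = sub { return 0 };
        is( save_report("data"), 0, "save_report() notices the failure" );
    }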

[REVIEWME]

Why use an ok() function?

   On Tue, Aug 28, 2001 at 02:12:46PM +0100, Robin Houston wrote:
   > Michael Schwern wrote:
   > > Ah HA!  I've been wondering why nobody ever thinks to write a simple
   > > ok() function for their tests!  perlhack has bad testing advice.
   > 
   > Could you explain the advantage of having a "simple ok() function"?

Because writing:

   print "not " unless some thing worked;
   print "ok $test\n";  $test++;

gets rapidly annoying. This is why we made up subroutines in the first place. It also looks like hell and obscures the real purpose.

Besides, that will cause problems on VMS.

   > As somebody who has spent many painful hours debugging test failures,
   > I'm intimately familiar with the _disadvantages_. When you run the
   > test, you know that "test 113 failed". That's all you know, in general.

The second advantage is that you can easily upgrade the ok() function to fix this, either by slapping this line in:

       printf "# Failed test at line %d\n", (caller)[2];

or simply junking the whole thing and switching to Test::Simple or Test::More, which does all sorts of nice diagnostics-on-failure for you. Its ok() function is backwards compatible with the above.

There are some issues with using Test::Simple to test really basic Perl functionality; you have to choose on a per-test basis. Since Test::Simple doesn't use pack(), it's safe for t/op/pack.t to use Test::Simple. I just didn't want to make the perlhack patching example too complicated.


Dummy Mode

   > One compromise would be to use a test-generating script, which allows
   > the tests to be structured simply and _generates_ the actual test
   > code. One could then grep the generated test script to locate the
   > failing code.

This is a very interesting, and very common, response to the problem. I'm going to make some observations about reactions to testing; they're not specific to you.

If you've ever read the Bastard Operator From Hell series, you'll recall the Dummy Mode.

   The words "power surging" and "drivers" have got her.  People hear
   words like that and go into Dummy Mode and do ANYTHING you say.  I
   could tell her to run naked across campus with a powercord rammed
   up her backside and she'd probably do it...  Hmmm...

There seems to be a Dummy Mode WRT testing. An otherwise competent person goes to write a test and they suddenly forget all basic programming practice.


The reasons for using an ok() function above are the same reasons to use functions in general; we should all know them. We'd laugh our heads off at code that repeated itself as much as your average test does. These are newbie mistakes.

And the normal 'can do' flair seems to disappear. I know Robin. I *know* that in any other situation he would have come up with the caller() trick in about 15 seconds flat. Instead, weird, elaborate, inelegant hacks are thought up to solve the simplest problems.


I guess there are certain programming idioms that are foreign enough to throw your brain into reverse if you're not ready for them. Like trying to think in Lisp, for example. Or being presented with OO for the first time. I guess writing tests is one of those.


How do I update the plan as I go?

Sometimes you just don't know in advance how many tests you're going to run. One solution is to just not bother and say use Test::More 'no_plan'.

But some people want a calculated plan. One thing you can do is to take advantage of BEGIN blocks to update the plan in small increments.

  use Test::More;
  
  plan tests => my $tests;
  
  {
      require_ok( 'MyModule' );
      my $obj = MyModule->new();
      isa_ok( $obj, 'MyModule' );
      BEGIN { $tests += 2 }
  }
  
  {
      my @cities;
      BEGIN { @cities = ( "Brasilia", "Rio de Janeiro", "Salvador" ) }
      for( @cities ) {
          ok( is_hot( $_ ), "$_ is hot" );
          ok( me_fits( $_ ), "$_ is nice to me" );
      }
      BEGIN { $tests += 2 * @cities }
  }