JMRI Code: Unit testing with JUnit
- Introduction
- Running the Tests
- Continuous Integration Test Execution
- Error Reporting
- Code Coverage Reports
- Writing Tests
- Key Test Metaphors
- Testing Swing Code
- Testing Script Code
- Issues
- Migrating to JUnit4
For more information on JUnit, see the JUnit home page. We now use JUnit version 4 (JUnit4), although a lot of JMRI code originally had tests written in the previous version, JUnit3. For instructions on how to convert existing JUnit3 tests to JUnit4, see the "Migrating to JUnit4" section below.
A very interesting example of test-based development is available from Robert Martin's book.
All of the JMRI classes have JUnit tests available; we have decided that our Continuous Integration system will insist on that. It's good to add JUnit tests as you make changes to classes (they test your new functionality to make sure that it is working, and keep working as other people change it later), when you have to figure out what somebody's code does (the test documents exactly what should happen!), and when you track down a bug (make sure it doesn't come back).
Running the Tests
To run the existing tests, say

ant alltest

This will compile the test code, which lives in the "test" subdirectory of the "java" directory in our usual code distributions, and then run the tests under a GUI. (To make sure you've recompiled everything, you may want to do

ant clean

first.) If you know the name of your test class, or the test class for your package, you can run that directly with the "runtest" script:

ant tests
./runtest.csh jmri.jmrit.powerpanel.PowerPanelTest

The first line compiles all the test code, and the second runs a specific test or test suite.
(Hint: How to set this up using IntelliJ)
Optional Checks
There are a number of run-time optional checks that can be turned on by setting environment variables. We periodically run them to check on how the overall test system is working, but they're too time intensive to leave on all the time.
- jmri.skipschematests
- If true, JUnit tests will skip checking the schema of all the test XML files.
- jmri.skipjythontests
- If true, JUnit tests will skip running the jython/tests scripts.
- jmri.log4jconfigfilename
- Override the default "tests.lcf" logging control file name.
- jmri.demo
- When set true, leave certain windows open to demonstrate their capability
- jmri.migrationtests
- When set true, run some extra tests; usually used during code migration, where not everything is right yet but you want to be able to run individual tests.
- jmri.util.JUnitUtil.printSetUpTearDownNames
- If true, JUnit tests will print out each JUnitUtil.setUp() and JUnitUtil.tearDown() call. This can be useful if, for example, the CI tests are hanging and you can't figure out which test class is the problem.
- jmri.util.JUnitUtil.checkSetUpTearDownSequence
- If true, check whether JUnitUtil.setUp() and JUnitUtil.tearDown() follow each other in the proper sequence, and print a message if not. (This slows execution a bit due to the time needed to keep history for the message.)
- jmri.util.JUnitUtil.checkSequenceDumpsStack
- If true, makes jmri.util.JUnitUtil.checkSetUpTearDownSequence more verbose by also including the current stack trace along with the traces of the most recent setUp and tearDown calls.
- jmri.util.JUnitUtil.checkRemnantThreads
- If true, checks for any threads that have not yet been terminated during the test tearDown processing. If found, the context is logged as a warning.
- jmri.util.JUnitUtil.checkTestDuration
- If true, issues a warning if a test takes too long. The default limit is 5000 msec, but you can change it by defining the jmri.util.JUnitUtil.checkTestDurationMax environment variable.
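These flags reach the tests as Java system properties (set via -D options or the JMRI_OPTIONS variable shown later on this page). As a minimal sketch of how a test might honor one of them, assuming the flag is read with Boolean.getBoolean(); the test class and its check are illustrative:

import org.junit.Assume;
import org.junit.Test;

public class SchemaCheckExampleTest {

    @Test
    public void testSchemaValidation() {
        // skip this test when -Djmri.skipschematests=true was supplied
        Assume.assumeFalse(Boolean.getBoolean("jmri.skipschematests"));

        // ... the actual schema validation would go here ...
    }
}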
A Note on Internationalization (I18N)
Tests check the correctness of text in GUI elements, warning messages, and other places. Many of these are internationalized, varying depending on the computer's Locale.

To avoid false failures, the Ant and Maven build control files set the locale to en_US before running tests. This covers continuous integration runs, and running locally using e.g. "ant headlesstest" or "ant alltest".
The ./runtest.csh mechanism does not automatically set the locale. To do that, the easiest approach is to set the JMRI_OPTIONS environment variable via one of:
setenv JMRI_OPTIONS "-Duser.language=en -Duser.region=US"

export JMRI_OPTIONS="-Duser.language=en -Duser.region=US"

depending on what kind of OS and shell you're using. For more on how this works, see the page on startup scripts.
Continuous Integration Test Execution
The continuous integration environment senses changes in the code repository, rebuilds the code, and performs a variety of checks. If no fatal issues are found, the continuous integration process executes the "alltest" ant target to run the tests against the successful build of the code base.

Error Reporting
If a test fails during the continuous integration execution of "alltest", an e-mail is sent to the jmri-builds e-mail list as well as to the developers who have checked in code which was included in the build. You may subscribe to the jmri-builds e-mail list on the web site to get the bad news as quickly as possible, or view the archives of the e-mail list to see past logs. Or you can monitor the "dashboard" at the continuous integration web site.
(When the build succeeds, nothing is mailed, to cut down on traffic)
Code Coverage Reports
As part of running the tests, Jenkins accumulates information on how much of the code was executed, called the "code coverage". We use the JaCoCo tool to do the accounting. It provides detailed reports at multiple levels:
- A plot of coverage as a whole. Click on the graph to see a
- summary by Java package. Click on a package to see a
- summary by file (e.g. class). Click on a class to see a
- summary by method. Click on a method to see
- how each part of the code was covered (may require scrolling down).
Writing Tests
By convention, we have a "test" class shadowing (almost) every real class. The "test" directory contains a tree of package directories parallel to the "src" tree. Each test class has the same name as the class to be tested, except with "Test" appended, and will appear in the "test" source tree. For example, the "jmri.Version" class's source code is in "src/jmri/Version.java", and its test class is "jmri.VersionTest", found in "test/jmri/VersionTest.java".

There are additional classes which are used to group the test classes for a particular package into JUnit test suites.
Writing Additional Tests for an Existing Class
To write additional tests for a class with existing tests, first locate the test class. (If one doesn't exist, see the section below about writing tests for a new class.)

If the test suite has not been converted to JUnit4 yet, one or more test methods can be added to the class using the JUnit conventions. Basically, each method needs a name that starts with "test", e.g. "testFirst", and has to have a "public void" signature. JUnit will handle everything after that.
If the test suite has been converted to JUnit4, the JUnit4 conventions require that the test be preceded by the "@Test" annotation:

@Test
public void testSomething() {
    ...
}
See the section on JUnit4 Migration for more information on JUnit4.
In general, test methods should be small, testing just one piece of the class's operation. That's why they're called "unit" tests.
Writing Tests for a New Class
To write a test for a new class, you need to create a file that shadows your new class. For our example, consider creating a test for a new class that appears in "src/jmri/jmrix/foo/Foo.java". The new test would be created in a file named "test/jmri/jmrix/foo/FooTest.java".

Assuming that the Foo class has a default constructor Foo(), the following would be minimal contents for the test/jmri/jmrix/foo/FooTest.java file:
package jmri.jmrix.foo;

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;

/*
 * Tests for the Foo class
 * @author Your Name Copyright (C) 2016
 */
public class FooTest {

    @Test
    public void testCtor() {
        Assert.assertNotNull("Foo Constructor Return", new Foo());
    }

    @Before
    public void setUp() {
        jmri.util.JUnitUtil.setUp();
    }

    @After
    public void tearDown() {
        jmri.util.JUnitUtil.tearDown();
    }
}
Note that you should be invoking jmri.util.JUnitUtil.setUp() and jmri.util.JUnitUtil.tearDown() as above. Some older tests use calls to apps.tests.Log4JFixture.setUp() and apps.tests.Log4JFixture.tearDown(); these should be replaced by the corresponding calls to the JUnitUtil methods.
In addition, the tearDown() method should set all member variable references to null. This is because JUnit4 keeps the test class objects around until all the tests are complete, so any memory you allocate in one class can't be garbage collected until all the tests are done. Setting the references to null allows the objects to be collected. (If the allocated objects have a dispose() method or similar, you should call that too). You should not reset the InstanceManager or other managers in the tearDown method; any necessary manager resets will be done automatically, and duplicating those wastes test time.
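As a minimal sketch of those tearDown() conventions, using a hypothetical MyClass with a dispose() method:

import org.junit.After;
import org.junit.Before;

public class MyClassTest {

    private MyClass obj;  // member references keep objects alive until the whole run ends

    @Before
    public void setUp() {
        jmri.util.JUnitUtil.setUp();
        obj = new MyClass();
    }

    @After
    public void tearDown() {
        if (obj != null) obj.dispose();  // release any resources the object holds
        obj = null;                      // allow the object to be garbage collected
        jmri.util.JUnitUtil.tearDown();
    }
}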
You may also choose to copy an existing test file and make modifications to suit the needs of your new class. Please make sure you're copying a file in the new JUnit4 format, with the @Test statements, to keep us from having to update your new file later.
After the test class is created, it needs to be added to the package test for the package. In the case of our example, that should be the file test/jmri/jmrix/foo/PackageTest.java.

If the PackageTest has not been converted to JUnit4 format yet, the following line needs to be added to the list of test classes in the "suite" method:
suite.addTest(new junit.framework.JUnit4TestAdapter(FooTest.class));
If the PackageTest has been converted to JUnit4 format, then "FooTest.class" needs to be added to the list of test classes in the @Suite.SuiteClasses annotation that appears before the beginning of the PackageTest class.
Writing Tests for a New Package
To write tests for a new package, in addition to writing tests for each class, you need to create a "PackageTest.java" file that calls your new tests. For our example, we will create the file "test/jmri/jmrix/foo/PackageTest.java" and call the tests in "test/jmri/jmrix/foo/FooTest.java".

Minimal contents for the test/jmri/jmrix/foo/PackageTest.java file:
package jmri.jmrix.foo;

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

/**
 * Tests for the jmri.jmrix.foo package
 *
 * @author Your Name Copyright (C) 2016
 */
@RunWith(Suite.class)
@Suite.SuiteClasses({
    FooTest.class
})
public class PackageTest {
}
You may also choose to copy an existing test file and make modifications to suit the needs of your new class.
After the PackageTest class is created, it needs to be added to the PackageTest for the enclosing package. In the case of our example, the enclosing package test would be the file test/jmri/jmrix/PackageTest.java.

If the enclosing PackageTest has not been converted to JUnit4 format yet, the following line needs to be added to the list of test classes in the "suite" method:

suite.addTest(new junit.framework.JUnit4TestAdapter(jmri.jmrix.foo.PackageTest.class));

If the enclosing PackageTest has been converted to JUnit4 format, then "jmri.jmrix.foo.PackageTest.class" needs to be added to the list of test classes in the @Suite.SuiteClasses annotation that appears before the beginning of the enclosing PackageTest class.
Key Test Metaphors
Handling Log4J Output From Tests
JMRI uses Log4j to handle logging of various conditions, including error messages and debugging information. Tests are intended to run without error or warning output, so that it's immediately apparent from an empty standard log that they ran cleanly.

Log4j usage in the test classes themselves has two aspects:
- It's perfectly OK to use log.debug(...) statements to make it easy to debug problems in test statements. log.info(...) can be used sparingly to indicate normal progress, because it's normally turned off when running the tests.
- In general, log.warn or log.error should only be used when the test then goes on to trigger a JUnit assertion or exception, because the fact that an error is being logged does not show up directly in the JUnit summary of results.
On the other hand, you might want to deliberately provoke errors in the code being tested to make sure that the conditions are being handled properly. This will often produce log.error(...) or log.warn(...) messages, which must be intercepted and checked.
To allow this, JMRI runs its tests with a special Log4j appender, which stores messages so that the JUnit tests can look at them before they are forwarded to the log. There are two aspects to making this work:
- All the test classes should include common code in their setUp() and tearDown() code to ensure that Log4j is properly initiated, and that the custom appender is told when a test is beginning and ending.

// The minimal setup for log4J
protected void setUp() throws Exception {
    super.setUp();  // Note: skip this line when using JUnit4
    jmri.util.JUnitUtil.setUp();
}

protected void tearDown() throws Exception {
    jmri.util.JUnitUtil.tearDown();
    super.tearDown();  // Note: skip this line when using JUnit4
}
- When a test deliberately invokes a message, it should then use JUnitAppender class methods to check that the message was created. For example, if the class under test is expected to do

log.warn("Provoked message");

the invoking test case should follow the under-test calls that provoke that with the line:

jmri.util.JUnitAppender.assertWarnMessage("Provoked message");
It will be a JUnit error if a log.warn(...) or log.error(...) message is produced that isn't matched to a JUnitAppender.assertWarnMessage(...) call.
- The Log4JUtil.warnOnce(..) method requires some special handling in tests. We want each test to be independent, so we reset the "want only once" logic early in the JUnitUtil.setUp() that's routinely invoked @Before the tests. This means that the first invocation, and only the first invocation, for each message will be logged. (See the sketch after this list.)
- We want to make it easy to add a Log4JUtil.deprecationWarning call when a method is deprecated. This will log a message the first time it's invoked. We want to warn that deprecated code is being invoked during normal operation, so this normally becomes a Log4JUtil.warnOnce(..) call. When you see those warning messages, you should remove them by completing the migration away from the deprecated method. The one exception is during unit and CI testing of the actual deprecated method. We want to keep those tests around until the deprecated method is finally removed. That ensures it keeps working until it's deliberately removed, and not inadvertently broken in the meantime. In this case, you should turn off the Log4JUtil.deprecationWarning in just that test method using Log4JUtil.setDeprecatedLogging(false) before invoking the deprecated method. (You can also do a JUnitAppender.assertWarn for all the messages emitted, but it's easier to just turn them off.)
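As a minimal sketch of testing that warnOnce behavior, assuming Log4JUtil lives in jmri.util like the other helpers on this page and takes a logger plus a message:

import org.junit.Test;

public class WarnOnceExampleTest {

    @Test
    public void testWarnOnce() {
        // the first call logs, and the test must claim the message
        jmri.util.Log4JUtil.warnOnce(log, "Only once!");
        jmri.util.JUnitAppender.assertWarnMessage("Only once!");

        // a second call within the same test is silent, so there's nothing to assert
        jmri.util.Log4JUtil.warnOnce(log, "Only once!");
    }

    private final static org.slf4j.Logger log =
            org.slf4j.LoggerFactory.getLogger(WarnOnceExampleTest.class);
}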
Note: Our CI test executables are configured to fail if any FATAL or ERROR messages are emitted instead of being handled. This means that although you can run your tests successfully on your own computer while they're emitting ERROR messages, you won't be able to merge your code into the common repository until those are handled. It's currently OK to emit WARN-level messages during CI testing, but that will also be restricted (cause the test to fail) during the 4.15.* development series, so please suppress or handle those messages too.
Resetting the InstanceManager
If you are testing code that is going to reference the InstanceManager, you should clear and reset it to ensure you get reproducible results. Depending on what managers your code needs, your setUp() implementation could start with:
super.setUp();  // Note: skip this line when using JUnit4
jmri.util.JUnitUtil.setUp();
jmri.util.JUnitUtil.resetInstanceManager();
jmri.util.JUnitUtil.resetProfileManager();
jmri.util.JUnitUtil.initConfigureManager();
jmri.util.JUnitUtil.initShutDownManager();
jmri.util.JUnitUtil.initDebugCommandStation();
jmri.util.JUnitUtil.initInternalTurnoutManager();
jmri.util.JUnitUtil.initInternalLightManager();
jmri.util.JUnitUtil.initInternalSensorManager();
jmri.util.JUnitUtil.initReporterManager();

(You can omit the initialization of managers not needed for your tests.) See the jmri.util.JUnitUtil class for the full list of available ones, and please add more if you need ones that are not in JUnitUtil yet.
Your tearDown() should end with:

jmri.util.JUnitUtil.tearDown();
super.tearDown();  // Note: skip this line when using JUnit4
Working with Listeners
JMRI is a multi-threaded application. Listeners for JMRI objects are notified on various threads, and sometimes you have to wait for that to take place.

If you want to wait for some specific condition to be true, e.g. receiving a reply object, you can use a waitFor method call which looks like:

JUnitUtil.waitFor(()->{return reply != null;}, "reply didn't arrive");

The first argument is a lambda closure, a small piece of code that'll be evaluated repeatedly until true. The String second argument is the text of the assertion (error message) you'll get if the condition doesn't come true in a reasonable length of time.
Waiting for a specific result is fastest and most reliable. If you can't do that for some reason, you can do a short time-based wait:
JUnitUtil.releaseThread(this);

This uses a nominal delay. But you might want to consider the structure of either your code (that you're testing) or the test itself: if you can't tell whether it succeeded, what's the purpose of the operation?
Note that this should not be used to synchronize with Swing threads. See the Testing Swing Code section for that.
In general, you should not have calls to sleep(), wait() or yield() in your code. Use the JUnitUtil and JFCUtil support for those instead.
Working with Threads
(See a following section for how to work with Swing (GUI) objects and the Swing/AWT thread.)

Some tests will need to start threads, for example to test signal controls or aspects of layout I/O.
General principles your tests must obey for reliable operation:
- At the end of each test, you need to stop() any threads you started. Doing this in tearDown() can be most reliable, because tearDown runs even if your test method exits due to an error. If you're doing multiple tests with threads, you should wait for each thread to actually stop before moving on to the next operation. You can do that with a JUnitUtil.waitFor(..) call that waits on some flag in the thread.
- If your thread does any operations at startup that need to happen before you test its operation, you also have to wait for those to complete.
For example, if creating a thread based on AbstractAutomaton, you can check the start with:

AbstractAutomaton p = new MyThreadClass();
p.start();
JUnitUtil.waitFor(()->{return p.isRunning();}, "logic running");

and ensure termination with:

p.stop();
JUnitUtil.waitFor(()->{return !p.isRunning();}, "logic stopped");
Please make sure your unit tests clean up after themselves! They should not leave any threads running. Any threads they start should have either terminated normally by the end of the test (don't let them just time out and crash later during some other test!) or you should add code to terminate them.
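As one concrete pattern, a plain Java thread started by a test can be shut down in tearDown(). This is a minimal sketch; the worker field and the interrupt-based stop are illustrative, and your thread must actually respond to the interrupt:

import org.junit.After;
import org.junit.Before;

public class ThreadCleanupExampleTest {

    private Thread worker;  // started by individual test methods

    @Before
    public void setUp() {
        jmri.util.JUnitUtil.setUp();
    }

    @After
    public void tearDown() {
        if (worker != null) {
            worker.interrupt();  // ask the thread to stop
            jmri.util.JUnitUtil.waitFor(() -> !worker.isAlive(), "worker thread stopped");
            worker = null;       // allow garbage collection
        }
        jmri.util.JUnitUtil.tearDown();
    }
}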
You can check whether you've left any threads running by setting the jmri.util.JUnitUtil.checkRemnantThreads environment variable to true, with e.g.

setenv JMRI_OPTIONS -Djmri.util.JUnitUtil.checkRemnantThreads=true

or the equivalent for your computer type. This tells the jmri.util.JUnitUtil.tearDown() method to check for any (new) threads that are still running at the end of each test. This check is a bit time-intensive, so we don't leave it on all the time.
Testing I/O
Some test environments don't automatically flush I/O operations such as streams during testing. If you're testing something that does I/O, for example a TrafficController, you'll need to add "flush()" statements on all your output streams. (Having to wait a long time to make a test reliable is a clue that this is happening somewhere in your code.)
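A minimal sketch of the idea, using a buffered piped stream to stand in for a connection under test; the stream setup is illustrative:

import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

import org.junit.Assert;
import org.junit.Test;

public class FlushExampleTest {

    @Test
    public void testOutputArrives() throws IOException {
        PipedInputStream sink = new PipedInputStream();
        DataOutputStream ostream = new DataOutputStream(
                new BufferedOutputStream(new PipedOutputStream(sink)));

        ostream.writeByte(0x81);  // stand-in for "code under test sends a message"
        ostream.flush();          // without this, the byte sits in the buffer

        Assert.assertEquals("message byte arrived", 1, sink.available());
    }
}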
Temporary File Creation in Tests
Testcases which create temporary files must be carefully written so that there will not be any problems with file path, filesystem security, pre-existence of the file, etc. These tests must also be written in a way that will operate successfully in the continuous integration build environment. And the temporary files should not become part of the JMRI code repository. This section discusses ways to avoid these types of problems.

If you need a temporary file or directory, and your test uses JUnit4, you can use a Rule to create a file or directory before each test runs.
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;
...
@Rule
public TemporaryFolder folder = new TemporaryFolder();
You then reference "folder" in your test code:
// create a temporary file
File randomNameFile = folder.newFile();
// create a temporary directory
File randomNameDir = folder.newFolder();

JUnit4 will make sure the file or folder is removed afterwards regardless of whether the test succeeds or fails. For more information on this, see the Javadoc for TemporaryFolder.
If you're not using JUnit4, consider converting the test class (see below) and use the technique above. But if you can't do that, and have to stick with JUnit3:
- Place the temporary file(s) in the "temp" directory, which is a subdirectory of the JMRI run-time directory. This directory is used by some testcases and is already configured as excluded from the JMRI code repository. It may be convenient to create a subdirectory there for files created by a particular test. Be sure that the directory exists before creating files in it, and create the directory if necessary. An example is shown here:

String tempDirectoryName = "temp";
if ( ! (new File(tempDirectoryName).isDirectory())) {
    // create the temp directory if it does not exist
    FileUtil.createDirectory(tempDirectoryName);
}
- Allow the JRE to define the directory hierarchy separator character automatically:

String filename = tempDirectoryName + File.separator + "testcaseFile.txt";
- Code the testcase in a way that will not break if the file already exists before the testcase is run. One way to do this is to code the testcase to check for existence of the testcase temporary file(s), and delete if necessary, before writing to the file(s). The following example will delete the previous file if it exists:

String filename = tempDirectoryName + File.separator + "testcaseFile.txt";
File file = new File(filename);
if (file.exists()) {
    try {
        file.delete();
    } catch (java.lang.Exception e) {
        Assert.fail("Exception while trying to delete the existing file " + filename
                + ".\n Exception reported: " + e.toString());
        // perform some appropriate action in this case
    }
}
- Make sure to "close" the temporary file after it has been written.
- Delete the temporary file(s) as part of the test once they are no longer needed by the testcase(s). To allow debugging of testcases, it may be convenient to display the path and filename when logging debug messages (without deleting the temporary file), and to perform the delete only when debug logging is not enabled, such as:

if (log.isDebugEnabled()) {
    log.debug("Path to written hex file is: " + filename);
} else {
    file.delete();
}
- It is unclear whether native Java library routines which create temporary files in an operating-system-specific way, such as:

java.io.File.createTempFile("testcasefile", "txt")

will work reliably within the continuous integration build environment. An issue was identified with one test case which executed properly on a Windows-based PC for both the "alltest" and "headlesstest" ant targets, regardless of how many times it was run. In the continuous integration environment, the test ran properly the first time after it was checked in, but failed for every subsequent continuous integration execution of "headlesstest". Once the test was modified based on the temporary file recommendations shown here, the test became stable over multiple continuous integration executions of "headlesstest".
JUnit Rules
JUnit4 added the concept of "Rules" that can modify the execution of tests. They work at the class or test level to modify the context JUnit4 uses for running the class's tests. Generally, you start by creating one:

@Rule
public final TemporaryFolder tempFolder = new TemporaryFolder();
Some standard JUnit4 rules you might find useful:
- TemporaryFolder - work with temporary files and folders, ensuring they're cleaned up at the end. See the above section for more on using that.
- ExpectedException - handles test methods that are expected to throw exceptions
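For example, a minimal sketch of the ExpectedException rule; the Foo class and the setCount() call being tested are hypothetical:

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;

public class ExpectedExceptionExampleTest {

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void testBadArgumentRejected() {
        thrown.expect(IllegalArgumentException.class);
        thrown.expectMessage("negative");  // optional substring check on the message

        new Foo().setCount(-1);  // hypothetical call that should throw
    }
}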
Timeout - limit duration
The Timeout rule imposes a timeout on all test methods in a class.

@Rule
public org.junit.rules.Timeout globalTimeout = org.junit.rules.Timeout.seconds(10);
Note that you can also add a timeout option to an individual test via an argument to the @Test annotation. For example,
@Test(timeout=2000)

will put a 2 second (2,000 milliseconds) timeout on that test. If you use both the rule and the option, the option will control the behavior. For a bit more info, see the JUnit4 Timeouts for Tests page.
RetryRule - run test several times
JMRI has jmri.util.junit.rules.RetryRule, which can rerun a test multiple times until it reaches a limit or finally passes. Although it's better to write reliable tests, this can be a way to make the CI system more reliable while you try to find out why a test isn't reliable.

For a working example, see java/test/jmri/jmrit/logix/LearnWarrantTest.java. Briefly, you just add the lines

import jmri.util.junit.rules.RetryRule;

@Rule
public RetryRule retryRule = new RetryRule(3); // allow 3 retries

to your test class. This will modify how JUnit4 handles errors during all of the tests in that class.
Tools for Controlling JUnit4 tests
- Categories - useful ones in our case could be headless/not headless, hardware specific (LocoBuffer attached, NCE PowerPro attached, etc.); see the sketch at the end of this list
- Assumptions - to conditionally ignore a test. For example, a test that would fail in a headless environment can be ignored in headless mode if the first line of the test method is:

Assume.assumeFalse(GraphicsEnvironment.isHeadless());
- Ignore - mark a test to be unconditionally ignored. For example, a test that fails because it isn't fully implemented yet can be marked to be ignored:

@org.junit.Ignore("not done yet")
@jmri.util.junit.annotations.ToDo("Need to create some mock Framistat Objects")
@Test
public void notDoneYet() {
    // some code that compiles but doesn't run
}
You should provide the reason for ignoring this test in the @Ignore argument. @Ignore without an argument will compile, but Jenkins will mark it as an error.

Also note the @jmri.util.junit.annotations.ToDo annotation, which indicates that this needs work and provides some more information about what needs to be done.

In general, we'd rather have working tests than ignored ones, so we track the number that have been ignored in a Jenkins job; see the image to the right.
- On the other hand, sometimes a test super class (i.e. some abstract base) requires implementation of a test method that's not applicable to a particular concrete test class. It might, for example, test a feature or message that's not applicable for a specific system's hardware. In that case, you provide a null body to do nothing, and mark the test as not applicable with the @jmri.util.junit.annotations.NotApplicable annotation like this:

@Override
@jmri.util.junit.annotations.NotApplicable("System X doesn't use Framistat Objects")
@Test
public void testFramistatUsage() {}
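As promised in the Categories item above, a minimal sketch of JUnit4 categories; the marker interface here is illustrative, not an existing JMRI one:

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class CategoryExampleTest {

    // a marker interface used as the category; hypothetical
    public interface RequiresHardware {}

    @Category(RequiresHardware.class)
    @Test
    public void testWithAttachedHardware() {
        // a test that only makes sense with real hardware connected
    }
}

A suite run with @RunWith(Categories.class) can then include or exclude tests carrying this annotation.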
Testing Swing Code
AWT and Swing code runs on a separate thread from JUnit tests. Once a Swing or AWT object has been displayed (via show() or setVisible(true)), it cannot be reliably accessed from the JUnit thread. Even using the listener delay technique described above isn't reliable.
For the simplest possible test, displaying a window for manual interaction, it's OK to create and invoke a Swing object from a JUnit test. Just don't try to interact with it once it's been displayed!
Because we run tests in "headless" mode during the continuous integration builds, it's important that tests needing access to the screen start with:
Assume.assumeFalse(GraphicsEnvironment.isHeadless());
This will run the test only when a display is available.
GUI tests should close windows when they're done, and in general clean up after themselves. If you want to keep windows around so you can manipulate them, e.g. for manual testing or debugging, you can use the jmri.demo system parameter to control that:
if (System.getProperty("jmri.demo", "false").equals("false")) {
    myFrame.setVisible(false);
    myFrame.dispose();
}
For many tests, you'll both make testing reliable and improve the structure of your code by separating the GUI (Swing) code from the JMRI logic and communications. This lets you check the logic code separately, invoking those methods and checking the state they update.
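As a minimal sketch of that separation, with a hypothetical TurnoutLogic class holding the non-GUI state and rules:

import org.junit.Assert;
import org.junit.Test;

public class TurnoutLogicTest {

    @Test
    public void testStateUpdatesWithoutAnyGui() {
        TurnoutLogic logic = new TurnoutLogic();  // no Swing components involved

        logic.requestThrown();

        // check the state the method updates; no window needed
        Assert.assertTrue("turnout requested thrown", logic.isThrownRequested());
    }
}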
For more complicated GUI testing, two tools are generally used. The older one, JFCUnit is no longer maintained, so we recommend that new tests be written with Jemmy. Note: Only use one of these in your test class! Don't try to mix them.
Using Jemmy
Recently, e.g. since 2016, some tests have been developed using the Jemmy tool. See e.g. the samples on the NetBeans pages.

For more information on Jemmy, please see the tutorial Wiki page and the Jemmy Javadoc.
Locating GUI Items using Jemmy
Jemmy must be able to find the objects on the screen. Jemmy Operators are generally used to both locate and manipulate items on the screen.
Here are a few tips for locating items with Jemmy:
- Some of the Jemmy Operator Constructors allow leaving off an identifier. If there is only one object of a given type on the screen at any time, it is acceptable to use this version of the constructor, but the test may be fragile.
- It is easiest to find objects if they have a unique identifier. In the case where no unique identifier exists, Jemmy provides a version of most searches that allows you to specify an ordinal index. Using these may result in tests that break when GUI elements are added or removed from the frame.
- If an item contains its own text (Buttons, for example), it is recommended you use the text to search for a component.
- If an item does not contain its own description, but the GUI contains a JLabel describing that component, be certain the JLabel's LabelFor property is set. A Jemmy JLabelOperator can then be used to find the label and retrieve the object, as shown in the sketch below.
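A minimal sketch of that last technique; the window title and label text are illustrative:

import javax.swing.JTextField;

import org.junit.Test;
import org.netbeans.jemmy.operators.JFrameOperator;
import org.netbeans.jemmy.operators.JLabelOperator;

public class LabelForExampleTest {

    @Test
    public void testFindFieldViaLabel() {
        // find the frame, then the label, then the component the label is set for
        JFrameOperator frameOp = new JFrameOperator("My Window Title");
        JLabelOperator labelOp = new JLabelOperator(frameOp, "Speed:");
        JTextField speedField = (JTextField) labelOp.getLabelFor();

        // ... interact with speedField, e.g. via a JTextFieldOperator ...
    }
}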
Example of Closing a Dialog Box
If you want to test a method that pops up a JDialog box with a title (in the top bar of the dialog window) of "Foo Warning" and an "OK" button to close it, put this in your JUnit test:

new Thread(() -> {
    // constructor for d will wait until the dialog is visible
    JDialogOperator d = new JDialogOperator("Foo Warning");
    JButtonOperator bo = new JButtonOperator(d, "OK");
    jmri.util.ThreadingUtil.runOnGUI(() -> {bo.push();});
}).start();

showTheDialog();

The thread is started before your code runs and starts Jemmy looking for the dialog box. Once it appears, Jemmy will push the "OK" button. You can't put the Jemmy calls in advance: they'll wait forever for the dialog to appear, and never proceed to your code to show the dialog. And you can't put them after the call, because your call won't return until somebody presses "OK".
Jemmy timeouts
Jemmy has an extensive system of built-in timeouts. It'll wait to perform an operation until e.g. the requested button or menu is on the screen, but will eventually time out if it can't find it. In that case it throws an org.netbeans.jemmy.TimeoutExpiredException with some included diagnostic text.

Please don't catch this exception: the problem here is not the exception, it's that Jemmy wasn't able to do what you asked. That's the thing that needs to be debugged.
If you want to change one of Jemmy's timeouts, do
myMenuOperator.getTimeouts().setTimeout("JMenuOperator.WaitBeforePopupTimeout", 30L);

where "myMenuOperator" is a reference to a Jemmy operator object, 30L is the new value (a long) in milliseconds, and the particular timeout name comes from the Javadoc. Sometimes, setting the "WaitBeforePopupTimeout" from its default of zero to a few milliseconds can improve the reliability of tests.

Also, setting "JMenuOperator.WaitPopupTimeout" and "ComponentOperator.WaitComponentTimeout" to a lower value from their defaults of 60000L (a minute) can speed work when you're trying to debug the cause of a timeout.
Jemmy hints and issues
Actions like operator.click() work like clicking on a real screen: if the click target isn't the foremost component when the click happens, the click will go to some other component. This is particularly annoying when you're running a long series of tests on your own computer while doing other work, as pushing test windows to the background will likely cause (false-negative) failures.
If your test thread invokes a method that causes a lot of
Swing/AWT activity, that might not all be complete when the method returns.
For example, if you create a JFrame and either explicitly or implicitly call
pack()
, that starts the Swing thread working on that frame;
that can proceed in parallel to the return to the test thread.
If the test thread will continue to do more Swing operations, like create and
pack another frame, you'll have problems unless you either:
- Do all those operations on the GUI thread by enclosing them in jmri.util.ThreadingUtil.runOnGUI calls so that the entire sequence of operations is done on the Swing thread. This is what's normally done in Swing applications, and it's a test of the real operation.
- But Assert operations need to be done on the test thread, so if you want to intermix Swing and test operations you can synchronize the threads by calling

new org.netbeans.jemmy.QueueTool().waitEmpty();

on the test thread. In some rare cases, you need to wait for the Swing queue to stay empty for a non-zero interval. In that case, use waitEmpty(20), where the argument (in this example 20) is how many milliseconds the queue has to remain empty before proceeding. We're not sure what the best value is; see Issue #5321 for some background discussion.
Using JFCUnit
You should not write new test classes using JFCUnit. Use Jemmy instead. This section is to document test classes we already have.
The JFCUnit library can control interactions with Swing objects. For a very simple example of the use of JFCUnit, see the test/jmri/util/SwingTestCaseTest.java file.
To use JFCUnit, you first inherit your class from SwingTestCase instead of TestCase.
This is enough to get basic operation of Swing tests; the
base class pauses the test thread until Swing (actually, the
AWT event mechanism) has completed all processing after every
Swing call in the test. (For this reason, the tests will run
much slower if you're e.g. moving the mouse cursor around
while they're running)
For more complex GUI testing, you can invoke various aspects of the interface and check internal state using test code.
Testing Script Code
JMRI ships with sample scripts. This section discusses how you can write simple tests for those to ensure they keep working.

Testing Jython sample scripts
Test scripts placed in jython/test are automatically invoked by java/test/jmri/jmrit/jython/SampleScriptTest.java. See the jmri_bindings_test.py sample for syntax, including examples of how to signal test failures.
In the future, this could be extended to pick up files automatically, to support xUnit testing, etc.
Issues
JUnit uses a custom classloader, which can cause problems finding singletons and starting Swing. If you get an error about not being able to find or load a class, suspect that adding the missing class to the test/junit/runner/excluded.properties file would fix it.

As a test only, you can try setting the "-noloading" option in the main() of whichever test class you're having trouble with:
static public void main(String[] args) {
    String[] testCaseName = {"-noloading", LogixTableActionTest.class.getName()};
    junit.swingui.TestRunner.main(testCaseName);
}
Please don't leave "-noloading" in place, as it prevents
people from rerunning the test dynamically. Instead, the
right long-term fix is to have all classes with JUnit loader
issues included in the
test/junit/runner/excluded.properties
file.
JUnit uses those properties to decide how to handle loading
and reloading of classes.
Migrating to JUnit4
JUnit4 is a significant upgrade of the JUnit tool from our previous JUnit3 standard. It brings new capabilities, but also changes how tests are structured. As of this writing (Fall 2018), we're committed to migrating all our tests to JUnit4, though the work is being done on a time-available basis (i.e. slowly).

The rest of this section discusses some aspects of moving to and using JUnit4.
Example of a JUnit4 test and the corresponding PackageTest.java file; note the lack of a main() procedure and other previously-present boilerplate code. These files are the result of this commit that migrated a test package to JUnit4.
Example of JUnit4 tests with a main()
procedure
A possible set of steps for conversion:
- Change the imports. Typically, these can be removed:

import junit.framework.Assert;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;
and you'll typically need
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
- The test class no longer inherits from TestCase, so replace public class MyClassTest extends TestCase with public class MyClassTest.
- Mark test methods with the "@Test" annotation:

@Test
public void testSomething() {
    ...
}
- If you have bare assert calls, like assertEquals(k, ind); change them to the JUnit4 form: Assert.assertEquals(k, ind);
- Many of our test classes have a "from here down is testing infrastructure" comment, followed by setup code. Some of that can be removed. First, remove the class constructor, e.g.

public MyClassTest(String s) {
    super(s);
}
- Next, remove the main method if there is one, e.g.:

// Main entry point
static public void main(String[] args) {
    apps.tests.Log4JFixture.initLogging();
    String[] testCaseName = {"-noloading", MyClassTest.class.getName()};
    junit.textui.TestRunner.main(testCaseName);
}
- Annotate the setUp() and tearDown() methods. Note: they have to be "public" methods, not "protected" or "private". They should end up looking like (plus your own content, of course):

@Before
public void setUp() {
    jmri.util.JUnitUtil.setUp();
}

@After
public void tearDown() {
    jmri.util.JUnitUtil.tearDown();
}
Make sure no references to the super.setUp() or super.tearDown() methods remain in either setUp() or tearDown(); just remove those lines.

A note on nesting: if you have @Before and/or @After in both a class and one of its superclasses, they will be reliably invoked in the useful way: first the @Before of the super class, then the @Before of your class, then the test, then the @After of your class, and finally the @After of the superclass. (See the sketch after these steps.)
In rare cases, you might want to use @BeforeClass and @AfterClass methods to wrap the entire test class. These have to be static and are invoked around (i.e. before and after) anything else in the test class. This can be used to e.g. wrap static initializers in the class under test.
- Finally, replace the test suite definition. JUnit3 was normally used with "run everything in this class automatically" by having a method that looked like this:
// test suite from all defined tests
public static Test suite() {
    TestSuite suite = new TestSuite(Z21MessageTest.class);
    return suite;
}
If that's what you've got, just remove the whole block.
If you've got more logic in the suite() routine, ask for help on the jmri-developers list. An example of the result of migrating a more complex case:

@RunWith(Suite.class)
@Suite.SuiteClasses({
    CMRISystemConnectionMemoTest.class,
    jmri.jmrix.cmri.serial.PackageTest.class
})
- In the PackageTest.java for this test package, under Test suite(), replace the line

suite.addTest(MyClassTest.suite());

with

suite.addTest(new junit.framework.JUnit4TestAdapter(MyClassTest.class));

leaving the order of tests unchanged to prevent possible side effects.
- Run the tests and check that all your tests were successfully included and run.
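The setUp()/tearDown() nesting described in the steps above, as a minimal sketch with illustrative class names:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class BaseTest {
    @Before
    public void baseSetUp() {        // runs first
        jmri.util.JUnitUtil.setUp();
    }

    @After
    public void baseTearDown() {     // runs last
        jmri.util.JUnitUtil.tearDown();
    }
}

// normally in its own file; shown here for compactness
class ConcreteTest extends BaseTest {
    @Before
    public void setUp() { }          // runs after baseSetUp()

    @After
    public void tearDown() { }       // runs before baseTearDown()

    @Test
    public void testSomething() { }  // wrapped by both pairs
}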
That's it! You've successfully migrated to native JUnit4.