`unitizer` offers three functions to access the interactive review environment:

- `unitize` is used when you either want to generate a `unitizer` from a test file, or when you want to compare the re-evaluation of a test file to an existing `unitizer`.
- `unitize_dir` does what `unitize` does, except for a whole directory at a time.
- `review` is a helper function used when you want to review the contents of an existing `unitizer`. This is useful if you grow uncertain about tests that you previously approved and want to ensure they actually do what you want them to. You can review and potentially remove items from a `unitizer` with `review`.
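For orientation, here is roughly how each of these is invoked; this is a minimal sketch, and the file and directory paths are illustrative rather than required locations:

```r
library(unitizer)

unitize("tests/unitizer/fastlm1.R")        # create/update a unitizer from one test file
unitize_dir("tests/unitizer")              # same, for every test file in a directory
review("tests/unitizer/fastlm1.unitizer")  # re-inspect an existing unitizer store
```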
All three functions use the same interactive environment, though the rules therein differ slightly. For example, in `review` all the tests are considered passed since there is nothing to compare them to, and the interactive environment will step you through every passed test, whereas `unitize` will normally omit passed tests from the review process.
We will focus on `unitize` for the rest of this vignette since most of the commentary about it applies equally to `unitize_dir` and `review`.
To examine the interactive environment more thoroughly we will go back to the demo (you can run it with `demo(unitizer)`). This is the `unitizer` prompt right after our first failed test, when our `unitizer.fastlm` implementation was returning the wrong values:
```
> get_slope(res)
unitizer test fails on value mismatch:
*value* mismatch: Mean relative difference: 6943055624
@@ .ref @@
-  101
@@ .new @@
+  701248618125
```
Much like the `browser()` prompt, the `unitizer` prompt accepts several special expressions that allow you to control `unitizer` behavior. What the expressions are and what they do depends on context. We will review them in the context of the failed test described above. Look at what the `unitizer` prompt stated before we started reviewing our failed tests:
```
- Failed -----------------------------------------------------------------------

The 2 tests in this section failed because the new evaluations do not match the
reference values from the store.

Overwrite with new results ([Y]es, [N]o, [P]rev, [B]rowse, [R]erun, [Q]uit,
[H]elp)?
```
This clearly lays out all the special commands available to us:
- `Y` will accept the new value as the correct reference value to use for a test.
- `N` will keep the previous reference value as the reference value for future tests.
- `P` takes us back to the previously reviewed test (see "Test Navigation" next).
- `B` allows us to navigate to any previously reviewed test (see "Test Navigation" next).
- `R` toggles re-run mode; when you complete review or exit, `unitizer` will re-run the tests, which is useful if you made changes to your source code and re-installed your package from the `unitizer` prompt.
- `H` provides contextual help.
If you type any of those letters at the `unitizer` prompt, `unitizer` will respond as described above instead of evaluating the expression as it would be evaluated at the normal R console prompt. If you have a variable assigned to one of those letters and you wish to access it, you can do so with any expression that is not just the bare letter. For example, suppose we had a variable called `Y`; then commands such as `(Y)` or `print(Y)` would let us reach it. `unitizer` checks for an exact match of a user expression to the special command letters, so something like `(Y)` does not match `Y`, which allows you to reach the value stored in `Y`.
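To make this concrete, here is an illustrative exchange at the review prompt (the variable `Y` is hypothetical):

```
unitizer> Y          # exact match: treated as "Yes", accepts the test
unitizer> (Y)        # not an exact match: evaluates and prints the variable Y
unitizer> print(Y)   # likewise evaluated as a normal R expression
```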
If at any time you forget what `unitizer` options are available to you, you can just hit the "ENTER" key and `unitizer` will re-print the options to the screen.
You can accept all unreviewed tests in a sub-section, section, or `unitizer` with `YY`, `YYY`, or `YYYY` respectively. You can also reject them with `NN`, `NNN`, or `NNNN`. Please note that accepting multiple tests without reviewing them is a really bad idea, and you should only resort to these shortcuts when you are absolutely certain of what you are doing. The most common use case for these shortcuts is to drop multiple removed tests from a `unitizer`.
The `unitizer` prompt is designed to emulate the standard R prompt. For the most part you can type any expression that you would type at the R prompt and get the same result as you would there. This means you can examine the objects created by your test script, run R computations, etc.
There are, however, some subtle differences created by the structure of the evaluation environments:

- Special objects such as `.new` / `.NEW` and `.ref` / `.REF` let you review the results of tests (we will discuss these next).
- `traceback` and `.traceback` are masked by special `unitizer` versions (we will also discuss this next); you can use `base::traceback` / `base::.traceback` if you need the originals. Tracebacks from test errors are captured by `unitize`, though they will not show up in a call to `base::traceback`.
As we saw in the demo, there are special objects available at the prompt: `.new` (except for removed/deleted tests), and, for all but new tests, `.ref`. These objects contain the values produced by the newly evaluated test (`.new`) and by the test when it was previously run and accepted (`.ref`). `.new` might seem a bit superfluous since the user can always re-evaluate the test expression at the `unitizer` prompt to review the value, but if that evaluation is slow you can save a little time by using `.new` instead. `.ref`, on the other hand, is the only option you have to see what the test used to produce back when it was first accepted into the `unitizer` store.
`.new` and `.ref` contain the values produced by the tests, but sometimes it is useful to access other aspects of the test evaluation. To do so you can use the `.NEW` and `.REF` objects:

- `.NEW` prints general information about the test.
- `.NEW$value` returns the test value; equivalent to typing `.new` at the prompt.
- `.NEW$conditions` returns the list of conditions produced by the test.
- `.NEW$message` returns the stderr captured during test evaluation.
- `.NEW$output` returns the screen output captured during test evaluation (note this will often be similar to what you get from `.NEW$value`, since typing those expressions at the prompt leads to the value being printed).
- `.NEW$call` returns the test expression.
- `.NEW$aborted` returns whether the test expression invoked an "abort" restart (e.g. called `stop` at some point).
You can substitute `.REF` for `.NEW` in any of the above, provided that `.REF` is defined (i.e. this will not work when you are reviewing new tests, since by definition there is no corresponding reference test for those). If both `.NEW` and `.REF` are defined, then `.DIFF` will be defined too. `.DIFF` has the same structure as `.NEW`, but it contains the result of evaluating `diffobj::diffObj` between each corresponding component of `.NEW` and `.REF`. `.diff` is shorthand for `.DIFF$value`. If there are state differences (e.g. search path), you will be able to view those with `.DIFF$state`.
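Putting these pieces together, a review of a failed test might look something like this illustrative exchange:

```
unitizer> .new              # value from the new evaluation
unitizer> .ref              # value stored when the test was last accepted
unitizer> .NEW$conditions   # conditions the new evaluation produced
unitizer> .diff             # shorthand for .DIFF$value
```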
Typing `ls` at the `unitizer` prompt calls an `unitizer` version of the function (you can call the original with `base::ls()`). This is what happens when we type `ls()` at the first failed test in the `unitizer` we've been reviewing in this vignette:
```
$`objects in new test env:`
"res" "x"   "y"

$`objects in ref test env:`
"res" "x"   "y"

$`unitizer objects:`
".new" ".NEW" ".ref" ".REF"

Use `ref(.)` to access objects in ref test env
`.new` / `.ref` for test value, `.NEW` / `.REF` for details.

unitizer>
```
This special version of `ls` highlights that our environment is more complex than that at the typical R prompt. This is necessary to allow us to review both the newly evaluated objects as well as the objects from the reference `unitizer` store, and to compare them for differences. For instance, in this example, we can see that there are both new and reference copies of the `res`, `x`, and `y` objects. The reference copies are from the previous time we ran `unitize`. `ls` also notes what `unitizer` special objects are available.

When you type at the prompt the name of one of the objects `ls` lists, you will see the newly evaluated version of that variable. If you wish to see the reference value, then use the `ref` function:
```
unitizer> res
    intercept         slope           rsq
-3.541306e+13  7.012486e+11  9.386790e-01
attr(,"class")
"fastlm"
unitizer> ref(res)
   intercept        slope          rsq
-1717.000000   101.000000     0.938679
attr(,"class")
"fastlm"
```
Note that at times when you use `ls` at the `unitizer` prompt you may see something along the lines of:
```
$`objects in ref test env:`
"res" "x*" "y'"
```
where object names have symbols such as `*` or `'` appended to them. This happens because `unitizer` does not store the entire environment structure of the reference tests. Here is a description of the possible situations you can run into:
- `*`: the object existed during reference test evaluation, but is no longer available.
- `'`: the object existed during reference test evaluation and still does, but it has a different value than it did during reference test evaluation.
- `**`: the object exists now, but did not exist during reference test evaluation.
For more discussion see `?"healEnvs,unitizerItems,unitizer-method"` and the discussion of Patchwork Reference Environments.
Objects assigned right before a test are part of that test's environment, so they will always be available.
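To illustrate, consider a sketch of a test file; the assignment to `x` immediately precedes the test, so `x` is stored as part of that test's environment and remains accessible during later reviews (the file name and variables here are hypothetical):

```r
# tests/unitizer/example.R (hypothetical test file)
x <- 1:10  # assigned just before the test: stored with it
mean(x)    # the test expression; `x` is available when reviewing it later
```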
Errors that occur during test evaluation are handled by `unitizer`, so they do not register in the normal R traceback mechanism. Instead, `unitizer` stores the traces from the test evaluation and makes them available via internal versions of `traceback` and `.traceback` that mask the base ones at the interactive `unitizer` prompt. These behave similarly, but not identically, to their `base` counterparts. In particular, the parameter `x` must be NULL. You can access the `base` versions with e.g. `base::traceback`, but those will not display any tracebacks generated by `unitize`.
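For example, when reviewing a test that threw an error, the masked and `base` versions behave differently (illustrative):

```
unitizer> traceback()        # unitizer's version: prints the captured test trace
unitizer> base::traceback()  # base version: knows nothing of the test's error
```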
`unitize_dir` adds a layer of navigation. Here is what you see after running it on the demo package's test directory:
```
> (.unitizer.fastlm <- copy_fastlm_to_tmpdir())  # package directory
> unitize_dir(.unitizer.fastlm)
Inferred test directory location: private/var/folders/56/qcx6p6f94695mh7yw-
q9m6z_80000gq/T/RtmpJO7kjd/file43ac57df6164/unitizer.fastlm/tests/unitizer

Summary of files in common directory 'tests/unitizer':

                        Pass Fail  New
 *1.  fastlm1.R            -    -    4
 *2.  fastlm2.R            -    -    1
 *3.  unitizer.fastlm.R    -    -    3
.....................................
                           -    -    8

Legend:
* `unitizer` requires review

Type number of unitizer to review, 'A' to review all that require review

unitizer>
```
Each listing corresponds to a test file. If you were to type `1` at the prompt, you would see the equivalent of the `unitize` process in the demo, since "fastlm1.R" is the file we `unitize` in the demo. The `*` ahead of each file indicates that the file has tests that require review. In this case, all the files have new tests. After we type `1` and go through the `unitize` process for "fastlm1.R", we are returned to the `unitize_dir` prompt:
```
unitizer updated

Summary of files in common directory 'tests/unitizer':

                        Pass Fail  New
 $1.  fastlm1.R            ?    ?    ?
 *2.  fastlm2.R            -    -    1
 *3.  unitizer.fastlm.R    -    -    3
.....................................
                           ?    ?    ?

Legend:
* `unitizer` requires review
$ `unitizer` has been updated and needs to be re-evaluated to recompute summary

Type number of unitizer to review, 'A' to review all that require review,
'R' to re-run all updated

unitizer>
```
Because we updated "fastlm1.R", the statistics `unitize_dir` collected when it first ran all the tests are out of date, which is why they show up as question marks. The `$` also indicates that the "fastlm1.R" stats are out of date. There is nothing wrong with this, and you do not need to do anything about it, but if you want you can re-run any unitizers that need to be updated by typing "R" at the prompt. This is what happens if we do so:
```
unitizer> R

Summary of files in common directory 'tests/unitizer':

                        Pass Fail  New
  1.  fastlm1.R            4    -    -
 *2.  fastlm2.R            -    -    1
 *3.  unitizer.fastlm.R    -    -    3
.....................................
                           4    -    4

* `unitizer` requires review

Type number of unitizer to review, 'A' to review all that require review

unitizer>
```
You can now see that we added all the tests and that, upon re-running, they all passed since the source code for `unitizer.fastlm` has not changed. Notice how there is no longer a `*` ahead of the first test file.
Another option for reviewing tests is to type "A" at the prompt, which would cause `unitize_dir` to put you through each test file that requires review, in sequence.