Involved Source Files: allocs.go, benchmark.go, cover.go, example.go, fuzz.go, match.go, newcover.go, run_example.go, testing_other.go

Package testing provides support for automated testing of Go packages.
It is intended to be used in concert with the "go test" command, which automates
execution of any function of the form
    func TestXxx(*testing.T)
where Xxx does not start with a lowercase letter. The function name
serves to identify the test routine.
Within these functions, use the Error, Fail or related methods to signal failure.
To write a new test suite, create a file that
contains the TestXxx functions as described here,
and give that file a name ending in "_test.go".
The file will be excluded from regular
package builds but will be included when the "go test" command is run.
The test file can be in the same package as the one being tested,
or in a corresponding package with the suffix "_test".
If the test file is in the same package, it may refer to unexported
identifiers within the package, as in this example:
    package abs

    import "testing"

    func TestAbs(t *testing.T) {
        got := Abs(-1)
        if got != 1 {
            t.Errorf("Abs(-1) = %d; want 1", got)
        }
    }
If the file is in a separate "_test" package, the package being tested
must be imported explicitly and only its exported identifiers may be used.
This is known as "black box" testing.
    package abs_test

    import (
        "testing"

        "path_to_pkg/abs"
    )

    func TestAbs(t *testing.T) {
        got := abs.Abs(-1)
        if got != 1 {
            t.Errorf("Abs(-1) = %d; want 1", got)
        }
    }
For more detail, run "go help test" and "go help testflag".
# Benchmarks
Functions of the form
    func BenchmarkXxx(*testing.B)
are considered benchmarks, and are executed by the "go test" command when
its -bench flag is provided. Benchmarks are run sequentially.
For a description of the testing flags, see
https://golang.org/cmd/go/#hdr-Testing_flags.
A sample benchmark function looks like this:
    func BenchmarkRandInt(b *testing.B) {
        for range b.N {
            rand.Int()
        }
    }
The benchmark function must run the target code b.N times.
It is called multiple times with b.N adjusted until the
benchmark function lasts long enough to be timed reliably.
The output
    BenchmarkRandInt-8   68453040   17.8 ns/op
means that the loop ran 68453040 times at a speed of 17.8 ns per loop.
If a benchmark needs some expensive setup before running, the timer
may be reset:
    func BenchmarkBigLen(b *testing.B) {
        big := NewBig()
        b.ResetTimer()
        for range b.N {
            big.Len()
        }
    }
If a benchmark needs to test performance in a parallel setting, it may use
the RunParallel helper function; such benchmarks are intended to be used with
the go test -cpu flag:
    func BenchmarkTemplateParallel(b *testing.B) {
        templ := template.Must(template.New("test").Parse("Hello, {{.}}!"))
        b.RunParallel(func(pb *testing.PB) {
            var buf bytes.Buffer
            for pb.Next() {
                buf.Reset()
                templ.Execute(&buf, "World")
            }
        })
    }
A detailed specification of the benchmark results format is given
in https://golang.org/design/14313-benchmark-format.
There are standard tools for working with benchmark results at
https://golang.org/x/perf/cmd.
In particular, https://golang.org/x/perf/cmd/benchstat performs
statistically robust A/B comparisons.
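For example, a typical benchstat workflow (assuming benchstat has been installed with "go install golang.org/x/perf/cmd/benchstat@latest") collects several runs of each configuration and then compares the two result files:

    go test -bench=. -count=10 > old.txt
    # ... apply the change being evaluated ...
    go test -bench=. -count=10 > new.txt
    benchstat old.txt new.txt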
# Examples
The package also runs and verifies example code. Example functions may
include a concluding line comment that begins with "Output:" and is compared with
the standard output of the function when the tests are run. (The comparison
ignores leading and trailing space.) These are examples of an example:
    func ExampleHello() {
        fmt.Println("hello")
        // Output: hello
    }

    func ExampleSalutations() {
        fmt.Println("hello, and")
        fmt.Println("goodbye")
        // Output:
        // hello, and
        // goodbye
    }
The comment prefix "Unordered output:" is like "Output:", but matches any
line order:
    func ExamplePerm() {
        for _, value := range Perm(5) {
            fmt.Println(value)
        }
        // Unordered output: 4
        // 2
        // 1
        // 3
        // 0
    }
Example functions without output comments are compiled but not executed.
The naming conventions for declaring examples of a package, a function F, a type T, and a method M on type T are:

    func Example() { ... }
    func ExampleF() { ... }
    func ExampleT() { ... }
    func ExampleT_M() { ... }
Multiple example functions for a package/type/function/method may be provided by
appending a distinct suffix to the name. The suffix must start with a
lower-case letter.
    func Example_suffix() { ... }
    func ExampleF_suffix() { ... }
    func ExampleT_suffix() { ... }
    func ExampleT_M_suffix() { ... }
The entire test file is presented as the example when it contains a single
example function, at least one other function, type, variable, or constant
declaration, and no test or benchmark functions.
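For instance, reusing the hypothetical abs package from above, a file like the following would be presented in its entirety as the example, since it contains a single example function, one other function declaration, and no test or benchmark functions:

    package abs_test

    import (
        "fmt"

        "path_to_pkg/abs"
    )

    // printAbs is a helper; its presence makes the whole file the example.
    func printAbs(v int) {
        fmt.Println(abs.Abs(v))
    }

    func ExampleAbs() {
        printAbs(-2)
        // Output: 2
    }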
# Fuzzing
'go test' and the testing package support fuzzing, a testing technique where
a function is called with randomly generated inputs to find bugs not
anticipated by unit tests.
Functions of the form
    func FuzzXxx(*testing.F)
are considered fuzz tests.
For example:
    func FuzzHex(f *testing.F) {
        for _, seed := range [][]byte{{}, {0}, {9}, {0xa}, {0xf}, {1, 2, 3, 4}} {
            f.Add(seed)
        }
        f.Fuzz(func(t *testing.T, in []byte) {
            enc := hex.EncodeToString(in)
            out, err := hex.DecodeString(enc)
            if err != nil {
                t.Fatalf("%v: decode: %v", in, err)
            }
            if !bytes.Equal(in, out) {
                t.Fatalf("%v: not equal after round trip: %v", in, out)
            }
        })
    }
A fuzz test maintains a seed corpus, or a set of inputs which are run by
default, and can seed input generation. Seed inputs may be registered by
calling (*F).Add or by storing files in the directory testdata/fuzz/<Name>
(where <Name> is the name of the fuzz test) within the package containing
the fuzz test. Seed inputs are optional, but the fuzzing engine may find
bugs more efficiently when provided with a set of small seed inputs with good
code coverage. These seed inputs can also serve as regression tests for bugs
identified through fuzzing.
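Each seed corpus file is plain text: a version header on the first line, followed by one Go-syntax value per fuzz argument. For FuzzHex above, a file such as testdata/fuzz/FuzzHex/seed1 (the file name is arbitrary; this content is illustrative) could contain:

    go test fuzz v1
    []byte("\x01\x02\x03\x04")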
The function passed to (*F).Fuzz within the fuzz test is considered the fuzz
target. A fuzz target must accept a *T parameter, followed by one or more
parameters for random inputs. The types of arguments passed to (*F).Add must
be identical to the types of these parameters. The fuzz target may signal
that it's found a problem the same way tests do: by calling T.Fail (or any
method that calls it like T.Error or T.Fatal) or by panicking.
When fuzzing is enabled (by setting the -fuzz flag to a regular expression
that matches a specific fuzz test), the fuzz target is called with arguments
generated by repeatedly making random changes to the seed inputs. On
supported platforms, 'go test' compiles the test executable with fuzzing
coverage instrumentation. The fuzzing engine uses that instrumentation to
find and cache inputs that expand coverage, increasing the likelihood of
finding bugs. If the fuzz target fails for a given input, the fuzzing engine
writes the inputs that caused the failure to a file in the directory
testdata/fuzz/<Name> within the package directory. This file later serves as
a seed input. If the file can't be written at that location (for example,
because the directory is read-only), the fuzzing engine writes the file to
the fuzz cache directory within the build cache instead.
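For example, to fuzz the FuzzHex test above, optionally bounding the run with the -fuzztime flag:

    go test -fuzz=FuzzHex
    go test -fuzz=FuzzHex -fuzztime=30s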
When fuzzing is disabled, the fuzz target is called with the seed inputs
registered with F.Add and seed inputs from testdata/fuzz/<Name>. In this
mode, the fuzz test acts much like a regular test, with subtests started
with F.Fuzz instead of T.Run.
See https://go.dev/doc/fuzz for documentation about fuzzing.
# Skipping
Tests or benchmarks may be skipped at run time with a call to
the Skip method of *T or *B:
    func TestTimeConsuming(t *testing.T) {
        if testing.Short() {
            t.Skip("skipping test in short mode.")
        }
        ...
    }
The Skip method of *T can be used in a fuzz target if the input is invalid,
but should not be considered a failing input. For example:
    func FuzzJSONMarshaling(f *testing.F) {
        f.Fuzz(func(t *testing.T, b []byte) {
            var v interface{}
            if err := json.Unmarshal(b, &v); err != nil {
                t.Skip()
            }
            if _, err := json.Marshal(v); err != nil {
                t.Errorf("Marshal: %v", err)
            }
        })
    }
# Subtests and Sub-benchmarks
The Run methods of T and B allow defining subtests and sub-benchmarks,
without having to define separate functions for each. This enables uses
like table-driven benchmarks and creating hierarchical tests.
It also provides a way to share common setup and tear-down code:
    func TestFoo(t *testing.T) {
        // <setup code>
        t.Run("A=1", func(t *testing.T) { ... })
        t.Run("A=2", func(t *testing.T) { ... })
        t.Run("B=1", func(t *testing.T) { ... })
        // <tear-down code>
    }
Each subtest and sub-benchmark has a unique name: the combination of the name
of the top-level test and the sequence of names passed to Run, separated by
slashes, with an optional trailing sequence number for disambiguation. In
TestFoo above, for example, the subtests are named TestFoo/A=1, TestFoo/A=2,
and TestFoo/B=1.
The argument to the -run, -bench, and -fuzz command-line flags is an unanchored regular
expression that matches the test's name. For tests with multiple slash-separated
elements, such as subtests, the argument is itself slash-separated, with
expressions matching each name element in turn. Because it is unanchored, an
empty expression matches any string.
For example, using "matching" to mean "whose name contains":
    go test -run ''        # Run all tests.
    go test -run Foo       # Run top-level tests matching "Foo", such as "TestFooBar".
    go test -run Foo/A=    # For top-level tests matching "Foo", run subtests matching "A=".
    go test -run /A=1      # For all top-level tests, run subtests matching "A=1".
    go test -fuzz FuzzFoo  # Fuzz the target matching "FuzzFoo".
The -run argument can also be used to run a specific value in the seed
corpus, for debugging. For example:
    go test -run=FuzzFoo/9ddb952d9814
The -fuzz and -run flags can both be set, in order to fuzz a target but
skip the execution of all other tests.
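For example, a common idiom for this is to pass -run an anchored empty pattern, which matches no test name:

    go test -fuzz=FuzzFoo -run='^$'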
Subtests can also be used to control parallelism. A parent test will only
complete once all of its subtests complete. In this example, all tests are
run in parallel with each other, and only with each other, regardless of
other top-level tests that may be defined:
    func TestGroupedParallel(t *testing.T) {
        for _, tc := range tests {
            tc := tc // capture range variable
            t.Run(tc.Name, func(t *testing.T) {
                t.Parallel()
                ...
            })
        }
    }
Run does not return until parallel subtests have completed, providing a way
to clean up after a group of parallel tests:
    func TestTeardownParallel(t *testing.T) {
        // This Run will not return until the parallel tests finish.
        t.Run("group", func(t *testing.T) {
            t.Run("Test1", parallelTest1)
            t.Run("Test2", parallelTest2)
            t.Run("Test3", parallelTest3)
        })
        // <tear-down code>
    }
# Main
It is sometimes necessary for a test or benchmark program to do extra setup or teardown
before or after it executes. It is also sometimes necessary to control
which code runs on the main thread. To support these and other cases,
if a test file contains a function:
    func TestMain(m *testing.M)
then the generated test will call TestMain(m) instead of running the tests or benchmarks
directly. TestMain runs in the main goroutine and can do whatever setup
and teardown is necessary around a call to m.Run. m.Run will return an exit
code that may be passed to os.Exit. If TestMain returns, the test wrapper
will pass the result of m.Run to os.Exit itself.
When TestMain is called, flag.Parse has not been run. If TestMain depends on
command-line flags, including those of the testing package, it should call
flag.Parse explicitly. Command line flags are always parsed by the time test
or benchmark functions run.
A simple implementation of TestMain is:
    func TestMain(m *testing.M) {
        // call flag.Parse() here if TestMain uses flags
        m.Run()
    }
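A sketch of setup and teardown around m.Run might look like the following, where setupTestDatabase is a hypothetical helper:

    func TestMain(m *testing.M) {
        db := setupTestDatabase() // hypothetical: acquire shared test resources
        code := m.Run()           // run the tests and benchmarks
        db.Close()                // release resources explicitly; deferred calls would not run after os.Exit
        os.Exit(code)             // report the result of m.Run
    }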
TestMain is a low-level primitive and should not be necessary for casual
testing needs, where ordinary test functions suffice.
Package-Level Type Names (total 13)
B is a type passed to [Benchmark] functions to manage benchmark timing and to specify the number of iterations to run.

A benchmark ends when its Benchmark function returns or calls any of the methods FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods must be called only from the goroutine running the Benchmark function. The other reporting methods, such as the variations of Log and Error, may be called simultaneously from multiple goroutines.

Like in tests, benchmark logs are accumulated during execution and dumped to standard output when done. Unlike in tests, benchmark logs are always printed, so as not to hide output whose existence may be affecting benchmark results.

Fields:

    N int // the number of iterations to run

Methods:

Cleanup registers a function to be called when the test (or subtest) and all its subtests complete. Cleanup functions will be called in last added, first called order.

Elapsed returns the measured elapsed time of the benchmark. The duration reported by Elapsed matches the one measured by [B.StartTimer], [B.StopTimer], and [B.ResetTimer].

Error is equivalent to Log followed by Fail.

Errorf is equivalent to Logf followed by Fail.

Fail marks the function as having failed but continues execution.

FailNow marks the function as having failed and stops its execution by calling runtime.Goexit (which then runs all deferred calls in the current goroutine). Execution will continue at the next test or benchmark. FailNow must be called from the goroutine running the test or benchmark function, not from other goroutines created during the test. Calling FailNow does not stop those other goroutines.

Failed reports whether the function has failed.

Fatal is equivalent to Log followed by FailNow.

Fatalf is equivalent to Logf followed by FailNow.

Helper marks the calling function as a test helper function. When printing file and line information, that function will be skipped. Helper may be called simultaneously from multiple goroutines.

Log formats its arguments using default formatting, analogous to Println, and records the text in the error log. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Logf formats its arguments according to the format, analogous to Printf, and records the text in the error log. A final newline is added if not provided. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Name returns the name of the running (sub-) test or benchmark. The name will include the name of the test along with the names of any nested sub-tests. If two sibling sub-tests have the same name, Name will append a suffix to guarantee the returned name is unique.

ReportAllocs enables malloc statistics for this benchmark. It is equivalent to setting -test.benchmem, but it only affects the benchmark function that calls ReportAllocs.

ReportMetric adds "n unit" to the reported benchmark results. If the metric is per-iteration, the caller should divide by b.N, and by convention units should end in "/op". ReportMetric overrides any previously reported value for the same unit. ReportMetric panics if unit is the empty string or if unit contains any whitespace. If unit is a unit normally reported by the benchmark framework itself (such as "allocs/op"), ReportMetric will override that metric. Setting "ns/op" to 0 will suppress that built-in metric.

ResetTimer zeroes the elapsed benchmark time and memory allocation counters and deletes user-reported metrics. It does not affect whether the timer is running.

Run benchmarks f as a subbenchmark with the given name. It reports whether there were any failures. A subbenchmark is like any other benchmark. A benchmark that calls Run at least once will not be measured itself and will be called once with N=1.

RunParallel runs a benchmark in parallel. It creates multiple goroutines and distributes b.N iterations among them. The number of goroutines defaults to GOMAXPROCS. To increase parallelism for non-CPU-bound benchmarks, call [B.SetParallelism] before RunParallel. RunParallel is usually used with the go test -cpu flag. The body function will be run in each goroutine. It should set up any goroutine-local state and then iterate until pb.Next returns false. It should not use the [B.StartTimer], [B.StopTimer], or [B.ResetTimer] functions, because they have global effect. It should also not call [B.Run]. RunParallel reports ns/op values as wall time for the benchmark as a whole, not the sum of wall time or CPU time over each parallel goroutine.

SetBytes records the number of bytes processed in a single operation. If this is called, the benchmark will report ns/op and MB/s.

SetParallelism sets the number of goroutines used by [B.RunParallel] to p*GOMAXPROCS. There is usually no need to call SetParallelism for CPU-bound benchmarks. If p is less than 1, this call will have no effect.

Setenv calls os.Setenv(key, value) and uses Cleanup to restore the environment variable to its original value after the test. Because Setenv affects the whole process, it cannot be used in parallel tests or tests with parallel ancestors.

Skip is equivalent to Log followed by SkipNow.

SkipNow marks the test as having been skipped and stops its execution by calling [runtime.Goexit]. If a test fails (see Error, Errorf, Fail) and is then skipped, it is still considered to have failed. Execution will continue at the next test or benchmark. See also FailNow. SkipNow must be called from the goroutine running the test, not from other goroutines created during the test. Calling SkipNow does not stop those other goroutines.

Skipf is equivalent to Logf followed by SkipNow.

Skipped reports whether the test was skipped.

StartTimer starts timing a test. This function is called automatically before a benchmark starts, but it can also be used to resume timing after a call to [B.StopTimer].

StopTimer stops timing a test. This can be used to pause the timer while performing complex initialization that you don't want to measure.

TempDir returns a temporary directory for the test to use. The directory is automatically removed when the test and all its subtests complete. Each subsequent call to t.TempDir returns a unique directory; if the directory creation fails, TempDir terminates the test by calling Fatal.

*B implements TB.
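As an illustration of ReportMetric, a sketch along the lines of the examples in the Go documentation (BenchmarkSort and the compares counter are illustrative names) reports a custom per-iteration metric:

    func BenchmarkSort(b *testing.B) {
        var compares int64
        for range b.N {
            s := []int{5, 4, 3, 2, 1}
            sort.Slice(s, func(i, j int) bool {
                compares++
                return s[i] < s[j]
            })
        }
        // Report the average number of comparisons per iteration;
        // by convention, per-iteration units end in "/op".
        b.ReportMetric(float64(compares)/float64(b.N), "compares/op")
    }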
BenchmarkResult contains the results of a benchmark run.

Fields (names and types as declared in the testing package):

    N         int                // the number of iterations
    T         time.Duration      // the total time taken
    Bytes     int64              // bytes processed in one iteration
    MemAllocs uint64             // the total number of memory allocations
    MemBytes  uint64             // the total number of bytes allocated
    Extra     map[string]float64 // records additional metrics reported by ReportMetric

Methods:

AllocedBytesPerOp returns the "B/op" metric, which is calculated as r.MemBytes / r.N.

AllocsPerOp returns the "allocs/op" metric, which is calculated as r.MemAllocs / r.N.

MemString returns r.AllocedBytesPerOp and r.AllocsPerOp in the same format as 'go test'.

NsPerOp returns the "ns/op" metric.

String returns a summary of the benchmark results. It follows the benchmark result line format from https://golang.org/design/14313-benchmark-format, not including the benchmark name. Extra metrics override built-in metrics of the same name. String does not include allocs/op or B/op, since those are reported by [BenchmarkResult.MemString].

Implemented interfaces:

    BenchmarkResult : expvar.Var
    BenchmarkResult : fmt.Stringer

Related functions:

    func Benchmark(f func(b *B)) BenchmarkResult
CoverBlock records the coverage data for a single basic block. The fields are 1-indexed, as in an editor: the opening line of the file is number 1, for example. Columns are measured in bytes.

NOTE: This struct is internal to the testing infrastructure and may change. It is not covered (yet) by the Go 1 compatibility guidelines.

Fields (names and types as declared in the testing package):

    Line0 uint32 // line number for block start
    Col0  uint16 // column number for block start
    Line1 uint32 // line number for block end
    Col1  uint16 // column number for block end
    Stmts uint16 // number of statements included in this block
F is a type passed to fuzz tests.

Fuzz tests run generated inputs against a provided fuzz target, which can find and report potential bugs in the code being tested.

A fuzz test runs the seed corpus by default, which includes entries provided by (*F).Add and entries in the testdata/fuzz/<FuzzTestName> directory. After any necessary setup and calls to (*F).Add, the fuzz test must then call (*F).Fuzz to provide the fuzz target. See the testing package documentation for an example, and see the [F.Fuzz] and [F.Add] method documentation for details.

*F methods can only be called before (*F).Fuzz. Once the test is executing the fuzz target, only (*T) methods can be used. The only *F methods that are allowed in the (*F).Fuzz function are (*F).Failed and (*F).Name.

Methods:

Add will add the arguments to the seed corpus for the fuzz test. This will be a no-op if called after or within the fuzz target, and args must match the arguments for the fuzz target.

Cleanup registers a function to be called when the test (or subtest) and all its subtests complete. Cleanup functions will be called in last added, first called order.

Error is equivalent to Log followed by Fail.

Errorf is equivalent to Logf followed by Fail.

Fail marks the function as having failed but continues execution.

FailNow marks the function as having failed and stops its execution by calling runtime.Goexit (which then runs all deferred calls in the current goroutine). Execution will continue at the next test or benchmark. FailNow must be called from the goroutine running the test or benchmark function, not from other goroutines created during the test. Calling FailNow does not stop those other goroutines.

Failed reports whether the function has failed.

Fatal is equivalent to Log followed by FailNow.

Fatalf is equivalent to Logf followed by FailNow.

Fuzz runs the fuzz function, ff, for fuzz testing. If ff fails for a set of arguments, those arguments will be added to the seed corpus.

ff must be a function with no return value whose first argument is *T and whose remaining arguments are the types to be fuzzed. For example:

    f.Fuzz(func(t *testing.T, b []byte, i int) { ... })

The following types are allowed: []byte, string, bool, byte, rune, float32, float64, int, int8, int16, int32, int64, uint, uint8, uint16, uint32, uint64. More types may be supported in the future.

ff must not call any *F methods, e.g. (*F).Log, (*F).Error, (*F).Skip. Use the corresponding *T method instead. The only *F methods that are allowed in the (*F).Fuzz function are (*F).Failed and (*F).Name.

This function should be fast and deterministic, and its behavior should not depend on shared state. No mutable input arguments, or pointers to them, should be retained between executions of the fuzz function, as the memory backing them may be mutated during a subsequent invocation. ff must not modify the underlying data of the arguments provided by the fuzzing engine.

When fuzzing, F.Fuzz does not return until a problem is found, time runs out (set with -fuzztime), or the test process is interrupted by a signal. F.Fuzz should be called exactly once, unless F.Skip or [F.Fail] is called beforehand.

Helper marks the calling function as a test helper function. When printing file and line information, that function will be skipped. Helper may be called simultaneously from multiple goroutines.

Log formats its arguments using default formatting, analogous to Println, and records the text in the error log. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Logf formats its arguments according to the format, analogous to Printf, and records the text in the error log. A final newline is added if not provided. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Name returns the name of the running (sub-) test or benchmark. The name will include the name of the test along with the names of any nested sub-tests. If two sibling sub-tests have the same name, Name will append a suffix to guarantee the returned name is unique.

Setenv calls os.Setenv(key, value) and uses Cleanup to restore the environment variable to its original value after the test. Because Setenv affects the whole process, it cannot be used in parallel tests or tests with parallel ancestors.

Skip is equivalent to Log followed by SkipNow.

SkipNow marks the test as having been skipped and stops its execution by calling [runtime.Goexit]. If a test fails (see Error, Errorf, Fail) and is then skipped, it is still considered to have failed. Execution will continue at the next test or benchmark. See also FailNow. SkipNow must be called from the goroutine running the test, not from other goroutines created during the test. Calling SkipNow does not stop those other goroutines.

Skipf is equivalent to Logf followed by SkipNow.

Skipped reports whether the test was skipped.

TempDir returns a temporary directory for the test to use. The directory is automatically removed when the test and all its subtests complete. Each subsequent call to t.TempDir returns a unique directory; if the directory creation fails, TempDir terminates the test by calling Fatal.

*F implements TB.
InternalFuzzTarget is an internal type but exported because it is cross-package; it is part of the implementation of the "go test" command.

Fields:

    Fn   func(f *F)
    Name string

Related functions:

    func MainStart(deps testDeps, tests []InternalTest, benchmarks []InternalBenchmark, fuzzTargets []InternalFuzzTarget, examples []InternalExample) *M
A PB is used by RunParallel for running parallel benchmarks.

Next reports whether there are more iterations to execute.
T is a type passed to Test functions to manage test state and support formatted test logs.

A test ends when its Test function returns or calls any of the methods FailNow, Fatal, Fatalf, SkipNow, Skip, or Skipf. Those methods, as well as the Parallel method, must be called only from the goroutine running the Test function. The other reporting methods, such as the variations of Log and Error, may be called simultaneously from multiple goroutines.

Methods:

Cleanup registers a function to be called when the test (or subtest) and all its subtests complete. Cleanup functions will be called in last added, first called order.

Deadline reports the time at which the test binary will have exceeded the timeout specified by the -timeout flag. The ok result is false if the -timeout flag indicates "no timeout" (0).

Error is equivalent to Log followed by Fail.

Errorf is equivalent to Logf followed by Fail.

Fail marks the function as having failed but continues execution.

FailNow marks the function as having failed and stops its execution by calling runtime.Goexit (which then runs all deferred calls in the current goroutine). Execution will continue at the next test or benchmark. FailNow must be called from the goroutine running the test or benchmark function, not from other goroutines created during the test. Calling FailNow does not stop those other goroutines.

Failed reports whether the function has failed.

Fatal is equivalent to Log followed by FailNow.

Fatalf is equivalent to Logf followed by FailNow.

Helper marks the calling function as a test helper function. When printing file and line information, that function will be skipped. Helper may be called simultaneously from multiple goroutines.

Log formats its arguments using default formatting, analogous to Println, and records the text in the error log. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Logf formats its arguments according to the format, analogous to Printf, and records the text in the error log. A final newline is added if not provided. For tests, the text will be printed only if the test fails or the -test.v flag is set. For benchmarks, the text is always printed to avoid having performance depend on the value of the -test.v flag.

Name returns the name of the running (sub-) test or benchmark. The name will include the name of the test along with the names of any nested sub-tests. If two sibling sub-tests have the same name, Name will append a suffix to guarantee the returned name is unique.

Parallel signals that this test is to be run in parallel with (and only with) other parallel tests. When a test is run multiple times due to use of -test.count or -test.cpu, multiple instances of a single test never run in parallel with each other.

Run runs f as a subtest of t called name. It runs f in a separate goroutine and blocks until f returns or calls t.Parallel to become a parallel test. Run reports whether f succeeded (or at least did not fail before calling t.Parallel). Run may be called simultaneously from multiple goroutines, but all such calls must return before the outer test function for t returns.

Setenv calls os.Setenv(key, value) and uses Cleanup to restore the environment variable to its original value after the test. Because Setenv affects the whole process, it cannot be used in parallel tests or tests with parallel ancestors.

Skip is equivalent to Log followed by SkipNow.

SkipNow marks the test as having been skipped and stops its execution by calling [runtime.Goexit]. If a test fails (see Error, Errorf, Fail) and is then skipped, it is still considered to have failed. Execution will continue at the next test or benchmark. See also FailNow. SkipNow must be called from the goroutine running the test, not from other goroutines created during the test. Calling SkipNow does not stop those other goroutines.

Skipf is equivalent to Logf followed by SkipNow.

Skipped reports whether the test was skipped.

TempDir returns a temporary directory for the test to use. The directory is automatically removed when the test and all its subtests complete. Each subsequent call to t.TempDir returns a unique directory; if the directory creation fails, TempDir terminates the test by calling Fatal.

*T implements TB.

Functions in other packages taking a *T include:

    func testing/slogtest.Run(t *T, newHandler func(*T) slog.Handler, result func(*T) map[string]any)
    func crypto/internal/cryptotest.TestHash(t *T, mh cryptotest.MakeHash)
    func internal/testenv.ParallelOn64Bit(t *T)
    func vendor/golang.org/x/net/nettest.TestConn(t *T, mp nettest.MakePipe)
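A short sketch combining several of these helpers (the file contents and the CONFIG_PATH variable name are illustrative):

    func TestLoadConfig(t *testing.T) {
        dir := t.TempDir() // removed automatically when the test completes
        path := filepath.Join(dir, "config.json")
        if err := os.WriteFile(path, []byte(`{}`), 0o600); err != nil {
            t.Fatal(err)
        }
        // Setenv restores the variable via Cleanup; the test must not be parallel.
        t.Setenv("CONFIG_PATH", path)
        t.Cleanup(func() {
            t.Log("runs after the test and its subtests complete")
        })
    }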
Package-Level Functions

AllocsPerRun returns the average number of allocations during calls to f.
Although the return value has type float64, it will always be an integral value.
To compute the number of allocations, the function will first be run once as
a warm-up. The average number of allocations over the specified number of
runs will then be measured and returned.
AllocsPerRun sets GOMAXPROCS to 1 during its measurement and will restore
it before returning.
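A minimal sketch of AllocsPerRun in a test (the allocation bound asserted here is an assumption for illustration):

    func TestSprintAllocs(t *testing.T) {
        avg := testing.AllocsPerRun(100, func() {
            _ = fmt.Sprint(42) // code under measurement
        })
        if avg > 1 {
            t.Errorf("got %g allocs per run; want at most 1", avg)
        }
    }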
Benchmark benchmarks a single function. It is useful for creating
custom benchmarks that do not use the "go test" command.
If f depends on testing flags, then [Init] must be used to register
those flags before calling Benchmark and before calling [flag.Parse].
If f calls Run, the result will be an estimate of running all its
subbenchmarks that don't call Run in sequence in a single benchmark.
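A sketch of Benchmark outside "go test" (hashing a fixed input is an arbitrary workload chosen for illustration):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "testing"
    )

    func main() {
        res := testing.Benchmark(func(b *testing.B) {
            for range b.N {
                sha256.Sum256([]byte("hello"))
            }
        })
        fmt.Println(res) // e.g. "  500000   2500 ns/op" (illustrative numbers)
    }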
Coverage reports the current code coverage as a fraction in the range [0, 1].
If coverage is not enabled, Coverage returns 0.
When running a large set of sequential test cases, checking Coverage after each one
can be useful for identifying which test cases exercise new code paths.
It is not a replacement for the reports generated by 'go test -cover' and
'go tool cover'.
CoverMode reports what the test coverage mode is set to. The
values are "set", "count", or "atomic". The return value will be
empty if test coverage is not enabled.
Init registers testing flags. These flags are automatically registered by
the "go test" command before running test functions, so Init is only needed
when calling functions such as Benchmark without using "go test".
Init is not safe to call concurrently. It has no effect if it was already called.
Main is an internal function, part of the implementation of the "go test" command.
It was exported because it is cross-package and predates "internal" packages.
It is no longer used by "go test" but preserved, as much as possible, for other
systems that simulate "go test" using Main, but Main sometimes cannot be updated as
new functionality is added to the testing package.
Systems simulating "go test" should be updated to use MainStart.
MainStart is meant for use by tests generated by 'go test'.
It is not meant to be called directly and is not subject to the Go 1 compatibility document.
It may change signature from release to release.
RegisterCover records the coverage data accumulators for the tests.
NOTE: This function is internal to the testing infrastructure and may change.
It is not covered (yet) by the Go 1 compatibility guidelines.
RunBenchmarks is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunExamples is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
RunTests is an internal function but exported because it is cross-package;
it is part of the implementation of the "go test" command.
Short reports whether the -test.short flag is set.
Testing reports whether the current code is being run in a test.
This will report true in programs created by "go test",
false in programs created by "go build".
Verbose reports whether the -test.v flag is set.