Help:Wikifunctions/Function call metadata


Whenever Wikifunctions runs a function, it collects and reports information about the run, including any errors that have been raised, alongside a variety of basic metrics such as the run's duration, CPU usage, and memory usage. The purpose of this information is to give function contributors and users some awareness of the performance characteristics of particular function implementations.

Where is this shown?

This information, known as function call metadata, is displayed in the user interface, in a pop-up dialog available in four different settings:

  1. Immediately after invoking a function from the Evaluate a function call page (Special:EvaluateFunctionCall)
  2. When viewing test results on the page for a particular function
  3. When viewing test results on the page for a particular implementation
  4. When viewing test results on the page for a particular tester

In setting 1, the metadata dialog can be displayed by clicking the button labeled Show metrics (which may soon be renamed to Show metadata). In the other three settings, it is displayed by clicking the information icon (the letter 'i' inside a circle) for a particular tester run.

When viewing the metadata dialog in setting 1, the user sees metadata from the function run they just requested. When viewing the metadata dialog for a tester in the other three settings, what's shown is the metadata for the most recent run of that tester. Additional information about tester run metadata is given below, in Metadata for tester runs.

Metadata is collected in the orchestrator, evaluator, and executor components. See Function Evaluation for Wikifunctions for general information about these components.

What do the different bits of data mean?

The currently implemented metadata elements are described in the following sections. The headings for individual metadata elements, shown in bold (e.g., Implementation type), are the labels that appear in the metadata dialog for an English-language reader.

Implementation metadata

Implementation type
The type (BuiltIn, Evaluated, or Composition) of the implementation used to run the function. See the Function model for more about implementation types.
Implementation ID
The persistent ID, if there is one, of the implementation used to run the function. See the Function model for more about persistent IDs.
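
As an illustrative sketch only, these two elements amount to a pair of values attached to the run. The key names and ZID below are hypothetical, not the actual wire format:

  // A hypothetical sketch of the implementation metadata attached to a run;
  // the key names and ZID are illustrative, not the actual wire format.
  const implementationMetadata = {
      implementationType: 'Evaluated', // one of BuiltIn, Evaluated, or Composition
      implementationId: 'Z10001'       // persistent ID, if the implementation has one
  };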

Orchestrator metadata

Orchestration start time
Wall clock time when orchestration began, given to millisecond precision, in Coordinated Universal Time (UTC).
Orchestration end time
Wall clock time when orchestration finished, given to millisecond precision, in Coordinated Universal Time (UTC).
Orchestration duration
The time elapsed, given in milliseconds, between Orchestration start time and Orchestration end time.
Orchestration CPU usage
CPU time used by the orchestrator during the interval between Orchestration start time and Orchestration end time, given in milliseconds, as reported by the Node.js method process.cpuUsage().
Orchestration CPU usage must be interpreted carefully, because it doesn't necessarily reflect CPU time used exclusively for the current function call.  Depending on operational configuration and current load, it could reflect time spent on multiple different function calls, because the orchestrator may be configured to handle multiple calls in an interleaved fashion.  The implementation of this metric will be revisited in the future, after operational configuration has been more permanently determined.  See also Phabricator ticket T314953.
Orchestration memory usage
Orchestrator memory allocation at the moment when the orchestrator finished handling the function call, as reported by the Node.js method process.memoryUsage.rss().
Orchestration memory usage must be interpreted carefully, because it doesn't necessarily reflect memory allocation made exclusively for the current function call.  Depending on operational configuration, current load, and garbage collection behavior, it could reflect memory needed for function calls handled previously, or concurrently with the current function call.  The implementation of this metric will be revisited in the future, after operational configuration has been more permanently determined.  See also Phabricator ticket T314953.
Orchestration server
The virtual host on which the orchestrator ran while handling the function call, as reported by the Node.js method os.hostname(). As of this writing, this value is a Docker container ID.
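
As a rough sketch (not the orchestrator's actual code), the metrics above map onto standard Node.js calls along the following lines; the same pattern applies to the evaluator metrics in the next section:

  // A rough sketch of how the orchestration metrics above could be gathered
  // in Node.js; variable names are illustrative, not the orchestrator's code.
  const os = require( 'os' );

  const startTime = new Date();        // Orchestration start time (UTC, ms precision)
  const startCpu = process.cpuUsage(); // baseline for the CPU-usage delta

  // ... handle the function call ...

  const endTime = new Date();             // Orchestration end time
  const durationMs = endTime - startTime; // Orchestration duration, in milliseconds

  // Orchestration CPU usage: user + system time since the baseline;
  // process.cpuUsage() reports microseconds, so convert to milliseconds.
  const cpu = process.cpuUsage( startCpu );
  const cpuMs = ( cpu.user + cpu.system ) / 1000;

  const memoryRss = process.memoryUsage.rss(); // Orchestration memory usage, in bytes
  const server = os.hostname();                // Orchestration server (a container ID)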

Evaluator metadata

Evaluation start time
Wall clock time when evaluation began, given to millisecond precision, in Coordinated Universal Time (UTC).
Evaluation end time
Wall clock time when evaluation finished, given to millisecond precision, in Coordinated Universal Time (UTC).
Evaluation duration
The time elapsed, given in milliseconds, between Evaluation start time and Evaluation end time.
Evaluation CPU usage
CPU time used by the evaluator during the interval between Evaluation start time and Evaluation end time, given in milliseconds, as reported by the Node.js method process.cpuUsage().
The note for Orchestration CPU usage also applies here.
Evaluation memory usage
Evaluator memory allocation at the moment when the evaluator finished handling the function call, as reported by the Node.js method process.memoryUsage.rss().
The note for Orchestration memory usage also applies here.
Evaluation server
The virtual host on which the evaluator ran while handling the function call, as reported by the Node.js method os.hostname(). As of this writing, this value is a Docker container ID.

Executor metadata

Execution CPU usage
CPU time used by the executor, given in milliseconds, as reported by the ctime property returned by the pidusage Node.js package.
This metric must be interpreted carefully, because it doesn't necessarily give an accurate report of the total CPU usage by the executor of the current function call.  See also Phabricator ticket T313460.
Execution memory usage
Memory used by the executor, given in bytes, as reported by the memory property returned by the pidusage Node.js package.
This metric must be interpreted carefully, because it doesn't necessarily give an accurate report of the total memory usage by the executor of the current function call. See also Phabricator ticket T313460.
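
As a sketch, these two values could be sampled with the pidusage package roughly as follows; executorPid is a hypothetical process ID:

  // A rough sketch using the pidusage npm package to sample a running
  // executor process; executorPid is a hypothetical process ID.
  const pidusage = require( 'pidusage' );

  async function sampleExecutor( executorPid ) {
      const stats = await pidusage( executorPid );
      return {
          cpuMs: stats.ctime,       // Execution CPU usage: cumulative CPU time, in ms
          memoryBytes: stats.memory // Execution memory usage, in bytes
      };
  }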

Errors

Errors are currently reported, as instances of Z5 / ZError, from the orchestrator and evaluator components. Error conditions involving an executor are currently reported from the evaluator that spawned the executor, but in the near future we expect to begin reporting errors directly from executors. In rare circumstances, it's also possible that an error raised in the WikiLambda component might be reported.

Error(s)
A ZError that has been returned from the function call, presented in summary form for readability. Note that a ZError may have nested ZErrors.
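
For illustration, a nested ZError might look roughly like this before it is summarized; per the function model, Z5K1 holds the error type and Z5K2 the error value, but the specific types and message shown here are illustrative:

  // A simplified, hypothetical sketch of a nested Z5 / ZError. The specific
  // error types and message are illustrative, not taken from a real run.
  const error = {
      Z1K1: 'Z5',
      Z5K1: 'Z507', // outer error type (illustrative)
      Z5K2: {
          Z1K1: 'Z5',
          Z5K1: 'Z500', // nested, more specific error (illustrative)
          Z5K2: 'Executor returned an empty response.'
      }
  };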

Debug tracing

Execution debug logs
One or more strings emitted by the Wikifunctions.Debug command in an evaluated implementation (an implementation written in one of the supported programming languages, such as JavaScript or Python).

For more info about Wikifunctions.Debug, see Abstract_Wikipedia/Updates/2024-01-25.
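
As a minimal sketch, an evaluated JavaScript implementation might emit a debug string like this; the ZID Z12345 and its argument key are hypothetical:

  // A minimal sketch of Wikifunctions.Debug in an evaluated JavaScript
  // implementation; the ZID Z12345 and its argument key are hypothetical.
  function Z12345( Z12345K1 ) {
      Wikifunctions.Debug( 'received input: ' + Z12345K1 );
      return Z12345K1.toUpperCase();
  }

Each call to Wikifunctions.Debug contributes one string to the Execution debug logs shown in the dialog.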

Metadata for tester runs

Each run of a tester involves running two functions:

  1. The function being tested is run first.
  2. A result-checking function is then run to determine whether the result of the first function call is correct.
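
This two-step flow, and the way it feeds the metadata dialog (described below), could be sketched as follows; runFunction() and the object shapes are hypothetical stand-ins, not the actual orchestrator API:

  // A hypothetical sketch of a tester run; runFunction() and the object
  // shapes are illustrative stand-ins, not the actual orchestrator API.
  async function runTester( tester ) {
      // (1) Run the function being tested.
      const { result, metadata } = await runFunction( tester.functionCall );

      // (2) Run the result-checking function on the result of (1).
      const check = await runFunction( tester.resultCheck, [ result ] );

      if ( check.result === true && !check.error ) {
          // Correct result, no validator error: show exactly the metadata for (1).
          return { pass: true, metadata };
      }

      // Otherwise, Expected result, Actual result, and/or Validator error(s)
      // are shown alongside the metadata for (1).
      return {
          pass: false,
          metadata,
          expectedResult: tester.expectedResult,
          actualResult: result,
          validatorErrors: check.error
      };
  }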

If the result of (1) is correct, and no errors arise in the execution of (2), the metadata dialog for the tester shows exactly the metadata for (1). If, on the other hand, the first function call has returned an incorrect result, the metadata dialog shows these two additional elements alongside the metadata returned for (1):

Expected result
The result expected from (1), a Z1 / ZObject, as defined by the tester.
Actual result
The result actually returned from (1), a Z1 / ZObject.

Similarly, if an error arises in the execution of (2), that error is displayed along with the metadata returned for (1):

Validator error(s)
A ZError that has been returned from (2), presented in summary form for readability.

Testers are instances of Z20 / Test, and are described in greater detail in the Function model.

Caching of test results and metadata

Test results and metadata from tester runs are cached in a database as a performance optimization. As long as there have been no changes to the tested function, its implementation, or the tester itself, the cached metadata remains valid and the tester does not need to be rerun.
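
In other words, a cached result remains usable only while the revisions of those three pages are unchanged. A hypothetical key scheme for such a cache might look like:

  // A hypothetical sketch of the cache-validity rule: a cached tester result
  // is keyed by the revisions of the function, the implementation, and the
  // tester, so a change to any of the three yields a new key.
  function testResultCacheKey( functionRevision, implementationRevision, testerRevision ) {
      return [ functionRevision, implementationRevision, testerRevision ].join( ':' );
  }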