Since its release in 2010, Instruments has seen only tepid development, with no notable updates in the last year. But after that quiet year, the Instruments team finally had a release to look forward to: Session 410, Creating Custom Instruments.

Instruments 10

Instruments is a powerful and flexible performance analysis tool integrated into Xcode's developer toolset. We can analyze and test various performance problems using its different instruments, such as Leaks for memory leaks and Time Profiler for UI lag. So what updates did Apple make to Instruments this year? The official Xcode 10 release notes list the following points:

  • Developers can flexibly and quickly create and share custom instruments built on os_signpost, tailored to their own requirements;
  • Instruments automatically displays data marked by os_signpost calls in your code;
  • A new graph mode for thread analysis in System Trace;
  • All custom instruments in Instruments 10 are built on os_signpost, so custom instruments based on DTrace are no longer supported.
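As a quick illustration of the second point, os_signpost can feed the Instruments timeline with no custom package at all, through the built-in Points of Interest category. A minimal sketch, assuming iOS 12/macOS 10.14 or later (the subsystem string is a placeholder, not from the session):

```swift
import os.signpost

// Events logged to the Points of Interest category show up
// automatically in Instruments' timeline, no custom package needed.
let poiLog = OSLog(subsystem: "com.example.app", category: .pointsOfInterest)

// A point event marking a single moment in time.
os_signpost(.event, log: poiLog, name: "User tapped Buy")
```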

The most important point is the ability to build your own instrument. In previous versions you could create a custom instrument, but the procedure was cumbersome and the graphical support was poor. Worse, because data collection was based on DTrace, custom instruments could not run on a real device. Instruments 10 addresses all of this: the new os_signpost-based architecture supports every platform, and the "Standard UI" and "Analysis Core" make custom instruments far more flexible and convenient. The session is organized around how to create a custom instrument and is divided into four major sections:

  • Why create custom instruments
  • The architecture of Instruments 10
  • Elementary, intermediate, and advanced applications of custom instruments
  • Best practices for custom instruments

Expert systems and the CLIPS language are also covered, if you are interested. Compared with app developers, this article may be more useful to testers, especially those interested in performance testing.

Why create custom instruments

We all know Instruments ships with a number of handy built-in tools, which Apple created through the same template mechanism and integrated into the Instruments startup selection screen — Leaks and Time Profiler, mentioned above, among them. The startup screen of Instruments 10 is similar to the previous version.

The architecture of Instruments 10

Evolution of architecture

Before introducing the architecture of Instruments 10, let's review how Apple maintained Instruments components in the past.

The new architecture consists of two modules: the "Standard UI" and the "Analysis Core".

Introducing the interface

Elementary, intermediate, and advanced applications of custom instruments

Elementary application

With enough theoretical knowledge behind us, we can try to create a simple custom instrument. A custom instrument lives in an Instruments Package project, whose configuration is a single .instrpkg XML file.

1. Import the required Schema

    <!-- MARK: Import the schema you want to use -->
    <import-schema>tick</import-schema>

2. Configure the instrument's Standard UI and Analysis Core

    <!-- MARK: Import the schema you want to use -->
    <import-schema>tick</import-schema>
    <instrument>
        <!-- MARK: Basic information about this instrument -->
        <id>com.Parsifal.TicksDemo</id>
        <title>Ticks</title>
        <category>Behavior</category>
        <purpose>Instrument drawing ticks every 100ms</purpose>
        <icon>Generic</icon>

        <!-- MARK: The table described here is stored and parsed by the Analysis Core and provided to the Standard UI -->
        <create-table>
            <id>tick-table</id>
            <!-- MARK: The schema defines the columns of the table -->
            <schema-ref>tick</schema-ref>
        </create-table>

        <!-- MARK: Graph view shown in the track area (optional) -->
        <graph>
            <title>Ticks</title>
            <lane>
                <title>Lane</title>
                <!-- MARK: Reference the table defined above by its id -->
                <table-ref>tick-table</table-ref>
                <plot>
                    <value-from>time</value-from>
                </plot>
            </lane>
        </graph>

        <!-- MARK: This describes the data shown in the detail view -->
        <list>
            <title>Ticks</title>
            <!-- MARK: Reference the table defined above by its id -->
            <table-ref>tick-table</table-ref>
            <column>time</column>
        </list>
    </instrument>

At this point all of the coding is done. As we write, we find that Apple has prepared many code snippets to help with the configuration, and the configuration is checked at build time with error messages that are easy to debug. After building and running, choose Blank in the Instruments template chooser that pops up, then find our tool in the Library of the trace window. Drag it into the document and it runs just like any built-in instrument.

Installed packages can be managed under Instruments -> Preferences -> Packages.

Intermediate application

In this section we will cover the "Standard UI" and "Analysis Core" modules in more detail.

Standard UI

The "Standard UI" module provides many simple but useful elements for building attractive and useful instruments. Here are some of the most common; there is more to explore once you install Instruments 10.

Graph view elements

  • <plot>: delineates a single data channel for the graph view, locating the column through a value such as time in our Ticks example;
  • <plot-template>: similar to <plot>, except that it automatically creates one channel per value of an instance-by column;
  • <histogram>: draws histogram graphs over a given time slice, as used in the System Trace instrument;

Detail view elements

  • <list>: creates a table of rows, seen in nearly all of the built-in tools;
  • <aggregation>: creates an aggregate view with totals, averages, and other statistics. Its columns take functions such as sum, average, and count, and its hierarchy property defines different grouping outlines, which makes it well suited to displaying large amounts of data;
  • a call-tree element: as the name implies, a view of call stacks;
  • an engineering-type track: a view that displays values of a declared engineering type;

Analysis Core

In this section we first look at how the "Analysis Core" acquires and processes data — a process with three steps — and then introduce some concepts in the "Analysis Core" that help when writing the configuration file.

1. Simplify

2. Search

Each store will then start trying to find a provider for the data.

3. Optimize

Once each store has found its data provider, the "Analysis Core" begins a process called the binding solution, and this third step optimizes that workflow.

Thread Narrative

Some important concepts:

Binding solution: implemented in Instruments through the thread narrative, which has two advantages:

  • It is trace-wide, covering the whole recording;
  • As we drag a tool into the trace document, Instruments computes the best possible recording scheme to minimize the impact on the target.

Schemas: when we create a table we must specify a schema, such as tick in our first demo.

  • Instruments currently defines over 100 schemas that we can import and use;
  • Schemas are included in an Instruments package;
  • Other Instruments packages can be linked through build settings, with type checks performed at compile time;
  • Different building blocks are provided for defining them.

Modelers: as mentioned earlier, a modeler helps us synthesize different data, and several common modeler-related elements are available in the XML configuration file.

  • <modeler>: creates a modeler that performs data type conversion for us;
  • <point-schema>: defines a schema for point data (no time span, i.e. data at a single point in time);
  • <interval-schema>: defines a schema for interval data (a time span, i.e. data over a period of time);

PS: a modeler is actually a small but powerful and advanced expert system written in CLIPS. It can declare which input signals it needs, telling the binding solution how to populate the rest of the data graph. We'll cover this in more detail under "Advanced application."

Finally, the ability to define a schema is important. The new os_signpost API released this year gives us a great way to import data into Instruments, and Apple created some shortcuts for using it: typing the <os-signpost-interval-schema> element in the XML configuration file automatically generates the code snippet shown below.

<os-signpost-interval-schema>: defines a schema that stores interval data supplied by the os_signpost API. This means we can call os_signpost anywhere in our code and import the data we need to test directly into Instruments. When this element is created, Xcode automatically generates the relevant code snippets to help us complete the modeler. Combined with os_signpost, it looks like this:

import os.signpost

// Use os_signpost to pass data to Instruments anywhere in your code;
// remember that .begin and .end are used in pairs.
// parsingLog matches the subsystem/category declared in the schema below.
let parsingLog = OSLog(subsystem: "com.apple.trailblazer", category: "Networking")

os_signpost(.begin, log: parsingLog, name: "Parsing", "Parsing started SIZE:%ld", data.count)

// Decode the JSON we just downloaded
let result = try jsonDecoder.decode(Trail.self, from: data)

os_signpost(.end, log: parsingLog, name: "Parsing", "Parsing finished")
    <!-- MARK: Use this element to create a schema that gets data from os_signpost -->
    <os-signpost-interval-schema>
        <id>json-parse</id>
        <title>Image Download</title>

        <subsystem>"com.apple.trailblazer"</subsystem>
        <category>"Networking"</category>
        <name>"Parsing"</name>

        <start-pattern>
            <!-- MARK: Match the begin message and capture the value into ?data-size -->
            <message>"Parsing started SIZE:" ?data-size</message>
        </start-pattern>

        <column>
            <!-- MARK: Engineering type -->
            <mnemonic>data-size</mnemonic>
            <title>JSON Data Size</title>
            <!-- MARK: A data type defined as an engineering type; for memory, for example, use size-in-bytes -->
            <type>size-in-bytes</type>
            <!-- MARK: An expression, written in the CLIPS language, giving the value for this column -->
            <expression>?data-size</expression>
        </column>
    </os-signpost-interval-schema>

In the intermediate section, Apple also demonstrated an example of using the os_signpost API: instrumenting a list page that shows images, monitoring the download of each cell's image. Since the knowledge involved has already been described above, the step-by-step walkthrough is omitted here; the session video shows it more intuitively. A few points from the demonstration are worth highlighting.
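One detail worth noting about the image-download scenario: when many intervals with the same name can overlap (one per cell), os_signpost needs a signpost ID so Instruments can pair each .begin with the right .end. A minimal sketch of that pattern — the log handle, function, and message formats here are illustrative, not Apple's exact demo code:

```swift
import Foundation
import os.signpost

// Illustrative log handle; subsystem and category are placeholders.
let networkLog = OSLog(subsystem: "com.example.trailblazer", category: "Networking")

func downloadImage(at url: URL) {
    // A unique signpost ID lets Instruments pair .begin/.end
    // correctly even when many downloads run concurrently.
    let signpostID = OSSignpostID(log: networkLog)
    os_signpost(.begin, log: networkLog, name: "Image Download",
                signpostID: signpostID, "URL:%{public}s", url.absoluteString)
    URLSession.shared.dataTask(with: url) { data, _, _ in
        os_signpost(.end, log: networkLog, name: "Image Download",
                    signpostID: signpostID, "size:%ld", data?.count ?? 0)
    }.resume()
}
```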

Instrument Inspector

Advanced application

This section focuses on how to create and define modelers, and briefly introduces how to build a basic expert system with CLIPS.

Explore Modeler’s inner world

Let's explore the modeler's world with a simple example. We simulate a situation where some dangerous actions in our code are likely to trigger a problem, and our goal is to find out which actions are responsible when the program fails. We therefore define the three schemas in the figure below: the first two as inputs and the last as an output.

To illustrate this process, we have a more visual sequence diagram, where the dotted lines represent Modeler’s own clock:

In the figure, I marked four notable nodes in chronological order:

(1) At this moment, the App is in normal running state, no data is transferred to Modeler’s working memory, and Modeler’s clock does not start to run;

(2) At this point the dashed line reaches our first input Schema-triggered node (the node that started the dangerous operation), where Modeler’s working memory officially starts receiving data, and Modeler’s clock starts timing from this point.

(3) This is the trigger node of the second input schema (the point where our app "catches fire"). It is worth mentioning that the modeler is clever enough to distinguish dangerous-operation data from before this point in time from data that lies between this point and the end of the app-on-fire interval.

(4) By this last node, all the input data has been delivered; the modeler's clock no longer intersects the input data, so it infers that the input is no longer needed, removes it from working memory, and produces the final output data.

Reviewing the whole process, we can conclude two things:

  • Modeler’s clock start time is always the time when the first input data is fired
  • Only input data that intersects Modeler’s current clock is kept in working memory by Modeler

Such a mechanism helps us identify problems more clearly, without being distracted by meaningless old data. How does it work? The answer is the "production system," which we discuss next.

A production system

The logic behind "working memory" in a modeler comes from the "production system" we define. A production system is made up of a set of "rules" that operate on the facts in working memory. A rule has three parts — LHS => RHS: a left-hand side, the operator (=>), and a right-hand side. The left-hand side is the condition that activates the rule against working memory; the right-hand side is the behavior to perform once activated, such as creating a new row in an output table or inferring a new "fact" into working memory during modeling.

Combined with the sequence diagram above, we can see that facts come from two places. One is the input tables bound to the modeler, whose rows enter working memory automatically. The other is the right-hand side of a rule, which can actively assert new facts via the CLIPS assert command. If we want to create our own facts, CLIPS provides fact templates that let us give "facts" a data structure and get basic type checking. Here's how to define rules.

Rules and CLIPS

We can define rules using the CLIPS language. Let's revisit the timing of the example above and see how to express the rules in CLIPS:

(defrule MODELER::found-cause
    ;; LHS: conditions that activate the rule
    (playing-with-matches (start-time ?t1) (who ?object))
    (app-on-fire (start-time ?t2))
    (test (< ?t1 ?t2))
    =>
    ;; RHS: infer a new fact
    (assert (cause-of-fire (who ?object))))

(defrule RECORDER::record-cause
    ;; LHS: conditions for writing the output
    (app-on-fire (start-time ?start))
    (cause-of-fire (who ?object))
    (table (table-id ?t) (side append))
    (table-attribute (table-id ?t) (has-schema start-a-fire))
    =>
    ;; RHS: produce a row of output data
    (create-row ?t)
    (set-column time ?start)
    (set-column who ?object))

The found-cause rule says that if the following three conditions are met, the production system deduces a new fact and pushes it into working memory:

  • Some object triggered playing-with-matches at time t1;
  • app-on-fire was triggered at time t2;
  • t1 occurred earlier than t2.

The record-cause rule says that if the following four conditions are met:

  • The app "catches fire" at some start time;
  • The cause of the "fire" and who is involved are known (deduced by the first rule);
  • There is a data table bound to the output side of the modeler;
  • That table is associated with the start-a-fire schema we defined earlier;

then it creates a row in the output data table, setting the time and the "who" value captured on the left-hand side as the cause of the "fire."

With these two simple rules we have essentially created the earliest kind of "expert system": using them, we can find a class of problems in our app. You may also have noticed that each rule is prefixed with either MODELER or RECORDER. CLIPS calls these "modules"; they group rules and control the order in which they execute. For example, if all the rules that produce output tables live in the RECORDER module, no output is written while the MODELER module is still deducing, because rules in the MODELER module always execute before rules in the RECORDER module.

Logic support

Earlier, while exploring the modeler's inner world, we mentioned logical support. Logical support is associated with pure inference rules: if A and B, then C. In our production system, if A and B are no longer in working memory, C is automatically retracted; we then say C is logically supported by A and B. This ability matters for an expert system because it keeps working memory small and resource overhead under control. It is equally important to remove invalid facts from working memory promptly: if A and B become invalid, then C should be removed too. Such a requirement is easy to express in CLIPS with the logical command, as shown below.

(defrule MODELER::found-cause
    ;; Facts asserted on the RHS are logically supported by the
    ;; premises wrapped in (logical ...): when those premises leave
    ;; working memory, the asserted fact is retracted automatically.
    (logical
        (playing-with-matches (start-time ?t1) (who ?object))
        (app-on-fire (start-time ?t2)))
    (test (< ?t1 ?t2))
    =>
    (assert (cause-of-fire (who ?object))))

Best practices

After all the theory, this last section focuses on best practices for developing custom Instruments tools.

Write one more Instrument

This heading does not mean we should practice writing custom instruments for its own sake; it means we should keep each instrument's functionality fine-grained. Suppose we have a custom instrument whose features no longer meet current requirements, so we need to add or remove some detail views or graphs. There are two ways to proceed: iterate on the original instrument, or write a new instrument that meets the current requirements.

The first option makes the instrument less focused — it gains functionality, but becomes correspondingly more complex. Apple therefore recommends the second: write another instrument for the new requirement. If we need to combine different instruments, we can simply drag the relevant ones from the Library into our document. If a combination recurs often, we can use File -> Save As Template to save it for future use; saved templates appear on the Instruments startup page alongside built-ins such as Leaks, Activity Monitor, and Time Profiler, and templates can also be embedded in our own packages for easy reuse.

Real-time mode is difficult

Real-time mode means that data is captured, analyzed, and displayed live in the Instruments interface. There are two main reasons this kind of live interaction is hard for Apple right now. The first is that real-time display requires extra support that Apple has not yet been able to invest in. The second, and more important, reason is interval data: an interval can only be added to a table and retrieved by the "Analysis Core" once both its start and end have been recorded. After recording starts, Instruments may receive a set of open intervals, and the modeler's clock cannot move forward while an interval remains open (data in a modeler is sorted by time). The problem with this mechanism is that if an interval stays open too long, the modeler sits there waiting; only when the user clicks the stop button are all open intervals closed, at which point everything returns to normal and the data flows into the modeler — a poor user experience. When we hit this situation, we have two options. The first is to configure our instrument so that it does not support real-time mode, which can be declared with a dedicated element in the package. The second is to avoid long open intervals in the data itself, for example by making sure <os-signpost-interval-schema> intervals are closed promptly.
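Where an operation has no meaningful duration, one way to sidestep open intervals entirely is to emit point events instead of begin/end pairs: a .event signpost never leaves the modeler's clock waiting for a close. A small sketch — the log handle and names are illustrative, not from the session:

```swift
import os.signpost

// Illustrative log handle; subsystem and category are placeholders.
let stateLog = OSLog(subsystem: "com.example.app", category: "State")

// A point event has no open/close pair, so the modeler's clock
// never stalls waiting for a matching .end.
os_signpost(.event, log: stateLog, name: "Checkpoint", "phase:%{public}s", "warmup")
```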

Use “Last 5 seconds record mode”

When creating an instrument that must handle a large amount of input data, "last 5 seconds" recording mode is by far the best choice. This option can be found under File -> Recording Options.

Conclusion

Instruments 10 offers many possibilities for creating custom instruments, though it also takes some time to learn how to write a new one. Most client developers may never use these skills, but for test teams it opens a new window onto iOS app performance testing. We believe the coming year will see high-quality custom instruments appear in the community. Let's look forward to it together.

For more WWDC 18 articles, head over to the xSwiftGG WWDC 18 Topics directory