The computational understanding of user interfaces (UIs) is a key step toward achieving intelligent UI behaviors. Previously, we investigated various UI modeling tasks, including widget captioning, screen summarization, and command grounding, that address diverse interaction scenarios such as automation and accessibility. We also demonstrated how machine learning can help user experience practitioners improve UI quality by diagnosing tappability confusion and providing insights for improving UI design. These works, along with those developed by others in the field, have showcased how deep neural networks can potentially transform end user experiences and the interaction design practice.
With these successes in addressing individual UI tasks, a natural question is whether we can achieve foundational understandings of UIs that can benefit specific UI tasks. As our first attempt to answer this question, we developed a multi-task model to address a range of UI tasks simultaneously. Although the work made some progress, a few challenges remain. Previous UI models heavily rely on UI view hierarchies — i.e., the structure or metadata of a mobile UI screen, like the Document Object Model for a webpage — that allow a model to directly acquire detailed information about UI objects on the screen (e.g., their types, text content, and positions). This metadata has given previous models advantages over their vision-only counterparts. However, view hierarchies are not always accessible, and are often corrupted with missing object descriptions or misaligned structure information. As a result, despite the short-term gains from using view hierarchies, doing so may ultimately hamper model performance and applicability. In addition, previous models had to deal with heterogeneous information across datasets and UI tasks, which often resulted in complex model architectures that were difficult to scale or generalize across tasks.
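To make the distinction concrete, the sketch below shows the kind of object metadata a view hierarchy typically exposes and that a vision-only model must do without; the field names and values are illustrative assumptions rather than an exact Android or dataset schema.

```python
# Illustrative sketch only: approximate shape of a view hierarchy node,
# not an exact Android or dataset schema.
example_view_hierarchy_node = {
    "class": "android.widget.Button",   # object type
    "text": "Sign in",                  # text content (often missing in practice)
    "content_desc": None,               # accessibility description, may be absent
    "bounds": [48, 1020, 432, 1140],    # left, top, right, bottom in pixels
    "clickable": True,
    "children": [],                     # nested UI objects
}
```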
In “Spotlight: Mobile UI Understanding using Vision-Language Models with a Focus”, accepted for publication at ICLR 2023, we present a vision-only approach that aims to achieve general UI understanding completely from raw pixels. We introduce a unified approach to represent diverse UI tasks, the information for which can be universally represented by two core modalities: vision and language. The vision modality captures what a person would see from a UI screen, and the language modality can be natural language or any token sequences related to the task. We demonstrate that Spotlight substantially improves accuracy on a range of UI tasks, including widget captioning, screen summarization, command grounding, and tappability prediction.
Spotlight Model
The Spotlight model input consists of a tuple of three items: the screenshot, the region of interest on the screen, and the text description of the task. The output is a text description or response about the region of interest. This simple input and output representation is expressive enough to capture various UI tasks and allows scalable model architectures. The model design enables a spectrum of learning strategies and setups, from task-specific fine-tuning, to multi-task learning, to few-shot learning. The Spotlight model, as illustrated in the figure above, leverages existing architecture building blocks such as ViT and T5 that are pre-trained in the high-resourced, general vision-language domain, which allows us to build on top of the success of these general domain models.
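As a sketch of this input and output representation (the class and field names below are illustrative assumptions, not an actual Spotlight API), the same structure can carry different tasks by changing only the task prompt and the target text:

```python
# A minimal sketch of the (screenshot, region, task) -> text interface described
# above. Class and field names are illustrative assumptions, not an actual API.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class SpotlightExample:
    screenshot: bytes                                       # raw pixels of the UI screen
    region_of_interest: Tuple[float, float, float, float]   # left, top, right, bottom (normalized)
    task_prompt: str                                        # text description of the task
    target_text: str                                        # expected text output


# The same representation covers different tasks; only the prompt and target change.
caption_example = SpotlightExample(
    screenshot=b"<png bytes>",
    region_of_interest=(0.05, 0.42, 0.12, 0.47),   # a single checkbox
    task_prompt="widget captioning",
    target_text="select Chelsea team",
)
summary_example = SpotlightExample(
    screenshot=b"<png bytes>",
    region_of_interest=(0.0, 0.0, 1.0, 1.0),       # the whole screen
    task_prompt="screen summarization",
    target_text="page displaying the tutorial of a learning app",
)
```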
Because UI tasks are often concerned with a specific object or area on the screen, which requires a model to be able to focus on the object or area of interest, we introduce a Focus Region Extractor to a vision-language model that enables the model to concentrate on the region in light of the screen context.
Specifically, we design a Region Summarizer that acquires a latent representation of a screen region based on ViT encodings by using attention queries generated from the bounding box of the region (see the paper for more details). Each coordinate of the bounding box (a scalar value, i.e., the left, top, right, or bottom), denoted as a yellow box on the screenshot, is first embedded via a multilayer perceptron (MLP) as a collection of dense vectors, and then fed to a Transformer model along with their coordinate-type embeddings. The dense vectors and their corresponding coordinate-type embeddings are color coded to indicate their association with each coordinate value. Coordinate queries then attend to screen encodings output by ViT via cross attention, and the final attention output of the Transformer is used as the region representation for the downstream decoding by T5.
A target region on the screen is summarized by using its bounding box to query into screen encodings from ViT via attentional mechanisms.
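The following is a minimal PyTorch sketch of the Region Summarizer idea under stated assumptions: the module layout, dimensions, and the way coordinate queries interact with the Transformer are simplifications for illustration, not the implementation from the paper.

```python
# A simplified sketch of the Region Summarizer idea, not the paper's implementation.
import torch
import torch.nn as nn


class RegionSummarizer(nn.Module):
    """Summarizes a screen region by cross-attending bounding-box queries
    to ViT screen encodings."""

    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        # Embed each scalar coordinate (left, top, right, bottom) as a dense vector.
        self.coord_mlp = nn.Sequential(
            nn.Linear(1, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        # Learnable coordinate-type embeddings, one per coordinate (4 total).
        self.coord_type_emb = nn.Embedding(4, d_model)
        # Cross-attention from coordinate queries to ViT screen encodings.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, bbox: torch.Tensor, screen_enc: torch.Tensor) -> torch.Tensor:
        # bbox: [batch, 4] normalized (left, top, right, bottom)
        # screen_enc: [batch, num_patches, d_model] output of the ViT encoder
        queries = self.coord_mlp(bbox.unsqueeze(-1))             # [batch, 4, d_model]
        types = self.coord_type_emb(torch.arange(4, device=bbox.device))
        queries = queries + types                                # add coordinate-type embedding
        # Coordinate queries attend to the screen encodings via cross-attention.
        region_repr, _ = self.cross_attn(queries, screen_enc, screen_enc)
        return region_repr                                       # passed on for T5 decoding


# Example usage with random stand-in inputs.
summarizer = RegionSummarizer()
bbox = torch.rand(2, 4)                  # two screens, one bounding box each
screen_enc = torch.randn(2, 196, 768)    # e.g., 14x14 ViT patches
print(summarizer(bbox, screen_enc).shape)   # torch.Size([2, 4, 768])
```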
Results
We pre-train the Spotlight model using two unlabeled datasets (an internal dataset based on the C4 corpus and an internal mobile dataset) with 2.5 million mobile UI screens and 80 million web pages. We then separately fine-tune the pre-trained model for each of the four downstream tasks (captioning, summarization, grounding, and tappability). For the widget captioning and screen summarization tasks, we report CIDEr scores, which measure how similar a model's text description is to a set of references created by human raters. For command grounding, we report accuracy, which measures the percentage of times the model successfully locates a target object in response to a user command. For tappability prediction, we report F1 scores, which measure the model's ability to tell tappable objects from untappable ones.
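For concreteness, the grounding accuracy and tappability F1 metrics can be computed roughly as in the sketch below; the function names are illustrative, and CIDEr is omitted since it is typically computed with an existing captioning evaluation toolkit.

```python
# Rough sketch of the grounding accuracy and tappability F1 metrics described above.
from typing import List


def grounding_accuracy(predicted_ids: List[int], target_ids: List[int]) -> float:
    """Fraction of commands for which the model selects the correct target object."""
    correct = sum(p == t for p, t in zip(predicted_ids, target_ids))
    return correct / len(target_ids)


def tappability_f1(pred_tappable: List[bool], gold_tappable: List[bool]) -> float:
    """F1 score for predicting whether a UI object is tappable."""
    tp = sum(p and g for p, g in zip(pred_tappable, gold_tappable))
    fp = sum(p and not g for p, g in zip(pred_tappable, gold_tappable))
    fn = sum(not p and g for p, g in zip(pred_tappable, gold_tappable))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


print(grounding_accuracy([3, 7, 2], [3, 5, 2]))                  # 0.666...
print(tappability_f1([True, False, True], [True, True, True]))   # 0.8
```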
In this experiment, we compare Spotlight with several baseline models. Widget Caption uses the view hierarchy and the image of each UI object to generate a text description for the object. Similarly, Screen2Words uses the view hierarchy and the screenshot as well as auxiliary features (e.g., app description) to generate a summary for the screen. In the same vein, VUT combines screenshots and view hierarchies for performing multiple tasks. Finally, the original Tappability model leverages object metadata from the view hierarchy and the screenshot to predict object tappability. Taperception, a follow-up model of Tappability, uses a vision-only tappability prediction approach. We examine two Spotlight model variants with respect to the size of their ViT building block: B/16 and L/16. Spotlight substantially exceeded the state of the art across the four UI modeling tasks.
| Model | Captioning (CIDEr) | Summarization (CIDEr) | Grounding (accuracy) | Tappability (F1) |
| --- | --- | --- | --- | --- |
| Widget Caption (baseline) | 97 | – | – | – |
| Screen2Words (baseline) | – | 61.3 | – | – |
| VUT (baseline) | 99.3 | 65.6 | 82.1 | – |
| Taperception (baseline) | – | – | – | 85.5 |
| Tappability (baseline) | – | – | – | 87.9 |
| Spotlight B/16 | 136.6 | 103.5 | 95.7 | 86.9 |
| Spotlight L/16 | 141.8 | 106.7 | 95.8 | 88.4 |
We then pursue a more challenging setup in which we ask the model to learn multiple tasks simultaneously, because a multi-task model can significantly reduce the model footprint. As shown in the table below, the experiments show that our model still performs competitively.
| Model | Captioning (CIDEr) | Summarization (CIDEr) | Grounding (accuracy) | Tappability (F1) |
| --- | --- | --- | --- | --- |
| VUT multi-task | 99.3 | 65.1 | 80.8 | – |
| Spotlight B/16 | 140 | 102.7 | 90.8 | 89.4 |
| Spotlight L/16 | 141.3 | 99.2 | 94.2 | 89.5 |
To understand how the Region Summarizer enables Spotlight to focus on a target region and relevant areas on the screen, we analyze the attention weights (which indicate where the model's attention is on the screenshot) for both the widget captioning and screen summarization tasks. In the figure below, for the widget captioning task, the model predicts “select Chelsea team” for the checkbox on the left side, highlighted with a red bounding box. We can see from its attention heatmap (which illustrates the distribution of attention weights) on the right that the model learns to attend not only to the target region of the checkbox, but also to the text “Chelsea” on the far left to generate the caption. For the screen summarization example, the model predicts “page displaying the tutorial of a learning app” given the screenshot on the left. In this example, the target region is the entire screen, and the model learns to attend to important parts of the screen for summarization.
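A minimal sketch of how such per-patch attention weights could be rendered as a heatmap over a screenshot is shown below; the patch grid size and the upsampling method are illustrative assumptions, not necessarily the procedure used for the figures in this post.

```python
# Sketch of overlaying patch-level attention weights on a screenshot as a heatmap.
import numpy as np
import matplotlib.pyplot as plt


def overlay_attention(screenshot: np.ndarray, attn: np.ndarray) -> None:
    """Overlay per-patch attention weights (e.g., a 14x14 grid) on the screenshot."""
    h, w = screenshot.shape[:2]
    # Upsample the patch-level attention map to the screenshot resolution.
    heatmap = np.kron(attn, np.ones((h // attn.shape[0], w // attn.shape[1])))
    plt.imshow(screenshot)
    plt.imshow(heatmap, cmap="jet", alpha=0.4, extent=(0, w, h, 0))
    plt.axis("off")
    plt.show()


# Example with random data standing in for a real screenshot and attention map.
overlay_attention(np.random.rand(224, 224, 3), np.random.rand(14, 14))
```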
Conclusion
We demonstrate that Spotlight outperforms previous methods that use both screenshots and view hierarchies as input, and establishes state-of-the-art results on multiple representative UI tasks. These tasks range from accessibility and automation to interaction design and evaluation. Our vision-only approach for mobile UI understanding alleviates the need to use view hierarchies, allows the architecture to easily scale, and benefits from the success of large vision-language models pre-trained for the general domain. Compared to recent large vision-language model efforts such as Flamingo and PaLI, Spotlight is relatively small, and our experiments show the trend that larger models yield better performance. Spotlight can be easily applied to more UI tasks and potentially advance the fronts of many interaction and user experience tasks.
Acknowledgment
We thank Mandar Joshi and Tao Li for their help in processing the web pre-training dataset, and Chin-Yi Cheng and Forrest Huang for their feedback and proofreading of the paper. Thanks to Tom Small for his help in creating animated figures for this post.