Categories: FAANG

Never-ending Learning of User Interfaces

Machine learning models have been trained to predict semantic information about user interfaces (UIs) to make apps more accessible, easier to test, and easier to automate. Currently, most models rely on datasets that are collected and labeled by human crowd-workers, a process that is costly and surprisingly error-prone for certain tasks. For example, it is possible to guess whether a UI element is “tappable” from a screenshot (i.e., based on visual signifiers) or from potentially unreliable metadata (e.g., a view hierarchy), but one way to know for certain is to programmatically tap the UI element and…
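The verification idea described above — tapping an element programmatically and observing the result, rather than trusting metadata — can be sketched with a minimal mock. Everything here (the `Element` class, the `handler` callback, `verify_tappable`) is an illustrative assumption, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Element:
    # Mock UI element: metadata may claim tappability, but only the
    # attached handler determines whether a tap actually does anything.
    name: str
    metadata_tappable: bool
    handler: Optional[Callable[[dict], None]] = None

def verify_tappable(element: Element) -> bool:
    """Ground-truth check: dispatch a simulated tap and observe whether
    any state change occurred, instead of trusting the view hierarchy."""
    state = {"changed": False}
    if element.handler is not None:
        element.handler(state)  # simulate dispatching the tap event
    return state["changed"]

# A button whose metadata and behavior agree...
button = Element("OK", metadata_tappable=True,
                 handler=lambda s: s.update(changed=True))
# ...and a decorative label whose metadata is wrong.
label = Element("Logo", metadata_tappable=True, handler=None)

for el in (button, label):
    actual = verify_tappable(el)
    print(f"{el.name}: metadata says {el.metadata_tappable}, tap says {actual}")
```

In a real system the "tap" would go through a UI automation driver and the state change would be detected by diffing screenshots or view hierarchies before and after the interaction; the mock only captures the labeling logic.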
AI Generated Robotic Content
