Supplementing the intelligence analyst with image recognition… and a more intuitive UI
I briefly supported Dr. Jeff Hansberger of the Army Research Laboratory on the “Zoomable User Interface” (ZUI)🗗 for DARPA’s Visual Media Reasoning (VMR)🗗 program.
The ZUI concept pits the “zoomable” approach against the “drill down” approach to data visualization, and it is designed to supplement, rather than replace, the human analyst.
His inspiration for the ZUI concept, in his own words:
I discovered that like many people in other domains, the Intel analysts we had interacted with also adhered to what is called the ‘visual information seeking mantra,’ which is that people looking within and for visual information prefer to have
1) an overview first, then
2) zoom and filter through the information and retrieve
3) details on demand. […]
—Jeff Hansberger, Ph.D. (source)🗗
In a way, the ZUI functions like Google Maps. It presents all of the media in a data set at once, just as Maps presents the entire globe at once. And just as Maps allows zooming and panning, the ZUI lets the analyst “zoom in” on data of interest and pan around its visual presentation of the data.
This way, the ZUI keeps the analyst from losing the context behind his/her current view of the data. By analogy, even if he/she zooms in on Birmingham, it’s easy to pan right over to Huntsville or Montgomery without needing to traverse a hierarchy. For spatial navigation, having to back up to the state level and navigate down a hierarchy to find a specific city or street simply isn’t intuitive—hence, the zoomable approach.
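To make the Maps analogy concrete, here is a minimal sketch of the math a zoomable view rests on; the Viewport class and its methods are my own illustration, not the project’s code. A single scale factor and translation map the entire data set onto the screen, so zooming and panning never delete anything; data outside the view is merely off-screen.

```typescript
// Minimal sketch of a zoomable viewport: one scale + one translation maps
// "world" coordinates (the full data set) to screen coordinates. Zooming
// never discards data; items outside the view are simply off-screen.
class Viewport {
  constructor(
    private scale = 1,   // 1 = full overview
    private offsetX = 0, // world-space pan offset
    private offsetY = 0,
  ) {}

  // Zoom by `factor` about a screen-space focal point (e.g. the cursor or
  // the midpoint of a pinch), keeping that point fixed on screen.
  zoomAt(focusX: number, focusY: number, factor: number): void {
    const worldX = focusX / this.scale + this.offsetX;
    const worldY = focusY / this.scale + this.offsetY;
    this.scale *= factor;
    this.offsetX = worldX - focusX / this.scale;
    this.offsetY = worldY - focusY / this.scale;
  }

  // Pan by a screen-space delta: no hierarchy to climb, just a translation.
  pan(dxScreen: number, dyScreen: number): void {
    this.offsetX -= dxScreen / this.scale;
    this.offsetY -= dyScreen / this.scale;
  }

  // Where a world-space point lands on screen at the current view.
  toScreen(x: number, y: number): [number, number] {
    return [(x - this.offsetX) * this.scale, (y - this.offsetY) * this.scale];
  }
}
```

Because zoomAt solves for the offset that keeps the focal point fixed, whatever sits under the cursor (or between pinched fingers) stays put as the scale changes, which is what preserves the analyst’s sense of context.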
Task summary
Intelligence analysts in organizations such as the CIA, FBI, or DARPA work with massive amounts of raw data from disparate locations and in a variety of forms. Their goal is to extract relevant information and draw meaningful patterns from this data. Many of their current tools, algorithms, and methods suffer from typical human-factors problems. For instance, they do not fully mitigate the sheer fatigue that analysts can suffer when absorbing large amounts of data at once, and problems of this kind can cause analysts to miss important patterns and connections.
DARPA’s Visual Media Reasoning (VMR) project is a response to these needs. The project involves creating algorithms that find similarities between media, such as images and video, and that extract the properties and contents of each item. Its underlying algorithms can detect certain objects, such as weapons, vehicles, or even people; provide a logical “clustering” of media based on their content; and let users filter the data by specifying desired characteristics or content, or even by providing example images.
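What might interacting with such algorithms look like in code? The sketch below is purely hypothetical, since DARPA’s actual interfaces are not public: every type, field, and function name is my own invention, and the similarity check is a crude stand-in for a real visual-similarity model.

```typescript
// Hypothetical shapes for VMR-style media and queries; all names here are
// illustrative, not DARPA's actual API.
interface MediaItem {
  id: string;
  kind: "image" | "video";
  detectedObjects: string[]; // e.g. ["weapon", "vehicle", "person"]
  clusterId: string;         // logical grouping assigned by the clustering step
}

interface MediaQuery {
  containsObjects?: string[]; // filter by detected content
  similarToId?: string;       // "query by example": find media like this item
}

// Filter a data set the way the text describes: by desired characteristics
// (detected objects) and/or by similarity to an example image.
function filterMedia(items: MediaItem[], q: MediaQuery): MediaItem[] {
  return items.filter((item) => {
    if (
      q.containsObjects &&
      !q.containsObjects.every((o) => item.detectedObjects.includes(o))
    ) {
      return false;
    }
    if (q.similarToId) {
      const example = items.find((i) => i.id === q.similarToId);
      // Crude stand-in for a real visual-similarity model: treat items in
      // the same cluster as "similar" to the example.
      if (!example || item.clusterId !== example.clusterId) return false;
    }
    return true;
  });
}
```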
Additionally, analysts must use some interface to interact with these algorithms, so the interface must be designed to let them use the algorithms in an intuitive, straightforward, and genuinely helpful manner. The current interface, designed as part of the VMR project, lets users select a single “cluster” of related images and then delve deeper into that cluster to find images of interest.
Drilling down vs. zooming
Like many other data analysis tools, VMR’s current interface uses a “drill down” approach: analysts can use the interface to dive as deeply as needed into hierarchies of data. That approach has a drawback, however: because the interface strictly adheres to its data hierarchies, everything outside the branch being explored is removed entirely from the analyst’s view, including potentially relevant data.
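A toy contrast of the two models may help; the types and functions here are illustrative only. In a drill-down view, the visible set shrinks with every step into the hierarchy, while in a zoomable view the visible set never changes; only the level of detail per item does.

```typescript
// Toy contrast of drill-down vs. zoomable navigation; illustrative only.
type Cluster = { name: string; items: string[]; children: Cluster[] };

// Drill-down: the view *is* the current node. Everything outside the
// branch being explored has vanished from the screen.
function drillDownView(pathFromRoot: Cluster[]): string[] {
  return pathFromRoot[pathFromRoot.length - 1].items;
}

// Zoomable: the view is always the whole set. "Depth" is just the zoom
// level, which controls how much detail each item gets, never whether
// the item exists on screen.
function zoomableView(
  all: string[],
  zoom: number,
): Array<{ item: string; detail: "thumbnail" | "full" }> {
  return all.map((item) => ({
    item,
    detail: zoom > 2 ? "full" : "thumbnail",
  }));
}
```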
Complementing the human element
Humans, moreover, have an innate ability to recognize patterns. An interface, therefore, should not “get in the way” of the analyst’s investigation by ignoring the visual information seeking mantra or by restricting analysts so severely that they cannot apply their own pattern-recognition capabilities. The Zoomable User Interface (ZUI) is a response to this problem, providing a competing alternative to the existing interface’s approach.
More on my role
My role involved the further development of the ZUI, both its interface and its “backend” communication with DARPA’s algorithms and datasets. The team behind the task consists of ARL project lead Jeff Hansberger along with two students and one researcher from UAH. The team is developing two approaches to interacting with the ZUI: a standard point-and-click interface and a touchscreen interface.
Interfacing
The standard way of interacting with desktop computers is a point-and-click interface, in which the movement of an external device, such as a mouse, moves and manipulates an onscreen cursor. The ZUI’s point-and-click interface is deployed on the web and implemented with standard web technologies: HTML, PHP, Perl/CGI, and JavaScript. This interface serves primarily as a testbed for ZUI layouts, features, and communication with DARPA’s algorithms and media datasets.
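As an illustration of what that testbed’s client/server round trip might look like, consider the sketch below. The endpoint path, query parameters, and response shape are all assumptions; the project’s actual backend is not public.

```typescript
// Illustrative client-side call to a server-side CGI/PHP endpoint that
// returns one cluster of media. Every name here is assumed, not the
// project's actual API.
async function fetchCluster(
  clusterId: string,
): Promise<Array<{ id: string; url: string }>> {
  const res = await fetch(
    `/cgi-bin/vmr.cgi?action=cluster&id=${encodeURIComponent(clusterId)}`,
  );
  if (!res.ok) throw new Error(`Cluster request failed: HTTP ${res.status}`);
  return res.json(); // e.g. [{ id: "img_0042", url: "/media/img_0042.jpg" }]
}
```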
According to Hansberger, touchscreen interfaces are currently the “most effective and alluring way to interact with digital images.” They have the advantage of providing a one-to-one correspondence between the user’s actions and the interface’s reactions. Smartphones provide such an interface through intuitive finger gestures: dragging a finger to scroll through data, “pinching” and spreading two fingers to zoom out of and into images, and swiping quickly to “throw” an object as if it had physical momentum. The ZUI’s touch interface uses gestures such as these to let analysts move around, zoom to the desired level of detail, and store images for later use by “flicking” them to one of the borders of the screen. The interface also includes many other gestures that help analysts navigate the visual space.
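To show how such a gesture maps onto the zoomable view, here is a sketch of pinch-to-zoom built on the Viewport class from the earlier example, using standard browser touch events; the wiring and thresholds are my own illustration, not the project’s implementation.

```typescript
// Sketch of pinch-to-zoom using standard TouchEvents; `view` is a
// Viewport as sketched earlier. Illustrative, not the project's code.
function attachPinchZoom(canvas: HTMLElement, view: Viewport): void {
  let lastDist = 0;

  const distance = (t: TouchList) =>
    Math.hypot(t[0].clientX - t[1].clientX, t[0].clientY - t[1].clientY);

  canvas.addEventListener("touchstart", (e) => {
    if (e.touches.length === 2) lastDist = distance(e.touches);
  });

  canvas.addEventListener("touchmove", (e) => {
    if (e.touches.length !== 2) return;
    e.preventDefault();
    const dist = distance(e.touches);
    // Zoom about the midpoint of the two fingers so the pinched content
    // stays under them: the one-to-one correspondence described above.
    const midX = (e.touches[0].clientX + e.touches[1].clientX) / 2;
    const midY = (e.touches[0].clientY + e.touches[1].clientY) / 2;
    view.zoomAt(midX, midY, dist / lastDist);
    lastDist = dist;
  });
}
```

A “flick” gesture would extend the same pattern: track the finger’s velocity across the last few touchmove events and, on touchend, if the speed toward a screen border exceeds a threshold, file the image away for later use.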
Other Media
- Military AI interface helps you make sense of thousands of photos🗗
- How DARPA Deals With Its Overwhelming Stockpile Of Photos🗗
- ARL Researcher Jeff Hansberger Develops Image Analysis User Interface🗗
- Researcher develops advanced computer vision technology🗗
- DARPA develops new interface for image analysis
- Hawking: Machine Learning AI Our “Biggest Existential Threat”🗗 (seriously)
- ARL scientist zooms in on a better way to search confiscated images🗗