One Visualization is Worth 1000 Words: Toward Automated Data Recovery and Interpretation from Past 3D Visualizations
ORAL
Abstract
Since the late 1700s, plots and graphs have been widely used to visualize scientific data and convey results far more clearly than words alone. Unfortunately, much of that past work, even when it survives and has been converted to electronic form, is effectively inaccessible to most semantic queries. Unlike photographs and other pictorial presentations, plots and graphs are not interpretable by search engines. Further, even when a researcher identifies a figure relevant to their work, it is often non-trivial to recover the numerical data it represents. To address these challenges, researchers are actively developing methods for automatically extracting information from published 2D plots and figures to enable large-scale indexing. In this work, we turn our attention to 3D data visualizations. Using our recently published SurfaceGrid dataset, we successfully train an artificial neural network to recover numerical data from 3D plots and graphical models using contour-based curvature cues that are widely used in published data visualizations. When calibration information is available, reconstructions have less than 0.5% mean-squared relative error.
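As a brief illustration of the error metric quoted above, the following sketch computes a mean-squared relative error between a recovered surface and its ground truth. This is a plausible definition for expository purposes only; the function name, the epsilon guard, and the exact normalization are assumptions, and the paper's precise metric may differ.

```python
import numpy as np

def mean_squared_relative_error(recovered, truth, eps=1e-12):
    """Mean of squared per-point relative errors.

    Hypothetical formulation: each recovered value is compared to the
    corresponding ground-truth value, normalized by the truth magnitude
    (eps avoids division by zero for zero-valued truth points).
    """
    rel = (recovered - truth) / (np.abs(truth) + eps)
    return float(np.mean(rel ** 2))

# Toy example: a "recovered" surface sampled at three points,
# each off by 1% relative to ground truth.
truth = np.array([1.0, 2.0, 4.0])
recovered = np.array([1.01, 1.98, 4.04])
msre = mean_squared_relative_error(recovered, truth)
print(msre)  # 1e-4, i.e. well under the 0.5% figure cited
```

Under this definition, uniform 1% per-point relative errors yield an MSRE of 1e-4 (0.01%), so the reported sub-0.5% figure corresponds to quite accurate per-point recovery.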
Publication: This work builds on Brandt, L.E. and Freeman, W.T. (2021) Toward Automatic Interpretation of 3D Plots.
Presenters
-
Laura E Brandt
MIT CSAIL
Authors
-
Laura E Brandt
MIT CSAIL
-
William T Freeman
MIT CSAIL, NSF IAIFI