What Makes a (Graphics) Systems Paper Beautiful
(Addendum for Papers Chairs and Sorters)

Kayvon Fatahalian, Stanford University

This note extends the article What Makes a (Graphics) Systems Paper Beautiful with a few tips for graphics conference program chairs, program committee members, and papers sorters. Readers of this document are expected to have read the main article first.

Graphics systems research is not limited to research on graphics hardware or parallel computing.

A common pitfall in the review process is to assign papers that make systems contributions only to reviewers who specialize in the methods of the target domain. For example, a system for authoring physical simulations should not be given only to experts in physical simulation algorithms. Even without detailed domain knowledge, a systems-minded researcher can apply the criteria described in this document to the work, and can help domain experts determine whether the paper's goals, assumptions, and design decisions are sound.

Conversely, a reviewer who can bring a systems viewpoint to critiquing a paper may be valuable even on papers whose primary contribution lies in other areas of computer graphics. As one example, when reading "Fast computation of seamless video loops" [Liao 2016], I was struck by how many decisions in the paper were made in response to cross-cutting issues. While a methods-centric view of the paper might see it as a collection of modest improvements to prior methods, a systems reviewer would assess whether insightful thought went into the choice and combination of methods used to address the end-to-end problem.

Papers about parallel algorithms for GPUs do not necessarily make systems contributions.

A common reaction is to see "parallel" or "on the GPU" in a paper's title and conclude that the paper is a good fit for a reviewer with a systems background. These papers are often methods papers that propose a novel algorithm that maps well to modern parallel processors. A reviewer might need parallel programming or GPU architecture background to evaluate the quality of the algorithm, but that does not imply that the "systems thinking principles" described here are the best way to evaluate the work.

For example, the SIGGRAPH 2016 Halide autoscheduler paper [Mullapudi 2016] is a methods paper about a new algorithm, not primarily a systems paper. However, the constraints motivating the approach were without question drawn from systems concerns (namely, the need for fast compile times), and the new scheduling algorithm resulted in a notable improvement to the Halide system.

Industry practitioners are excellent reviewers of systems work since they can assess the validity of goals and constraints.

Does the system solve problems that real-world architects face, or does it solve a problem that does not arise in practice for reasons academics may not be aware of? Does the system ignore key optimizations or concerns of practitioners in the domain? These are common questions that an industry practitioner can help an academic review committee answer.

On the flip side, industry reviewers should be mindful not to judge a paper harshly if it establishes assumptions and requirements that do not align with those of the industry's current products ("not applicable to my company!"). Industrial reviewers should be encouraged to validate assumptions with an open, forward-looking mindset: for example, might there be future situations where the paper's assumptions are compelling? Might an approach that seems impractical today be practical in the more distant future?