Workshop Themes

Cross-Cutting Approaches (CCA)

To fully exploit the power of current and emerging technologies, research is needed to revisit assumptions underlying traditional approaches -- to applications, data management, data mining and machine learning systems, programming languages, compilers, run-time systems, virtual machines, operating systems, architectures, and hardware/microarchitectures -- in light of current and future heterogeneous parallel systems. Successful efforts will be collaborations that explore new holistic approaches to parallelism and cross-layer design. Topics include, but are not limited to:

  • New abstractions, models, and software systems that expose fundamental attributes, such as energy use and communication costs, across all layers and that are portable across different platforms and architectural generations (a toy placement sketch follows this list).
  • New software and system architectures designed for exploitable locality, combining parallelism with communication efficiency to minimize energy use, and employing on-chip and chip-to-chip communication that achieves low latency, high bandwidth, and power efficiency.
  • New methods and metrics for evaluating, verifying and validating correctness, reliability, resilience, performance, and scalability of concurrent, parallel, and heterogeneous systems.
  • Runtime systems to manage parallelism, memory allocation, synchronization, communication, I/O, data placement, and energy usage.
  • Extracting general principles that can drive the future generation of computing architectures and tools with a focus on scalability, reliability, robustness and verifiability.
  • Exploration of tradeoffs addressing an optimized separation of concerns across layers. Which problems should be handled by which layers? What information, using which abstractions, must flow between the layers to achieve optimal performance? Which aspects of system design can be automated and what is the optimal use of costly human ingenuity?
  • Cross-layer issues related to the support of large-scale distributed computational science applications.
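
To make the cross-layer flavor of these topics concrete, the following toy sketch (purely illustrative, and not drawn from any existing system) shows how per-device energy and data-movement costs, exposed by lower layers, might drive task placement in a runtime scheduler. All device names and cost figures are invented.

    # Hedged sketch: a cross-layer cost model in which the runtime exposes
    # per-device energy and data-movement costs so a scheduler can place each
    # task where the modeled total cost is lowest. All numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        joules_per_flop: float   # modeled compute energy on this device
        joules_per_byte: float   # modeled cost of moving one byte onto this device

    def placement_cost(device, task_flops, bytes_to_move):
        """Modeled energy: compute energy plus the cost of moving non-resident inputs."""
        return task_flops * device.joules_per_flop + bytes_to_move * device.joules_per_byte

    def place(task_flops, inputs, devices):
        """Pick the device minimizing modeled energy; inputs is a list of (location, bytes)."""
        def cost(dev):
            bytes_to_move = sum(b for loc, b in inputs if loc != dev.name)
            return placement_cost(dev, task_flops, bytes_to_move)
        return min(devices, key=cost)

    if __name__ == "__main__":
        cpu = Device("cpu", joules_per_flop=5e-10, joules_per_byte=1e-10)
        gpu = Device("gpu", joules_per_flop=5e-11, joules_per_byte=5e-10)
        inputs = [("cpu", 8e6), ("gpu", 8e6)]   # 8 MB of input resident on each device
        print("place task on:", place(1e9, inputs, [cpu, gpu]).name)

A real runtime would of course refine such a model with measured counters, locality information, and feedback flowing between layers, which is precisely the kind of cross-layer design these topics call for.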

Domain-Specific Design (DSD)

Research is needed on foundational techniques for exploiting domain and application-specific knowledge to improve programmability, reliability, and scalable parallel performance. Topics include, but are not limited to:

  • Parallel domain-specific languages, including query languages, that provide both high-level programming models for domain experts and high performance across a range of parallel platforms, such as GPUs, SMPs, and clusters (a minimal embedded-DSL sketch follows this list).
  • Program synthesis tools that generate efficient parallel codes and/or query processing plans from high-level problem descriptions using domain-specific knowledge. Approaches might include optimizations based on mathematical and/or statistical reasoning, set theory, logic, auto-vectorization techniques that exploit domain-specific properties, and auto-tuning techniques.
  • Hardware-software co-design for domain-specific applications that pushes performance and energy efficiency while reducing cost, overhead, and inefficiencies.
  • Integrated data management paradigms harnessing parallelism and concurrency; the entire data path, from data generation to transmission, storage, access, use, maintenance, analytics, and eventual archiving or destruction, is in scope.
  • Work that generalizes the approach of exploiting domain-specific knowledge, such as tools, frameworks, and libraries that support the development of domain-specific solutions to computational and data management problems and are integrated with domain science.
  • Novel approaches suitable for scientific application frameworks addressing domain-specific mapping of parallelism onto a variety of parallel computational models and scales.
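
As a minimal, purely illustrative sketch of the embedded parallel DSL idea in the first topic above, the fragment below captures elementwise array expressions as a small expression tree and interprets it chunk-by-chunk in a process pool; a production system would instead compile the tree to platform-specific code such as GPU kernels, vectorized loops, or query plans. All class and function names are hypothetical.

    # Hedged sketch: a tiny embedded DSL for elementwise array expressions.
    # Expressions are built with ordinary Python operators, producing a tree
    # that a backend could compile for different parallel platforms; here it
    # is simply interpreted in parallel across index chunks.
    from concurrent.futures import ProcessPoolExecutor
    from dataclasses import dataclass

    class Expr:
        def __add__(self, other): return Add(self, other)
        def __mul__(self, other): return Mul(self, other)

    @dataclass
    class Vec(Expr):
        data: list

    @dataclass
    class Add(Expr):
        left: Expr
        right: Expr

    @dataclass
    class Mul(Expr):
        left: Expr
        right: Expr

    def eval_at(expr, i):
        """Interpret the expression tree at a single index i."""
        if isinstance(expr, Vec): return expr.data[i]
        if isinstance(expr, Add): return eval_at(expr.left, i) + eval_at(expr.right, i)
        if isinstance(expr, Mul): return eval_at(expr.left, i) * eval_at(expr.right, i)
        raise TypeError(f"unknown node {expr!r}")

    def eval_chunk(args):
        expr, lo, hi = args
        return [eval_at(expr, i) for i in range(lo, hi)]

    def evaluate(expr, n, workers=4):
        """Partition the index space and evaluate chunks in separate processes."""
        step = (n + workers - 1) // workers
        chunks = [(expr, lo, min(lo + step, n)) for lo in range(0, n, step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return [x for part in pool.map(eval_chunk, chunks) for x in part]

    if __name__ == "__main__":
        a = Vec([float(i) for i in range(1000)])
        b = Vec([2.0] * 1000)
        result = evaluate(a * b + a, n=1000)   # elementwise a*b + a
        print(result[:5])                      # [0.0, 3.0, 6.0, 9.0, 12.0]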

Foundational Principles (FP)

Research on foundational principles should engender a paradigm shift in the ways in which one conceives, develops, analyzes, and uses parallel algorithms, languages, and concurrency. Foundational research should be guided by crucial design principles and constraints impacting these principles. Topics include, but are not limited to:

  • New computational models that free algorithm designers and programmers from many low-level details of specific parallel hardware while supporting the expression of properties of a desired computation that allow maximum parallel performance. Models should be simple enough to understand and use, have solid semantic foundations, and guide algorithm design choices for diverse parallel platforms.
  • Algorithms and algorithmic paradigms that simultaneously allow reasoning about correctness and parallel performance, lead to provable performance guarantees, and allow optimizing for various resources, including energy and data movement (both memory hierarchy and communication bandwidth) as well as parallel work and running time (a work-span accounting sketch follows this list).
  • New programming languages, program logics, type theories, and language mechanisms that support new computational and data models, raise the level of abstraction, and lower the barrier of entry for parallel and concurrent programming. Parallel and concurrent languages that have programmability, verifiability, and scalable performance as design goals. Of particular interest are languages that abstract away from the traditional imperative programming model found in most sequential programming languages.
  • Compilers and techniques, including certification, for mapping high-level parallel languages and language mechanisms to efficient low-level, platform-specific code.
  • Development of interfaces to express parallelism at a higher level while being able to express and analyze locality, communication, and other parameters that affect performance and scalability.
  • New data models, query languages, and query optimization techniques that support large data sets and parallel processing for database, data mining, and machine learning queries.
  • Novel approaches to designing and analyzing heterogeneous hardware, programmable logic, and accelerators, and to hardware support for programmability (e.g., transactional memory) and reliability (e.g., recovery blocks).
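
As a purely illustrative instance of reasoning simultaneously about correctness and parallel performance, the sketch below accounts for work and span (critical-path length) alongside the value of a fork-join summation: the work grows proportionally to n while the span grows proportionally to log n. The function name and unit costs are assumptions made for the example, not a prescribed model.

    # Hedged sketch: cost accounting in the work-span model for a fork-join sum.
    # Each call returns (value, work, span); the two recursive halves are assumed
    # to run in parallel, so work adds while span takes the maximum. This is an
    # analysis aid, not an actual parallel implementation.
    def parallel_sum(xs):
        if len(xs) <= 1:
            return (xs[0] if xs else 0, 1, 1)
        mid = len(xs) // 2
        left_val, left_work, left_span = parallel_sum(xs[:mid])
        right_val, right_work, right_span = parallel_sum(xs[mid:])
        combine = 1  # one unit of work and depth to add the two partial sums
        return (left_val + right_val,
                left_work + right_work + combine,
                max(left_span, right_span) + combine)

    if __name__ == "__main__":
        value, work, span = parallel_sum(list(range(1024)))
        print(value, work, span)   # 523776, with work ~ 2n and span ~ log2(n)
        # Greedy scheduling then bounds running time on p processors by work/p + span.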

Scalable Distributed Architectures (SDA)

Large-scale heterogeneous distributed systems (e.g., the web, grid, cloud) have become commonplace in both general-purpose and scientific contexts. With the increased prominence of smartphones, tablets, and other edge devices, users expect these systems to be robust, reliable, safe, and efficient. At the same time, new applications leveraging these platforms require a rich environment that enables sensing and computing with diverse distributed data, along with communication among these systems and the elements that comprise them. Research supporting the science and design of these extensible distributed systems, particularly the components and programming of highly parallel and scalable distributed architectures, will enable the many "smart" technologies and infrastructures of the future. Topics include, but are not limited to:

  • Novel approaches enabling heterogeneous edge devices - with constraints such as low energy use, tight form factors, tight time constraints, adequate computational and data management capacity, and low cost - to collaborate in delivering computation-intensive applications utilizing distributed data (a rough offload-decision sketch follows this list).
  • Runtime platforms and virtualization tools that allow programs to divide effort among portable platforms and large-scale compute and data resources while responding dynamically to changes in reliability and energy efficiency. Possible questions include: How should computation be mapped onto the elements of large-scale distributed systems? How can system architecture help preserve privacy by giving users more control over their data?
  • Research that enables conventionally trained engineers to program computing systems extending across wide geographic scales, taking advantage of highly parallel and distributed environments while remaining resilient to significant numbers of component and communication failures. Such research may be based on novel hardware support, programming abstractions, algorithms, storage systems, middleware, operating systems, or data management, data mining, and machine learning systems.
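
The fragment below is a rough, purely illustrative decision model for the offloading question raised in the topics above: an edge device chooses between running a task locally and shipping it to a remote site using invented estimates of compute rate, device energy, and uplink bandwidth. All names and numbers are assumptions; a deployed system would measure these quantities and adapt the decision at run time.

    # Hedged sketch: should an edge device run a task locally or offload it?
    # joules_per_op is the energy charged to the *device* for using that site
    # (local computation vs. radio energy for shipping data). All values invented.
    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        ops_per_second: float   # compute throughput available at this site
        joules_per_op: float    # device energy per operation when using this site
        uplink_bps: float       # bandwidth for shipping input data (0 means local)

    def finish_time(site, ops, input_bits):
        """Estimated completion time: data transfer plus computation."""
        transfer = input_bits / site.uplink_bps if site.uplink_bps else 0.0
        return transfer + ops / site.ops_per_second

    def choose_site(sites, ops, input_bits, energy_budget_joules):
        """Fastest site whose estimated device energy fits within the budget."""
        feasible = [s for s in sites if ops * s.joules_per_op <= energy_budget_joules]
        return min(feasible, key=lambda s: finish_time(s, ops, input_bits), default=None)

    if __name__ == "__main__":
        phone = Site("phone", ops_per_second=1e8,  joules_per_op=2e-8, uplink_bps=0.0)
        cloud = Site("cloud", ops_per_second=1e11, joules_per_op=1e-9, uplink_bps=5e6)
        best = choose_site([phone, cloud], ops=1e9, input_bits=4e7, energy_budget_joules=5.0)
        print("run task on:", best.name if best else "infeasible")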