Modern large-scale computing systems are evolving rapidly and may soon
feature millions of cores with exaflop performance. However, this evolution
brings tremendous complexity, with an unprecedented number of design and
optimization choices for architectures, applications, compilers, and
run-time systems. Relying on outdated, non-adaptive technology wastes
expensive computing resources and energy while lengthening time to market.
The 1st International Workshop on Self-tuning, Large-Scale Computing
Systems for the Exaflop Era is intended to become a regular interdisciplinary
forum for researchers, practitioners, developers, and application writers to
discuss ideas, experience, methodology, applications, practical techniques,
and tools to improve or change current and future computing systems using
self-tuning technology. Such systems should be able to automatically adjust
their behavior to multi-objective usage scenarios at all levels (hardware
and software) based on empirical, dynamic, iterative, statistical,
collective, bio-inspired, machine-learning, and other techniques, while fully
utilizing available resources.
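As an informal illustration of the empirical, iterative self-tuning the workshop targets (not part of the call itself), the Python sketch below shows a toy auto-tuner that measures combinations of compiler flags on a benchmark and keeps the fastest configuration. The compiler, flag list, and benchmark source file are hypothetical placeholders; a real tuner would search far larger spaces with statistical or machine-learning guided heuristics rather than exhaustive enumeration.

```python
# Toy empirical auto-tuner: try flag combinations, measure, keep the best.
# The flag list, source file, and compiler below are illustrative assumptions.
import itertools
import subprocess
import time

FLAGS = ["-O2", "-O3", "-funroll-loops", "-ffast-math", "-march=native"]
SOURCE = "benchmark.c"  # hypothetical benchmark kernel

def measure(flags):
    """Compile SOURCE with the given flags and return wall-clock runtime."""
    subprocess.run(["gcc", *flags, SOURCE, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

best_flags, best_time = None, float("inf")
# Exhaustively enumerate every subset of FLAGS; feasible only for tiny spaces.
for r in range(len(FLAGS) + 1):
    for combo in itertools.combinations(FLAGS, r):
        runtime = measure(list(combo))
        if runtime < best_time:
            best_flags, best_time = list(combo), runtime

print(f"best flags: {best_flags} ({best_time:.3f}s)")
```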
All papers, including short position papers, will be peer-reviewed.
Submissions should present unpublished ideas on how to simplify, automate,
and standardize the design, programming, optimization, and adaptation of
large-scale computing systems for multiple objectives, improving performance,
power consumption, utilization, reliability, and scalability.
Topics of interest include:
- whole-system parameterization and modularization to enable self-tuning across the entire hardware and software stack
- transformation spaces of static, JIT, and source-to-source compilers
- run-time resource management and scheduling
- task/process/thread/data migration
- design space of architectures, including heterogeneous multi-cores, accelerators, memory hierarchy, and I/O
- propagation and use of feedback between system layers
- static and dynamic code and data partitioning/modification for self-tuning
- application conversion to support multi-level, hybrid parallelization
- modification of existing tools and applications to enable auto-tuning
- resource- and contention-aware scheduling
- performance, power, and reliability evaluation methodologies
- scalable performance evaluation tools
- detection, classification, and mitigation of resource contention
- collaborative optimization repositories and benchmarks
- characterization of static program constructs
- characterization of dynamic program behavior under various system load scenarios
- software/hardware co-design and co-optimization
- analysis of interactions between different parts of a large application
- prediction of optimizations and architectural designs based on prior knowledge
Registration and accommodation for the EXADAPT workshop are
handled through the parent conference, PLDI. To register,
please go to the PLDI website http://pldi11.cs.utah.edu
and click on the registration link on the left. The early
registration deadline for discounted rates is TBD.
For local and travel information, follow the "Local
Information" links on the same website.
Full papers should be at most 12 pages long,
including bibliography and appendices. Papers in
this category are expected to have relatively mature
content. Full-paper presentations will be 25 minutes
each.
Position papers should be at most 6 pages long,
including bibliography and appendices. Preliminary
and exploratory work is welcome in this
category, including wild and crazy ideas.
Position-paper presentations will be 10
minutes each. Authors submitting papers in this
category must prepend the phrase "Position Paper:" to
the title of the submitted paper.
Submissions should be PDF documents typeset in the ACM
proceedings format using 10pt fonts. SIGPLAN-approved
templates are available on the SIGPLAN website.
Both full and position papers must describe work not
published in other refereed venues (see
the SIGPLAN republication policy
for more details). The proceedings of this workshop will
be published in the ACM Digital Library (ISBN 978-1-4503-0708-6).