With the increased demand for artificial intelligence and big data applications, **H**igh-**P**erformance **C**omputing needs to make the transition
from supercomputing centres to a wider audience. The increasing popularity of frameworks such as TensorFlow, PyTorch, Matlab, or NumPy bears
witness to this development.

This research theme looks at application areas, domain-specific languages (DSLs), and language designs that help make HPC accessible to the masses. Some concrete topics of interest are:

Graph processing is typically based on the idea of pointers, making it ideally suited for an imperative setting. While acyclic graphs can easily be manipulated in a declarative style, cyclic graphs pose a bigger challenge. Some non-trivial approaches exist; however, they lack efficient implementations. Here, we look into simpler solutions as well as a code generator for producing efficient parallel code.
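One illustrative way to handle a cyclic graph declaratively, without pointers, is to represent it as an adjacency matrix and express queries as array operations. The sketch below (names and encoding are ours, not part of the project) computes reachability as a transitive closure via repeated boolean combination, a style that maps naturally onto parallel array code:

```python
# Sketch: a cyclic graph as an adjacency matrix; reachability computed
# declaratively as the transitive closure (Floyd-Warshall-style), rather
# than by chasing pointers imperatively.

def transitive_closure(adj):
    """adj: square matrix of 0/1 entries; returns the reachability matrix."""
    n = len(adj)
    reach = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # i reaches j directly, or via the intermediate node k
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# A cyclic graph: 0 -> 1 -> 2 -> 0, plus an edge 2 -> 3.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [1, 0, 0, 1],
       [0, 0, 0, 0]]
closure = transitive_closure(adj)
```

The triple loop is embarrassingly data-parallel in its two inner dimensions, which is exactly the structure a code generator for parallel array code could exploit.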

Libraries such as MKL or BLAS are often considered the non plus ultra. However, in several cases it has been shown that compilers that take the calling context into consideration can outperform such libraries. Here, we look at application areas such as vision or AI and investigate how whole-world compilers compete against tuned libraries.

Many modern applications are memory-bound, i.e., their performance is dominated not by the compute power that is thrown at them but by the throughput the memory subsystem can provide. Here, we look at new number formats such as posits instead of IEEE floating point, n-bit integers, or n-bit reals.
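The payoff of narrower number formats for memory-bound code can be illustrated even without posit arithmetic itself: storing values in 16 rather than 64 bits quarters the memory traffic, at the cost of precision. The sketch below uses IEEE half precision (which Python's `struct` supports via the `'e'` format) purely as a stand-in for the n-bit reals the project would investigate:

```python
# Illustration only: narrower storage formats reduce memory traffic for
# memory-bound code. Half precision ('e') stands in here for posits or
# other n-bit reals.
import struct

def to_half_bytes(xs):
    """Pack a list of floats into 16-bit IEEE half-precision storage."""
    return struct.pack(f'{len(xs)}e', *xs)

def from_half_bytes(buf):
    """Unpack 16-bit half-precision storage back into Python floats."""
    return list(struct.unpack(f'{len(buf) // 2}e', buf))

data = [0.1 * i for i in range(8)]
packed = to_half_bytes(data)          # 2 bytes per value instead of 8
roundtrip = from_half_bytes(packed)   # lossy: ~3 decimal digits survive
```

A quarter of the bytes now move through the memory subsystem; whether the reduced precision is acceptable is exactly the kind of trade-off the project studies.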

Thrust (https://thrust.github.io) offers a wide range of highly optimised computational kernels for multi-core and GPU systems. The flexibility of these templates makes them generically applicable while providing excellent parallel performance. Orchestrating Thrust programs by hand, however, is tedious and error-prone. Here, we look at ways to generate Thrust programs from more abstract high-level programs, such as SaC programs.
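To give a flavour of the code-generation idea, the sketch below translates a tiny, hypothetical "map" specification into the corresponding Thrust C++ call. Both the specification format and the emitted code are illustrative assumptions, not the project's actual design:

```python
# Hypothetical sketch: generating a Thrust element-wise map from a small
# abstract specification. A real generator would start from SaC programs.

def gen_thrust_map(fun_name, body, in_vec, out_vec):
    """Emit C++ applying a unary functor element-wise via thrust::transform."""
    return f"""\
struct {fun_name} {{
  __host__ __device__ double operator()(double x) const {{ return {body}; }}
}};
thrust::transform({in_vec}.begin(), {in_vec}.end(),
                  {out_vec}.begin(), {fun_name}());"""

code = gen_thrust_map("square", "x * x", "xs", "ys")
print(code)
```

Even this toy shows why generation is attractive: the functor boilerplate and iterator plumbing that make hand-written Thrust tedious are produced mechanically from a one-line specification.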

Reference counting has so far not made it into the mainstream. Given its advantages in a parallel setting, however, interest in this area is rising. Combining techniques such as uniqueness inference and reference counting offers opportunities for further improvements. Another area of investigation in the context of reference counting is efficient support for nested data structures.
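The key win of combining reference counts with uniqueness is that a functional update can reuse storage whenever the count is 1. A minimal sketch of this idea, with an illustrative class and representation of our own choosing:

```python
# Minimal sketch of uniqueness via reference counting: an array held by
# exactly one reference may be updated destructively; a shared one must
# be copied (copy-on-write).

class RcArray:
    def __init__(self, data):
        self.data = list(data)
        self.rc = 1

    def share(self):
        """Hand out another reference to the same storage."""
        self.rc += 1
        return self

    def update(self, i, v):
        """Functional update: reuse storage iff we hold the only reference."""
        if self.rc == 1:
            self.data[i] = v          # unique: destructive update is safe
            return self
        self.rc -= 1                  # shared: copy, leave the original intact
        fresh = RcArray(self.data)
        fresh.data[i] = v
        return fresh

a = RcArray([1, 2, 3])
b = a.update(0, 99)        # rc == 1: updated in place, no copy
c = b.share()
d = c.update(1, 42)        # rc == 2: copy-on-write, c stays unchanged
```

Uniqueness inference aims to prove the `rc == 1` case statically, so that the count need not even be checked at run time.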

KNIME is an open-source tool for building commercial data-processing pipelines. In principle, it should be possible to create SaC-implemented data processors for such tool chains. However, this would have to rely on runtime specialisation of code to ensure parallel runtimes competitive with hand-written code. This project investigates the feasibility and performance of such an approach.

A language server enables hooking up a programming language's tool chain to a variety of IDEs, including Visual Studio Code and others. However, building a language server for a full-fledged language is a non-trivial task that typically requires re-implementing many parts of any pre-existing compiler or interpreter. This project aims at building a generic language server that uses dedicated calls to pre-existing language tools to implement the actual language server capabilities.

Array programming languages such as SaC offer high levels of programmer productivity. This theme looks at extending such languages in ways that further improve their expressiveness and, with it, programmer productivity. Some topics of interest are:

For some programs, it is possible to statically infer that not all data is needed for computing the overall result. Languages based on normal-order reduction or laziness guarantee that only those parts of a program are evaluated that are needed for the result. While this is conceptually nice, the implementation of lazy evaluation comes at a potentially very high price. We look at languages that are based on strict evaluation but that can nevertheless disregard terms that are known not to contribute to the overall result.
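A toy illustration of the payoff (the encoding below is ours, purely for demonstration): when a selection picks one element of a collection, a strict language would normally compute every element first, whereas an evaluator that knows which operand is needed can skip the rest entirely:

```python
# Toy sketch: only the selected operand is ever evaluated; the others are
# treated as dead code. We record evaluations to make the skipping visible.

calls = []

def expensive(x):
    calls.append(x)          # log each evaluation so we can observe skipping
    return x * x

def select(index, thunks):
    """Select one value; unneeded operands are never computed."""
    return thunks[index]()

# A naive strict evaluation would compute all three squares; disregarding
# the terms known not to contribute avoids two of the three computations.
result = select(1, [lambda: expensive(10),
                    lambda: expensive(3),
                    lambda: expensive(7)])
```

The research question is how to obtain this effect from static neededness information in a strict language, without paying the run-time cost of general thunks as used here.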

Streaming can be seen as computing with arrays of infinite size. Using the notion of Ordinal numbers as indices allows for a new form of multi-dimensional streaming. There are many challenges in this context.

Given an n-dimensional array, how many permutations of it can be formed by repeatedly applying reshapes and transpositions? And is there an efficient data structure to represent such permutations in memory? This is fundamentally a group-theoretic problem, but we can approach it elegantly using a bit of category theory, which lets such permutations be represented graphically as "string diagrams". This project may include finding proofs of special cases and some experimentation using the computer algebra system GAP. (Suggested and co-supervised by Dario Stein!)
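The question is easy to experiment with on tiny cases. The sketch below (our own encoding: an array as a shape plus row-major flat data, restricted to rank-2 shapes) enumerates by search which element orderings of a 2x2 array are reachable via reshapes and transpositions; it makes no claim about the general answer:

```python
# Tiny experiment: which orderings of a 2x2 array's elements are reachable
# by composing reshapes and transpositions? Arrays are (shape, flat-data)
# pairs in row-major order; only rank-2 shapes are considered.

def transpose(shape, data):
    """Swap the two axes of a rank-2 array in row-major layout."""
    r, c = shape
    return (c, r), [data[i * c + j] for j in range(c) for i in range(r)]

def reshapes(shape, data):
    """All rank-2 reshapes: factorisations of the element count."""
    n = len(data)
    return [((r, n // r), data) for r in range(1, n + 1) if n % r == 0]

def reachable(shape, data):
    """Search the states reachable via reshape and transpose."""
    start = (shape, tuple(data))
    seen, todo = {start}, [start]
    while todo:
        sh, d = todo.pop()
        for nsh, nd in reshapes(sh, list(d)) + [transpose(sh, list(d))]:
            state = (nsh, tuple(nd))
            if state not in seen:
                seen.add(state)
                todo.append(state)
    return {d for _, d in seen}       # distinct element orderings

orders = reachable((2, 2), [0, 1, 2, 3])
```

Already for 2x2 the reachable set is a small subgroup of all 24 permutations, which hints at the group-theoretic structure the project is after.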

The shape of any *n*-dimensional array can be expressed by a shape vector. This implies a linear order on the indices. However, in several cases of program optimisation it would be convenient
to express higher levels of neighbourhood. An example of this is the blocking of a 2-dimensional matrix. In this project, we look into higher-dimensional shapes that allow indices themselves to be
multi-dimensional arrays.
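The blocking example can be made concrete as follows (block size and helper names are our own, for illustration): each linear index pair (i, j) is re-expressed as a pair of indices, a block index and an offset within the block, i.e., an index that is itself a small array:

```python
# Illustration of blocking: re-indexing a 4x4 row-major matrix so that
# each index becomes a hierarchical (block, offset) pair.

B = 2                                   # block edge length

def block_index(i, j):
    """Map a flat 2-D index to a hierarchical (block, offset) index."""
    return (i // B, j // B), (i % B, j % B)

def blocked(mat):
    """Rearrange a matrix into [block_row][block_col][row][col] layout."""
    n = len(mat)
    nb = n // B
    out = [[[[0] * B for _ in range(B)] for _ in range(nb)] for _ in range(nb)]
    for i in range(n):
        for j in range(n):
            (bi, bj), (oi, oj) = block_index(i, j)
            out[bi][bj][oi][oj] = mat[i][j]
    return out

m = [[r * 4 + c for c in range(4)] for r in range(4)]
bm = blocked(m)
```

With shapes generalised so that indices can be such nested tuples, this re-layout would need no explicit re-indexing code at all.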

This project looks into ways of expressing and verifying domain restrictions in the context of shape polymorphic array operations.
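As a minimal example of such a domain restriction (the function and encoding are hypothetical, for illustration only): a shape-polymorphic matrix product is defined only when the inner extents of its arguments agree, a constraint one would like to express, and ideally verify, statically:

```python
# Hypothetical illustration: the domain restriction of a shape-polymorphic
# matrix product, checked at the level of shape vectors.

def matmul_shape(a_shape, b_shape):
    """Return the result shape, enforcing the domain restriction."""
    (m, k1), (k2, n) = a_shape, b_shape
    if k1 != k2:
        raise ValueError("domain restriction violated: inner extents differ")
    return (m, n)

out = matmul_shape((2, 3), (3, 4))
```

The project asks how such restrictions can be stated in the language itself and discharged by the compiler rather than checked at run time.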

Starting from a pre-existing prototype, this project looks into improving the interactivity of array programming in SaC.

In case you are interested in any of these or related topics, please talk to Sven-Bodo Scholz (M1.006).