**Get together**

18:00 at Luise Dahlem

17^{th} January

**Academic and industry presentations**

18^{th} January

While High Performance Computing (HPC) has pushed the limits of computation in science and technology for decades, and these applications are often business critical, applying HPC to the business processes themselves is still far less widespread.

We will discuss how business analytics, from data processing through deep learning, but also huge-scale semantic databases, can profit from existing HPC capabilities: loading and interactively querying a graph database with a trillion RDF triples enables insight into unstructured data lakes, metadata, and cross-silo integration even on today's systems. On the other hand, with the advent of exascale systems, compute power is no longer the limiting factor; instead, efficient usage of the increasingly complex memory hierarchy and I/O system is the challenge – and this is to a large extent a problem similar to the logistics and traffic optimization problems that businesses routinely solve.
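To make the kind of query such a graph database answers concrete, here is a minimal pure-Python sketch of a triple store with pattern matching. The data and class are invented for illustration; a production system holds trillions of triples and answers full SPARQL, but the basic operation is the same pattern match:

```python
# Minimal in-memory RDF-style triple store (illustrative sketch only;
# a real graph database scales this to trillions of triples).

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Return all triples matching the pattern; None is a wildcard."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("sensor1", "type", "Sensor")
store.add("sensor1", "locatedIn", "plantA")
store.add("sensor2", "type", "Sensor")
store.add("sensor2", "locatedIn", "plantB")

# "Which subjects are located in plantA?"
hits = store.match(p="locatedIn", o="plantA")
print(hits)  # [('sensor1', 'locatedIn', 'plantA')]
```

Cross-silo integration amounts to calling `add` on triples extracted from many heterogeneous sources and then querying the union with patterns like the one above.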

Utz-Uwe Haus is a Senior Research Engineer at Cray. He studied Mathematics and Computer Science at TU Berlin. After obtaining a doctorate in Mathematics at the University of Magdeburg, he worked on nonstandard applications of mathematical optimization in chemical engineering, materials science, and systems biology. He led a junior research group at the Magdeburg Center for Systems Biology and was principal investigator on various FP7 ITN projects. After five years as Senior Researcher at the Department of Mathematics at ETH Zürich, he is now responsible for developing the Mathematical Optimization and Operations Research group of the Cray EMEA Research Lab (Bristol/Basel) and works on data-dependency-driven workflow optimization on future HPC architectures.

18^{th} January

Modern multi- and many-core processors draw their number-crunching capabilities from an increasing number of cores, each capable of hardware data parallelism. The latter is typically provided by Single Instruction Multiple Data (SIMD) instruction set extensions.

This talk provides an overview of performance engineering on such systems. We present the Intel Xeon Phi as an example architecture and introduce the Roofline model as a generic approach for relating algorithms to hardware. The second part of the talk presents an optimisation workflow, experiences from real-world codes, as well as some insights on SIMD programming.
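The Roofline model mentioned above can be stated in one line: attainable performance is the minimum of peak compute and memory bandwidth times arithmetic intensity. A small sketch, using assumed (not measured) hardware numbers:

```python
# Roofline model sketch: attainable performance is bounded by either
# peak compute or by memory bandwidth times arithmetic intensity.
# The hardware figures below are illustrative assumptions, not measurements.

PEAK_GFLOPS = 3000.0   # assumed peak floating-point rate (GFLOP/s)
PEAK_BW_GBS = 400.0    # assumed memory bandwidth (GB/s)

def roofline(arithmetic_intensity):
    """Attainable GFLOP/s for a kernel with the given FLOP/byte ratio."""
    return min(PEAK_GFLOPS, PEAK_BW_GBS * arithmetic_intensity)

# A stream-like kernel (low intensity) is bandwidth-bound ...
print(roofline(0.25))  # 100.0 GFLOP/s
# ... while a dense matrix multiply (high intensity) is compute-bound.
print(roofline(16.0))  # 3000.0 GFLOP/s
```

Plotting `roofline` against intensity on log-log axes gives the familiar slanted-roof shape; where a kernel sits relative to the ridge point tells you whether to optimise memory access or computation.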

18^{th} January

The Ubiquity Generator (UG) framework is a software framework for building parallel branch-and-bound solvers. The basic concept behind UG is to harness the performance of state-of-the-art solver implementations within a parallelization framework. UG was used to develop ParaSCIP (ug[SCIP,MPI]) and ParaXpress (ug[Xpress,MPI]), parallel solvers for mixed integer programming problems based on SCIP, a state-of-the-art academic code, and on Xpress, a commercial one, respectively. These solvers have solved over 20 previously open instances from MIPLIB using up to 80,000 cores. The parallel solver ug[SCIP-Jack, MPI], a parallel version of a customized SCIP solver for Steiner tree problems, solved four open instances from SteinLib using up to 43,000 cores. In this talk, we present success stories of these results.
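For readers unfamiliar with branch and bound, here is a toy sequential version for the 0/1 knapsack problem (invented example, not UG code). UG's contribution is to distribute exactly this kind of open-subproblem tree across tens of thousands of MPI ranks:

```python
# Toy sequential branch and bound for 0/1 knapsack (illustration only).
# Subproblems are pruned when an LP-style fractional bound cannot
# beat the incumbent solution.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sorting by value density tightens the fractional bound.
    order = sorted(range(n), key=lambda j: values[j] / weights[j], reverse=True)
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    best = 0

    def upper_bound(i, value, room):
        # Fractional (LP relaxation) bound on the remaining items.
        b = value
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                b += values[j]
            else:
                b += values[j] * room / weights[j]
                break
        return b

    def branch(i, value, room):
        nonlocal best
        if i == n:
            best = max(best, value)
            return
        if upper_bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if weights[i] <= room:
            branch(i + 1, value + values[i], room - weights[i])  # take item i
        branch(i + 1, value, room)                               # skip item i

    branch(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

In a UG-style parallelization, the recursive calls would instead push subproblems `(i, value, room)` onto a work pool, with the incumbent `best` broadcast among workers so that pruning stays effective everywhere.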

18^{th} January

Stability on the distribution grid is a hard constraint in power grid operations. In order to maintain a constant electric current oscillating at 50 Hz, over- and undersupply must be avoided. Normally, this is achieved through the use of battery storage units, rotating mass storage, and conventional overcapacities as backup. With the increasing importance of renewable energies for the electricity supply, distribution grid volatility increases, as does the demand for electric power, partly due to growing electrification in the transport sector.

As part of this project, we are seeking to devise an information system to predict short-term and long-term grid loads and to determine optimal load management strategies. For this, we want to accumulate heterogeneously formed data from multiple sources (e.g. historical smart meter data, powerline signal data, weather data, etc.) and blend them together. We then wish to apply data science to the resulting very high-dimensional data with the goal of predicting supply and demand for a local power distribution network of arbitrary size.

The first task comprises a series of extract-transform-load (ETL) operations within very large data sets. The second task will be accomplished through a combined approach of machine-learning and statistical analysis methods. It involves several more or less interdependent classification and sequence prediction problems, for both of which there exist ample parallel solution methods and algorithms.

Both ETL and data analysis will have to be performed in a long-term and a short-term context: to identify long-term patterns, on the one hand, and to properly react to current load scenarios, on the other. While execution time is a minor factor in long-term prediction, it becomes crucial in on-line decision making. We hope to exploit high performance computing techniques in order to be able to use complex modeling even for such decision situations, while adhering to the time constraint.
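The two tasks described above can be sketched end to end on toy data: an ETL join of two heterogeneous sources, followed by a simple least-squares trend fit standing in for the statistical prediction stage. All data, keys, and the choice of model here are invented assumptions for illustration:

```python
# Sketch of ETL + prediction on invented toy data.
# Source A: smart-meter load readings (hour -> kW).
# Source B: weather observations (hour -> degrees C).
meter = {0: 310.0, 1: 305.0, 2: 298.0, 3: 301.0}
weather = {0: -1.5, 1: -2.0, 2: -2.5, 3: -2.0}

# ETL step: blend the sources by joining on the shared hour key.
table = [(h, meter[h], weather[h]) for h in sorted(meter) if h in weather]

# Prediction step: fit load = a * hour + b by ordinary least squares,
# a stand-in for the real machine-learning / statistical models.
xs = [row[0] for row in table]
ys = [row[1] for row in table]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def predict(hour):
    """Short-term load forecast (kW) from the fitted trend."""
    return a * hour + b

print(round(predict(4), 1))  # forecast for the next hour: 295.0
```

In the project itself, the join would run as parallel ETL over very large data sets, and `predict` would be replaced by the classification and sequence prediction models mentioned above; the HPC requirement comes from having to refit and evaluate such models within on-line decision deadlines.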
