Speakers' biographies and talk summaries

High-order, scale-resolving modelling for high Reynolds number racing car aerodynamics

Professor Spencer Sherwin, McLaren Racing/Royal Academy of Engineering Research Chair

The use of computational tools in industrial flow simulations is well established. As engineering design continues to evolve and becomes ever more complex, there is an increasing demand for more accurate transient flow simulations. Achieving sufficient accuracy in these simulations with existing methods can be extremely costly in computational terms. Accordingly, advanced engineering industries, such as Formula 1 (F1), are looking to academia to develop the next generation of techniques, which may provide a mechanism for more accurate simulations without excessive increases in cost.

Currently, the most established methods for industrial flow simulations, including in F1, are based upon the Reynolds Averaged Navier-Stokes (RANS) equations, which are at the heart of most commercial codes. This approach carries an implicit assumption of a steady-state solution. In practice, however, many industrial problems involve unsteady or transient flows that RANS techniques are not well equipped to deal with. To address the increasing demand for more physical models in engineering design, commercial codes do include unsteady extensions such as Unsteady RANS (URANS) and Detached Eddy Simulation (DES). Unfortunately, even on high performance computing facilities these types of computational models require significantly more execution time, which, to date, has not been matched with a corresponding increase in accuracy sufficient to justify the cost, particularly given the computing restrictions that the F1 rules impose on race car design.
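
To make the steady-state assumption concrete (a standard textbook decomposition, added here for context rather than taken from the abstract), RANS starts by splitting each velocity component into a time-averaged mean and a fluctuation:

    u_i(\mathbf{x}, t) = \bar{u}_i(\mathbf{x}) + u_i'(\mathbf{x}, t)

Averaging the Navier-Stokes equations then leaves behind the Reynolds stress term -\rho \overline{u_i' u_j'}, which must be closed with a turbulence model; the transient fluctuations themselves are modelled rather than resolved, which is precisely why unsteady flows are difficult for RANS.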

Alternative high-order transient simulation techniques using spectral/hp element discretisations have been developed within research and academic communities over the past few decades. These methods have generally been applied to more academic transient flow simulations with a significantly reduced level of turbulence modelling. As the industrial demand for transient simulations grows and computer "power per $" improves, alternative computational techniques such as high-order spectral/hp element discretisations, not yet widely adopted by industry, are likely to provide a more cost-effective route to a high level of accuracy for a given computational time.

In this presentation we will outline the demands imposed on computational aerodynamics within highly competitive F1 race car design and discuss the next generation of transient flow modelling with which the industry is looking to impact this design cycle.

Scalable numerics for numerical weather prediction

Colin Cotter, Department of Mathematics

I will describe current research in the Departments of Mathematics and Computing that contributes to the UK Dynamical Core project, a multi-institution collaboration dedicated to designing a scalable dynamical core (the fluids part) of the next-generation Met Office forecast model. Whilst the modelling capabilities (measured in forecasting skill) of the current Unified Model are world-class, it has become necessary to redesign the numerical algorithms of the model in order to avoid problems of parallel scalability that arise from the use of a latitude-longitude grid. Scalability is inhibited because the convergence of grid lines at the poles leads to grid points being close together in physical space but far apart in computer memory, and computations at large core counts become dominated by parallel communication rather than floating point operations. The Met Office has identified this as a future risk to their forecasting capability as we enter an era of massively multicore supercomputers, and our task is to design numerical methods for different computational grids that do not have pole-type singularities. Numerical methods for numerical weather prediction have very challenging and specific requirements; I will explain what these are in general terms and describe how finite element methods developed at Imperial College address them.
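
For context (a standard property of latitude-longitude grids, added here rather than taken from the abstract): the physical east-west spacing between grid points shrinks with latitude as

    \Delta x = a \cos\varphi \, \Delta\lambda,

where a is the Earth's radius, \varphi the latitude and \Delta\lambda the fixed longitude increment. As \varphi \to \pm 90^\circ the spacing collapses toward zero, so points that are neighbours in physical space near the poles are scattered across the memory and across processors, and communication comes to dominate the computation.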

UK MED-BIO: A new platform for the analysis of omics data

Professor Paul Elliott, Department of Medicine

Recent developments in omics technologies and exposure assessment methods provide an unprecedented opportunity to make real advances in understanding the links between environmental stressors, the microbiome, health and disease. The new MRC-funded UK MED-BIO programme brings together patient/cohort data, omics technologies, computing and other resources to create a dedicated infrastructure for effective data management, integration, analysis and visualisation. The ultimate aim is to address the challenges posed by the huge data volumes and data complexity being generated by modern science and the new omics technologies; so-called "big data". UK MED-BIO provides the capacity to assemble data on lifestyle characteristics, socio-demographic information, clinical data, disease phenotypes, progression and outcomes from well-established and phenotyped patient and population cohorts, together with multi-omic information derived from analysis of linked biological samples. New knowledge generated on disease aetiology and mechanisms will help to develop more targeted preventive and treatment strategies for chronic disease.

Intel tutorial - compilers, profilers, vectorization, OpenMP

Stephen Blair-Chappell, Intel Corporation, United Kingdom

Stephen is a Technical Consulting Engineer at Intel, and has worked in the Compiler team for the last 15 years. Amongst other things, Stephen provides consulting and training to the Intel Parallel Computing Centers in EMEA. Stephen is author of the book Parallel Programming with Intel Parallel Studio XE.

HPC trends and what they mean for me

Jim Cownie, Principal Engineer, Intel Corporation, United Kingdom

Jim is an Intel principal engineer who was until recently architect for the OpenMP* runtime. He has worked on parallel computing since 1979 when he started at Inmos working on Occam and the Transputer. He served on the HPF and MPI committees, designing the MPI profiling interface as chair of the MPI-1 profiling sub-committee. Since joining Intel ten years ago, amongst other things, he has worked on the Pin profiling infrastructure, Intel® Transactional Synchronization Extensions (Intel® TSX) and OpenMP.

In his talk, Jim will discuss some of the trends in HPC and then attempt to demonstrate how those trends impact the codes we’re writing today, so that we’ll be prepared for the kinds of machines that will be coming soon.

The perils of computer arithmetic

Dan Moore, Department of Mathematics

The details of computer arithmetic are considered. Issues arising from approximating a continuous field with a finite set of machine-representable numbers are explored. We examine how Intel CPUs adhere to, and deviate from, the IEEE-754 floating point arithmetic standard. The ways to control how the Intel CPU performs floating point arithmetic from within FORTRAN and C are set out for both the Intel and GNU gcc/gfortran compilers. We look at an extreme case where rounding error can change results substantially. Finally, we consider reproducibility problems arising from distributing a calculation across many CPUs simultaneously. (A knowledge of C or FORTRAN is assumed.)
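
A minimal C sketch of the kinds of effects the talk covers (my own illustration, not the speaker's material); every result below is correct behaviour under IEEE-754 round-to-nearest, not a compiler bug:

    #include <stdio.h>

    int main(void)
    {
        /* 0.1 has no exact binary representation, so the error
           accumulates: ten additions do not give exactly 1.0. */
        float sum = 0.0f;
        for (int i = 0; i < 10; ++i)
            sum += 0.1f;
        printf("ten additions of 0.1f: %.8f\n", sum);        /* 1.00000012 */

        /* Absorption: 1.0 is smaller than the spacing between
           adjacent doubles near 1e16, so it vanishes entirely. */
        printf("(1e16 + 1.0) - 1e16:  %g\n", (1.0e16 + 1.0) - 1.0e16);  /* 0 */

        /* Floating point addition is not associative, which is why
           splitting a sum across many CPUs (summing in a different
           order) can change the result bit for bit. */
        printf("(0.1 + 0.2) + 0.3:  %.17g\n", (0.1 + 0.2) + 0.3);
        printf("0.1 + (0.2 + 0.3):  %.17g\n", 0.1 + (0.2 + 0.3));
        return 0;
    }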

Allinea Forge

Florent Lebeau, Allinea

Florent Lebeau is an HPC Applications and Support Analyst at Allinea, where he provides training on Allinea's debugging and profiling tools. Many years of involvement in HPC have given him expertise in parallel programming and developer tools. Florent graduated from the University of Dundee with an MSc in Applied Computing and, before joining Allinea, worked for CAPS entreprise, where he developed profiling tools for HMPP Workbench and provided training on parallel technologies.

An Introduction to Quantum Computing and Programming - D-Wave Systems Inc.

Bo Ewald, President; Murray Thom, Director of Professional Services; Denny Dahl, Application and Programming Specialist; Andy Mason, Sales Director

D-Wave Systems has produced the first and only commercial quantum computer and recently announced a 1000-qubit processor, opening the door to new possibilities for solving some of the world's most challenging problems.

The D-Wave session will give an introduction to quantum computing and a brief tour of the D-Wave architecture, and will embark upon some quantum computer programming. Topics that will be covered include:

  • An overview of D-Wave
  • An introduction to quantum computing
  • The D-Wave adiabatic quantum computer
  • 1000 qubit benchmark results
  • Real world uses of a quantum computer
  • An overview of the D-Wave programming environment
  • Model, real-time examples of programming on a D-Wave 2X quantum computer

Research Data Management community session

Chair: Torsten Reimer, Research Office

Speakers:

  • Suzanna Ward, Colin Groom and Amy Sarjeant, The Cambridge Crystallographic Data Centre - Managing crystal structure data at the CCDC - a 50 year perspective
  • Henry Rzepa - The Why of Research Data Management, http://doi.org/7sb
  • Torsten Reimer - Research Data – policies, strategy and College support
  • Christian Jacobs - Research Data Management for Computational Science
  • Sarah Butcher

Research is based on principles like reproducibility - at least in theory. In practice, the lack of access to research data, methods, protocols and software has been a barrier to transparency and also to re-use, within and outside of academia. To facilitate re-use and transparency of research, funders are increasingly requiring researchers to develop a systematic approach to managing and sharing their data. This session will introduce participants to the funder and College positions on research data management, give an overview of the emerging support infrastructure and showcase academic exemplars and use cases from concrete projects across the College.

Computational Molecular Sciences community session

Chair: Michael Bearpark, Department of Chemistry

Introduction:

  • Michael Bearpark, Professor, Computational Chemist, Gaussian Author and Educator

Case studies:

  • Sheridan Few, Research Assistant, Plastic Electronics, Physics
  • Mimi Hii, Reader, Organic Chemistry, Catalysis
  • Peter Bradshaw, PhD Student, Life Sciences, Medicine
  • Alexandra Simperler, NSCCS Training Facilitator

Panel discussion:

  • Chair: Michael Bearpark
  • Tricia Hunt, Reader, Computational Chemist, Ionic Liquids, Catalysis, H-Bonding
  • Alexandra Simperler

The aim of the Computational Molecular Sciences session is to bring together researchers from across the College who use high performance computing for computational chemistry applications, to discover what expertise they share or would benefit from, and to introduce each other. Following several short presentations, there will be a panel discussion in which users can put questions to training and applications specialists and software developers based at Imperial.

Research Software Engineering community session

Chair: Jeremy Cohen, Department of Computing

Speakers:

  • Dr Jeremy Cohen, Department of Computing - Welcome and session introduction

What is Research Software Engineering? Why is it important to the UK computational science community and what should we be doing about it? 

  • Dr Mark Stillwell, Department of Computing - Current RSE activities in the UK academic community

What is happening already in the UK in terms of an RSE community? How might we get involved and work with the existing community? How can Imperial build upon existing activities?

Research Software Engineering: The computational scientist's perspective

Short talks from two computational scientists who rely on software and advanced infrastructure to support their research. What are the challenges they and their groups/colleagues face? What significance do they attach to software engineering within their group/team? 

  •  Prof. Matthew Foulkes, Department of Physics
  •  Dr Radu Cimpeanu, Department of Mathematics


Research Software Engineering: The RSE's perspective

Short talks from two researchers undertaking research software engineering-type roles highlighting the challenges they help to solve and how their work supports others within the domains in which they work. 

  • Dr Chris Cantwell, Department of Aeronautics
  • Dr Michael Lange, Department of Earth Science and Engineering


Panel:

  • Prof. Matthew Foulkes, Department of Physics
  • Dr Miguel Oliveira, HPC Service, ICT
  • Jazz Mack Smith, Bioinformatics Support Service, Department of Surgery & Cancer 
  • Lawrence Mitchell, Department of Computing 

The panel discussion will provide an opportunity for the audience and panel to investigate some of the issues raised in the first part of the session. A panel of five members covering researchers, academics and technical experts will be assembled. The session will begin with a brief introduction from each of the panel members followed by questions/comments from the audience and discussion. Some possible issues for discussion include:

  • Different models for RSE-type posts – a centralised team or distributed across domains
  • How important are RSEs in different domains? Is there a consistent requirement for such expertise across scientific domains?
  • Sustainability of RSE roles
  • Impact, opportunities and career paths

Session Overview

Software Engineering has been a key aspect of large-scale scientific work for many years, and individuals with specialist skills in the development of scientific software, in addition to domain knowledge, are now often key members of computational science research groups. However, as software and computational infrastructure have become more complex, these individuals need increasingly large skill sets and the knowledge of how to apply them within a research context. Despite the importance of such roles, supporting them can be a challenge for PIs, and the individuals undertaking this work often need to combine their software development with other postdoctoral research tasks.

The term Research Software Engineering (RSE) has been coined to describe the work undertaken by computational software developers working within research groups and an RSE community has begun to emerge in the UK (http://www.rse.ac.uk). This represents an important step in formally recognising the role of computational software developers who understand the research environment and research processes. Nonetheless, there is generally a lack of long-term sustainability for RSE roles and they often become a step on a route to a more traditional academic career as a means for RSEs to ensure stability and a long-term career path. There is, however, increasing recognition of the need to find ways to maintain the vast technical knowledge and expertise that RSEs build up and, for those who wish to remain in a software development-focused role, to find long-term career options.

This session, consisting of six short "lightning talks" and a panel discussion, will bring together Research Software Engineers (RSEs), computational scientists and research leaders to look at the benefits and challenges of RSE roles from a range of different perspectives. It will highlight the specific challenges that come with modern large-scale computational software and associated research and look at the skills and capabilities that RSEs can bring to a research team to help address these challenges.

Genomics community session

Chair: Michael Mueller

Speakers:

  • Michael Mueller - Variant Analysis
  • Alona Sosinsky - Mutation Detection
  • Claire Morgan - RNA Expression Analysis
  • Benjamin Lehne - Methylation Analysis
  • Thomas Carroll - ChIP-seq Analysis

Advancements in DNA sequencing technologies over the last decade have revolutionised the field of genome research by enabling the cost-effective and rapid generation of unprecedented amounts of genomic information, both in terms of the depth of data sets and the number of samples. Interrogating genomic data sets requires high performance computing infrastructure to store, process and analyse data, as well as computational skills to string different analysis tools together into workflows and to parallelise analysis tasks. In this session, analysts and researchers from the Imperial BRC Genomics Facility, the MRC Clinical Sciences Centre, the Department of Medicine and the School of Public Health will talk about the implementation of HPC workflows to analyse data from various DNA sequencing applications, including variant calling, somatic mutation detection, differential expression calling, ChIP-seq analysis and methylation profiling.

CUDA workshop

Matt Harvey, HPC Service

This short workshop will introduce GPU programming using CUDA. It will cover:

  • GPU hardware: what it is, how it's different, when it's useful
  • The CUDA programming model, and adapting CPU code for GPUs
  • GPU memory management
  • Performance profiling and debugging with the CUDA development tools

A laptop and familiarity with Linux and C are required. The sketch below gives a flavour of what the programming model looks like.
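
A minimal CUDA C vector addition, illustrating the host/device split that the workshop will cover (my sketch, not workshop material; error checking omitted for brevity):

    #include <stdio.h>

    /* Kernel: each GPU thread adds one pair of elements. */
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      /* guard: the grid may overshoot n */
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        /* Host-side setup, exactly as in ordinary C. */
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        /* The GPU has its own memory: allocate it and copy the inputs over. */
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        /* Launch enough 256-thread blocks to cover all n elements. */
        const int threads = 256;
        vecAdd<<<(n + threads - 1) / threads, threads>>>(d_a, d_b, d_c, n);

        /* Copy the result back and check one element. */
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expected 3.0)\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }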

Introduction to HPC Service at Imperial

Katerina Michalickova, HPC Service

This talk is intended for scientists who would like to start using the HPC service at Imperial and have little or no previous experience with cluster computing. I will introduce the concepts of a computer cluster, the queueing system, and file and data management on the systems, and go through examples of job scripts and job execution. I assume some knowledge of the command line and shell scripting; our HPC wiki has command-line and shell-scripting tutorials that might help. Everyone is welcome to bring their laptop and try out the examples.

Contact us

Email us at cmse@imperial.ac.uk, or contact the Directors directly, as listed under People.

You could also write to us at:

Computational Methods in Science and Engineering
Imperial College London
Exhibition Road
South Kensington
London
SW7 2AZ
United Kingdom

For information on how to find us, see South Kensington Campus.