Supercomputing or High Performance Computing (HPC) platforms are used to power discoveries and to reduce time-to-results in a wide variety of disciplines, such as astrophysics, archaeology, and financial trading. To utilize these high-end platforms optimally, it is critical to have scalable and efficient software (applications, middleware, libraries, and tools) that can take advantage of the innovative hardware features in these platforms. However, developing and maintaining HPC software remains a challenging task because the HPC platforms for which it is developed typically have a short life-span and are replaced with next-generation platforms within a few years. As we progress towards the exascale computing era, developing and maintaining HPC software is likely to become even more challenging due to the increasing complexity of HPC platforms and the pressing need for power efficiency and memory-usage optimization. Some of these challenges can be mitigated, for current and future generations of systems, by adopting innovations from advanced software engineering sub-disciplines, such as model-driven engineering, generative programming, and adaptive and reflective software systems.
The goals of the first workshop on “Software Challenges to Exascale Computing” are to foster international collaborations across the HPC and the advanced software engineering disciplines, and to exchange knowledge on the challenges and solution strategies for developing scalable and efficient HPC software. The workshop attendees will learn about the state-of-the-art and the state-of-the-practice in the areas of HPC software development and advanced software engineering through presentations, hands-on sessions, and open-discussion sessions. Those already skilled in the advanced software engineering discipline will learn about the challenges and opportunities in the HPC domain, and can find interesting test cases for generalizing their innovative approaches.
The workshop will provide a forum through which hardware vendors and software developers can communicate with each other and influence the architecture of the next generation supercomputing systems and the supporting software stack. By fostering cross-disciplinary associations, the workshop will serve as a stepping stone towards innovations in the future.
Benefits to researchers and users in academia: disseminate your results to the public, and find potential collaborators.
Benefits to software developers: understand the future trends in the HPC hardware and develop collaborations in the code modernization and optimization disciplines.
Benefits to HPC service providers: understand the challenges that the community faces in using the HPC platforms efficiently, and connect with the user-community.
Benefits to HPC hardware vendors: understand the evolving needs of the HPC community, and network with potential customers.
Benefits to students: network with HPC and advanced software engineering professionals and researchers, learn about internship and career opportunities, and discuss opportunities for higher education.
Abstract: Advanced cyberinfrastructure and the ability to perform large-scale simulations and accumulate massive amounts of data have revolutionized scientific and engineering disciplines. In this talk I will give an overview of the National Strategic Computing Initiative (NSCI) that was launched by Executive Order (EO) 13702 in July 2015 to advance U.S. leadership in high performance computing (HPC). The NSCI is a whole-of-nation effort designed to create a cohesive, multi-agency strategic vision and Federal investment strategy, executed in collaboration with industry and academia, to maximize the benefits of HPC for the United States. I will then discuss NSF’s role in NSCI and present three cross-cutting software programs ranging from extreme scale parallelism to supporting robust, reliable and sustainable software that will support and advance sustained scientific innovation and discovery.
Bio: A veteran of High Performance Computing (HPC), Dr. Chaudhary has been actively participating in the science, business, government, and technology innovation frontiers of HPC for over two decades. His contributions range from heading research laboratories and holding executive management positions to starting new technology ventures. He is currently a Program Director in the Office of Advanced Cyberinfrastructure at the National Science Foundation. He is an Empire Innovation Professor of Computer Science and Engineering at the Center for Computational Research at the New York State Center of Excellence in Bioinformatics and Life Sciences at SUNY Buffalo, and the Director of the university’s Data Intensive Computing Initiative. He is also the co-founder of the Center for Computational and Data-Enabled Science and Engineering.
He co-founded Scalable Informatics, a leading provider of pragmatic, high performance software-defined storage and compute solutions to a wide range of markets, from financial and scientific computing to research and big data analytics. From 2010 to 2013, Dr. Chaudhary was the Chief Executive Officer of Computational Research Laboratories (CRL), where he grew the company globally to be an HPC cloud and solutions leader before selling it to Tata Consultancy Services. Prior to this, as Senior Director of Advanced Development at Cradle Technologies, Inc., he was responsible for advanced programming tools for multi-processor chips. He was also the Chief Architect at Corio Inc., which had a successful IPO in June 2000.
Dr. Chaudhary’s research interests are in High Performance Computing and Applications to Science, Engineering, Biology, and Medicine; Big Data; Computer Assisted Diagnosis and Interventions; Medical Image Processing; Computer Architecture and Embedded Systems; and Spectrum Management. He has published approximately 200 papers in peer-reviewed journals and conferences and has been the principal or co-principal investigator on over $28 million in research projects from government agencies and industry. Dr. Chaudhary was awarded the prestigious President of India Gold Medal in 1986 for securing the first rank amongst graduating students at the Indian Institute of Technology (IIT). He received the B.Tech. (Hons.) degree in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur, in 1986 and a Ph.D. degree from The University of Texas at Austin in 1992.
Abstract: Today, high-performance computing (HPC) application codes are often optimized and specialized for a particular system configuration to exploit the system's potential. One severe problem is that simply modifying an HPC application code often results in degrading the performance portability, readability, and maintainability of the code. Therefore, we have been developing a code transformation framework, Xevolver, so that users can easily define their own code transformation rules for individual cases, in order to express how each application code should be changed to achieve high performance. In this talk, I will briefly review the Xevolver framework and introduce some case studies to discuss the benefits of the user-defined code transformation approach.
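To illustrate the general idea of user-defined code transformations, here is a hypothetical Python sketch (not the actual Xevolver interface, which applies user-defined rules to an AST representation of the code). A user-written rule turns an application-level marker into a platform-specific OpenMP directive, so the system-specific tuning lives in the rule rather than in the application code; the marker text and the helper function are invented for illustration.

import re

def make_rule(pattern, replacement):
    # Package a user-defined rewrite rule as a reusable function.
    compiled = re.compile(pattern, re.MULTILINE)
    return lambda source: compiled.sub(replacement, source)

# Hypothetical rule: replace a user-defined marker with an OpenMP directive
# chosen for the target system; the application source itself stays generic.
add_omp = make_rule(
    r"^( *)!\$xev parallelize\n( *do )",
    r"\1!$omp parallel do\n\2",
)

fortran_src = """\
      !$xev parallelize
      do i = 1, n
        y(i) = a * x(i) + y(i)
      end do
"""

print(add_omp(fortran_src))  # emits the specialized code; the original file is untouched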
Bio: Hiroyuki Takizawa is currently a professor at the Cyberscience Center, Tohoku University. His research interests include performance-aware programming, high-performance computing systems, and their applications. Since 2011, he has been leading a research project, supported by JST CREST, to explore an effective way of assisting legacy HPC code migration to future-generation extreme-scale computing systems. He received the B.E. degree in Mechanical Engineering, and the M.S. and Ph.D. degrees in Information Sciences, from Tohoku University in 1995, 1997, and 1999, respectively.
Abstract: Intel is investing in a broad set of technologies to address the challenges of Exascale computing. These technologies are targeted at delivering next-generation performance in a configurable system that can achieve exceptional results in data analytics, traditional high performance computing, and artificial intelligence. Being able to address all of these application domains is of critical importance.
Intel is investing in processors, fabric, memory, and software. Each will be discussed along with its importance in achieving Exascale.
Bio: Dr. Nash Palaniswamy has been at Intel since October 2005 and focuses on Enterprise and High Performance Computing in the Datacenter group. He is currently the Senior Director for Worldwide Solutions Enablement and Revenue Management for Enterprise and HPC. In this role, he is responsible for managing all strategic opportunities in Enterprise and HPC and for managing and meeting revenue for the Enterprise and Government segment in Intel’s datacenter group. Dr. Palaniswamy leads a team that drives strategic opportunities worldwide (solutions, architecture, products, business frameworks, etc.) in collaboration with Intel’s ecosystem partners.
His prior responsibilities at Intel included being the lead for worldwide business development and operations for Intel® Technical Computing Solutions and Intel® QuickAssist Technology based accelerators in HPC, and serving as Intel’s representative on the World Wide Web Consortium Advisory Committee. Prior to joining Intel as part of the acquisition of Conformative Systems, an XML accelerator company, he served in several senior executive positions in the industry, including Director of System Architecture at Conformative Systems, CTO/VP of Engineering at MSU Devices (a publicly traded company), and Director of the Java Program Office and Wireless Software Strategy in the Digital Experience Group of Motorola, Inc.
Dr. Palaniswamy holds a B.S. in Electronics and Communications Engineering from Anna University (Chennai, India) and an M.S. and Ph.D. from the University of Cincinnati in Electrical and Computer Engineering.
Abstract: As systems scale in size and complexity, reasoning about their properties and controlling their behavior requires complex simulations, which often involve multiple interacting co-simulators that must be deployed and configured on high performance computing resources. Increasingly, cloud platforms, which may even be federated, offer cost-effective solutions for realizing such deployments. However, researchers and practitioners alike face a plethora of challenges stemming from the need for rapid provisioning and deprovisioning, ensuring reliability, defining autoscaling strategies for changing workloads, handling resource unavailability, and exploiting modern features such as GPUs, FPGAs, and NUMA architectures, to name a few; they generally lack the expertise to overcome these challenges. Model-driven engineering (MDE) offers significant promise to address these challenges by providing users with intuitive abstractions and by automating the deployment and configuration tasks. This talk describes our ongoing work in this space and will highlight both the MDE and the systems solutions that we are investigating.
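To make the model-driven idea concrete, here is a small hypothetical Python sketch (not the speaker's actual tooling): a declarative model of a co-simulation is expanded into per-instance launch commands, so users state what should be deployed rather than scripting how. The class, the field names, and the container images are invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class SimulatorSpec:
    name: str           # logical name of the co-simulator
    image: str          # container image providing it
    replicas: int       # how many instances to provision
    needs_gpu: bool = False

def generate_launch_plan(specs: List[SimulatorSpec]) -> List[str]:
    # Transform the high-level model into concrete launch commands.
    commands = []
    for spec in specs:
        for i in range(spec.replicas):
            gpu_flag = "--gpus all " if spec.needs_gpu else ""
            commands.append(f"docker run -d {gpu_flag}--name {spec.name}-{i} {spec.image}")
    return commands

model = [
    SimulatorSpec("power-grid", "example/gridlab-d:latest", replicas=2),
    SimulatorSpec("traffic", "example/sumo:latest", replicas=1, needs_gpu=True),
]
for cmd in generate_launch_plan(model):
    print(cmd)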
Bio: Dr. Aniruddha S. Gokhale is an Associate Professor in the Department of Electrical Engineering and Computer Science, and a Senior Research Scientist at the Institute for Software Integrated Systems (ISIS), both at Vanderbilt University, Nashville, TN, USA. His current research focuses on developing novel solutions to emerging challenges in edge-to-cloud computing, real-time stream processing, and publish/subscribe systems as applied to cyber-physical systems, including smart transportation and smart cities. He is also working on using cloud computing technologies for STEM education. Dr. Gokhale obtained his B.E. (Computer Engineering) from the University of Pune, India, in 1989; his M.S. (Computer Science) from Arizona State University in 1992; and his D.Sc. (Computer Science) from Washington University in St. Louis in 1998. Prior to joining Vanderbilt, Dr. Gokhale was a member of technical staff at Lucent Bell Laboratories, NJ. Dr. Gokhale is a Senior Member of both IEEE and ACM, and a member of ASEE. His research has been funded over the years by DARPA, DoD, industry, and NSF, including an NSF CAREER award in 2009.
Abstract: GPUs have been used to accelerate HPC algorithms that are based on first-principles theory and proven statistical models, producing accurate results in multiple science domains. This talk will provide insights into the HPC domain and how it affects the programs you write, today and in the future, across various domains.
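As a minimal, hypothetical illustration of GPU offload for an HPC-style kernel (not material from the talk), the sketch below uses CuPy, whose NumPy-like API executes array operations on an NVIDIA GPU; it assumes CuPy and a CUDA-capable GPU are available.

import numpy as np
import cupy as cp

n = 10_000_000
a = np.float32(2.5)
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# CPU reference with NumPy
y_cpu = a * x + y                          # AXPY on the CPU

# GPU version with CuPy: the arrays live in GPU memory and the kernel runs on the device
x_gpu, y_gpu = cp.asarray(x), cp.asarray(y)
y_gpu = a * x_gpu + y_gpu                  # AXPY on the GPU
cp.cuda.Stream.null.synchronize()          # wait for the GPU to finish

# Copy the GPU result back to the host and check that both paths agree
print(np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5))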
Bio: Bharatkumar Sharma obtained his master's degree in Information Technology from the Indian Institute of Information Technology, Bangalore. He has around 10 years of development and research experience in the domains of Software Architecture and Distributed and Parallel Computing. He is currently working with Nvidia as a Senior Solution Architect, South Asia. He has published papers and journal articles in the fields of Parallel Computing and Software Architecture.
Abstract: HPC has huge future scope for scientific applications, large databases, AI/deep learning, business applications in the financial industry, telecom (particularly 5G), and more. Unfortunately, present systems serve the market with separate products, in a rather fragmented manner. The challenge is to create a "Unified Architecture" that will unify the user space, following a "top-down" approach instead of a "bottom-up" approach that forces applications to fit their code to the peculiarities of particular CPUs, accelerators, system architectures, and so on. Today's potential of HPC and silicon technology is grossly underutilized because of the effort and time spent tailoring parallel code to specific machines, and the incorporation of FPGA-based reconfigurability into general applications is being delayed unnecessarily. If the economic benefits of what is technologically feasible are not delivered to society quickly enough, investment in further technological enhancements slows down; all HPC users will benefit from addressing this in the long run. At the same time, we are focusing on how very complex code can be put together in a shorter time span, and in such a way that the investment in top-level code design is long-lived in the face of anticipated changes in successive generations of chips, interfaces, and so on. Because of the vast scope, we will present only a broad overview and elaborate on a couple of aspects. Programmer productivity must be improved by designing and writing parallel code at multiple levels of abstraction, supported by more expressive notations and tools for transforming one level to the next. It is also necessary to do away with the artificial boundary between hardware description languages and the reach of traditional compilers starting from high-level languages; this will ensure more seamless use of FPGA-based reconfigurability in the unified system architecture. Since the hardware aspects are too complex to cover fully, only one particular aspect related to GPUs will be covered. Initially, we will address the application of optimization algorithms to economic modelling and telecom systems.
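As one small, hypothetical illustration of expressing the same computation at two levels of abstraction (this is not the speakers' proposed notation), the Python sketch below contrasts a concise, machine-independent array expression with an explicitly parallel loop; a transformation tool of the kind argued for above would derive the lower-level, machine-tuned form from the higher-level one automatically. It assumes NumPy and Numba are installed.

import numpy as np
from numba import njit, prange

# High-level form: concise, machine-independent statement of intent
def stencil_high_level(x):
    out = np.zeros_like(x)
    out[1:-1] = 0.5 * (x[:-2] + x[2:])     # 3-point averaging stencil
    return out

# Lower-level form: an explicitly parallel loop, closer to what gets tuned
# for a specific machine (threads, blocking, vectorization hints, ...)
@njit(parallel=True)
def stencil_low_level(x):
    out = np.zeros_like(x)
    for i in prange(1, x.shape[0] - 1):
        out[i] = 0.5 * (x[i - 1] + x[i + 1])
    return out

x = np.random.rand(1_000_000).astype(np.float32)
print(np.allclose(stencil_high_level(x), stencil_low_level(x)))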
Bio: VCV Rao received his Master's degree in Mathematics from Andhra University, India, in 1985 and his Ph.D. in Mathematics from IIT Kanpur in 1993. He has been associated with C-DAC since 1993, working on High Performance Computing projects. He has contributed to the design, development, and deployment of C-DAC's PARAM series of supercomputers and the GARUDA Grid Computing project, to Parallel Computing workshops, and to PARAM installations at premier academic institutions. Currently, he is an Associate Director in the High-Performance Computing Technologies (HPC-Tech) Group at C-DAC, Pune.
Bio: Karmarkar received his B.Tech. in EE from IIT Bombay in 1978, an M.S. from the California Institute of Technology in 1979, and a Ph.D. in Computer Science from the University of California, Berkeley, in 1983. He is well known for the Karmarkar algorithm, a cornerstone in the field of Linear Programming. He has been a Fellow of Bell Laboratories since 1987. In 2006-2007, he served as scientific advisor to the Chairman of the Tata group, founded CRL, and architected the "EKA" system, which stands for "Embedded Karmarkar Algorithm". Currently, he is a Consultant Chief Architect at C-DAC, Pune. He is also a distinguished visiting professor at several institutes, such as IISc and the IITs.
The SCEC workshop will include a track for lightning talks, during which ten-minute talks will be presented in a sequence. The talks will provide a high-level overview of topics that are aligned with the theme of the workshop. While the time allocated for the lightning talks may not be enough to present the fine details of the chosen topic, it should be enough to include the key information that piques the interest of the audience for an engaging discussion after the talk. The abstract and slides of the talks will be published on the workshop website. There can be multiple authors on a submission, but only one presenter is permitted for each lightning talk due to time constraints.
Guidelines for preparing the submission for the lightning talk are as follows:
In at most 300 words, the abstract should answer the following questions on a topic that is relevant to the workshop:
A rough draft of the proposed presentation (up to 5 slides) should also be submitted along with the abstract. The abstracts and the slides must be submitted in PDF format through the submission system at the following URL:
*Abstracts will not be accepted over email. Abstracts which are incomplete or received after the deadline will not be considered. The submission system will close on November 20, 2017.
Presentation guidelines:
The registration fee for the workshop is Rupees 1000 (US $16) and can be paid at the venue using cash or a credit/debit card. With the support of our sponsors, we are able to waive the fee for a selected number of participants who register by December 7, 2017. To request the fee waiver, please send an email to "scecforum@gmail.com" with the subject line "Registration Fee Waiver" and let us know how the fee waiver will help you. All workshop attendees should register in advance by filling out the following form: