Intel® Parallel Computing Center at Information Technology Center (ITC), the University of Tokyo

Information Technology Center (ITC), the University of Tokyo

Principal Investigators:

Yutaka Ishikawa, Professor, ITC/University of Tokyo
Kengo Nakajima, Professor, ITC/University of Tokyo
Takahiro Katagiri, Associate Professor, ITC/University of Tokyo
Satoshi Ohshima, Assistant Professor, ITC/University of Tokyo


SCD/ITC, The University of Tokyo, Japan

The Supercomputing Division (SCD) of the Information Technology Center (ITC), The University of Tokyo was originally established as the Supercomputing Center of the University of Tokyo in 1965, making it the oldest academic supercomputer center in Japan. ITC is also a core organization of the “Joint Usage/Research Center for Interdisciplinary Large-Scale Information Infrastructures (JHPCN)” project and a part of the “High-Performance Computing Infrastructure (HPCI)” operated by the Japanese government.

Currently, SCD/ITC consists of more than 10 faculty members, whose expertise covers a wide range of research disciplines in computer science, applications, and applied mathematics. SCD/ITC now operates three supercomputer systems, including a Fujitsu PRIMEHPC FX10 system (Oakleaf-FX) with a peak performance of 1.13 PFLOPS.

Joint Center for Advanced High Performance Computing (JCAHPC)

In 2013, the Center for Computational Sciences, University of Tsukuba (CCS) and ITC agreed to establish the Joint Center for Advanced High Performance Computing (JCAHPC). The primary mission of JCAHPC is designing, installing, and operating the Post T2K System, which is based on many-core architectures such as the Intel® Xeon Phi™. The Post T2K System is expected to achieve 20-30 PFLOPS of peak performance and will be installed in FY.2015. CCS and ITC will develop system software, numerical libraries, and large-scale applications for the Post T2K System under intensive collaboration through JCAHPC.


ppOpen-HPC is an open-source infrastructure for the development and execution of optimized and reliable simulation code on post-peta-scale (pp) parallel computers based on many-core architectures. It consists of various types of libraries covering general procedures for scientific computation. Source code developed on a PC with a single processor is linked with these libraries, and the generated parallel code is optimized for post-peta-scale systems. The target post-peta-scale system is the Post T2K System. ppOpen-HPC supports approximately 2,000 users of the supercomputer systems at the University of Tokyo, enabling them to switch from homogeneous multicore clusters to the Post T2K System. ppOpen-HPC is developed by SCD/ITC and collaborators as a five-year project (FY.2011-2015) supported by the Japanese government.


Our primary target as a member of the Intel® Parallel Computing Centers (Intel® PCC) program is intensive optimization of preconditioned iterative solvers for structured/unstructured sparse coefficient matrices in UTbench for the new Intel® Xeon® and Intel® Xeon Phi™ processors, and construction of general strategies for optimizing these procedures on the new processors.

UTbench consists of two codes, GeoFEM-Cube/CG and Poisson3D-OMP. GeoFEM-Cube/CG is a benchmark code based on GeoFEM that solves 3D static linear-elastic problems in solid mechanics. It contains typical procedures of finite-element computations, such as matrix assembly and preconditioned iterative solvers. Two parallel programming models (flat MPI and OpenMP/MPI hybrid) are implemented in GeoFEM-Cube/CG. Poisson3D-OMP is a finite-volume-based 3D Poisson equation solver using the ICCG iterative method, parallelized with OpenMP. It supports a variety of reordering methods, matrix storage formats (CRS and ELL), and coalesced/sequential memory access models.

The iterative solvers of GeoFEM-Cube/CG and Poisson3D-OMP are also utilized as iterative solvers in ppOpen-HPC. Moreover, UTbench will be adopted as one of the benchmarks for procurement of the Post T2K System at JCAHPC.

