Date: April 8, 2005
Time: 3:15 pm
Refreshments: 3:00 pm
Location: Stata - Gates - 7th Floor Lounge
The Design of the VMware VMkernel -- An OS Kernel for Managing Virtual Machines
Speaker: Boon Ang
Speaker Affiliation: VMware
VMware ESX Server is a platform for running many Intel-x86 virtual machines on a single physical machine for purposes of consolidating workloads and simplifying system management. ESX Server runs an OS kernel specifically designed to manage virtual machines (VMs). This OS kernel, known as the VMkernel, provides strict resource allocation guarantees for VMs, highly efficient I/O, and advanced reliability features. It runs on servers with up to 16 processors and can manage up to 8 VMs per processor. In this talk we describe the architecture of the VMkernel. We focus on its design for high-performance, reliable networking and disk access, and describe some of the unique issues that it solves in order to manage VMs effectively. We also describe a file system that we have built to provide efficient access to the virtual disk files of virtual machines. We conclude with a description of some interesting research topics that remain.
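The strict resource allocation guarantees mentioned above are commonly built on proportional-share scheduling with per-VM reservations, limits, and shares. The following Python sketch illustrates that general idea only; it is not the VMkernel's actual algorithm, and all names and numbers are invented:

```python
# Hypothetical sketch of proportional-share CPU allocation among VMs.
# NOT the VMkernel's actual scheduler; it only illustrates how strict
# guarantees can follow from per-VM reservations, limits, and shares.

def allocate_cpu(vms, total_mhz):
    """Divide CPU capacity among VMs in proportion to their shares,
    respecting each VM's reservation (min) and limit (max)."""
    # First satisfy every reservation.
    alloc = {name: cfg["min"] for name, cfg in vms.items()}
    remaining = total_mhz - sum(alloc.values())
    total_shares = sum(cfg["shares"] for cfg in vms.values())
    for name, cfg in vms.items():
        # Hand out the leftover capacity in proportion to shares.
        extra = remaining * cfg["shares"] / total_shares
        alloc[name] = min(cfg["max"], alloc[name] + extra)
    # Simplification: surplus from limit-capped VMs is not
    # redistributed; a real scheduler would reallocate it.
    return alloc

vms = {
    "web":  {"shares": 2000, "min": 500,  "max": 3000},
    "db":   {"shares": 1000, "min": 1000, "max": 4000},
    "test": {"shares": 1000, "min": 0,    "max": 1000},
}
print(allocate_cpu(vms, 4000))
# "web" gets twice the discretionary capacity of the others because
# it holds twice the shares, while "db" keeps its 1000 MHz reservation.
```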
Boon Seong Ang's Bio:
Boon did his undergraduate studies in Computer Systems Engineering at Stanford University and his SM and PhD in EECS at MIT/LCS. While at LCS, he was a member of CSG, where he worked on the Monsoon dataflow project and the various StarT projects. After graduating from MIT, he joined HP Labs, where he worked on more conventional networking (TCP/IP/Ethernet), reconfigurable systems, high-performance networking, and architectures for supporting utility computing. He left HP Labs in 2004 to join VMware, where he is part of the ESX team. His research interests include compilation, processor microarchitecture, parallel processing architecture, network/communication interface design, virtual machine platforms, and I/O virtualization.
Date: Friday, Nov. 5, 2004
Time: 3:30 pm
Refreshments: 3:15 pm
Location: Bldg. 34-401A (Grier Room)
Supporting Multiple Models of Computation in System Level Design Languages
Sandeep K. Shukla
Electrical and Computer Engineering Department
Virginia Polytechnic and State University
System Level Design Languages (SLDLs) and frameworks such as SystemC, SpecC, and their alternatives lack support for heterogeneous and hierarchical modeling with formal compositional properties. In this talk we discuss how we have built heterogeneous modeling and simulation extensions for SystemC. The key to this heterogeneity is granting distinct Models of Computation (MoCs) first-class status in the design framework. This has been achieved without changing the SystemC language or compromising the ability to compile simulation models with any standard C++ compiler. Our experiment with heterogeneity illustrates what SLDLs require in order to adequately raise the abstraction level for design entry in today's design flows.
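To make the idea of first-class Models of Computation concrete, here is a toy sketch, in Python rather than SystemC and with invented names: each MoC is a kernel object carrying its own scheduling rule, and a dataflow subsystem is composed hierarchically inside a discrete-event one.

```python
# Toy illustration (not SystemC) of MoCs as first-class objects:
# each kernel encapsulates one scheduling rule, and heterogeneous
# models compose kernels hierarchically. All names are invented.

class SDFKernel:
    """Synchronous dataflow MoC: fire every actor once per iteration."""
    def __init__(self, actors):
        self.actors = actors
    def run(self, iterations):
        for _ in range(iterations):
            for actor in self.actors:
                actor()

class DEKernel:
    """Discrete-event MoC: process events in timestamp order."""
    def __init__(self):
        self.events = []                      # (time, action) pairs
    def schedule(self, time, action):
        self.events.append((time, action))
    def run(self):
        for time, action in sorted(self.events, key=lambda e: e[0]):
            action(time)

log = []
sdf = DEKernel.__new__ if False else SDFKernel([lambda: log.append("fir_tap")])
de = DEKernel()
# Hierarchy: a DE event triggers one iteration of the SDF subsystem.
de.schedule(5, lambda t: sdf.run(1))
de.schedule(2, lambda t: log.append(f"bus_irq@{t}"))
de.run()
print(log)  # -> ['bus_irq@2', 'fir_tap']
```

The point is that neither kernel knows about the other's scheduling rule; composition happens through a narrow interface, which is what "first-class MoC status" buys.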
Bio: Sandeep K. Shukla (email@example.com) is an assistant professor of computer engineering and the deputy director of the Center for Embedded Systems for Critical Applications (CESCA) at Virginia Tech. Sandeep has co-authored “SystemC Kernel Extensions for Heterogeneous Modeling” and co-edited “Nano, Quantum and Molecular Computing: Implications to High Level Design and Validation” and “Formal Methods and Models for System Design: A System Level Perspective”, all of which were published by Kluwer. Sandeep was recently awarded the PECASE award for his research in design automation for embedded systems.
Date: Friday, October 15, 2004
Time: 3:30 pm
Refreshments: 3:15 pm
Location: Stata Center - 32-G449 (Patil)
Extreme Makeover for System Design
Daniel D. Gajski
Center for Embedded Computer Systems
University of California at Irvine
With the complexity of Systems-on-Chip rising almost daily, the design community has been searching for a new vision that can handle this complexity with increased productivity and decreased time-to-market. Obvious solutions, such as raising levels of abstraction, introducing a variety of IPs, or offering new design languages, will not solve the problem but only prolong the present state of inefficiency and confusion. What is needed is a drastic change in design methodology for complex systems that consist of software and hardware. To design such systems efficiently, we need a new approach with a new design flow, new models with well-defined semantics, and a new formalism that supports system synthesis and verification of software and hardware.
To find the solution, we will first look at the system gap between SW and HW design and derive requirements for a design flow that includes software as well as hardware. To enable new EDA tools for model generation, simulation, synthesis, and verification, the design flow has to be well defined, with unique abstraction levels, model semantics, and model transformations corresponding to the design decisions made by designers. We will introduce the concept of a model algebra that supports this approach and can serve as an enabler for the extreme makeover of system design and system EDA. We will support this concept with hard data and finish with a prediction and a roadmap toward the final goal: increasing productivity by several orders of magnitude while reducing the expertise needed to design complex systems to the basic principles of design science.
Date: Monday, September 20, 2004
Time: 2:00 pm
Refreshments: 1:45 pm
Location: Stata Center - Gates Tower - 7th floor lounge
Designing Parallel Operating Systems using Modern Interconnects
Eitan Frachtenberg
Los Alamos National Laboratory
The use of clusters as high-capability and high-capacity computers is growing rapidly in industry, academia, and government. This growth is accompanied by fast-paced progress in cluster-aware hardware, in particular in interconnection technology. Contemporary networks offer not only excellent performance, as expressed by latency and bandwidth, but also advanced architectural features, such as programmable network interface cards, hardware support for collective communication operations, and support for modern communication mechanisms such as MPI and RDMA. These network mechanisms pave the way to advances in system software for large-scale clusters. Such machines are typically composed of loosely coupled, independent compute nodes, each running a local operating system such as Linux, and this loosely coupled approach is inadequate for many large-scale system tasks, such as resource management, job scheduling, and fault tolerance.
Our research at Los Alamos National Laboratory has focused on leveraging the features of modern interconnects to address these issues with a global, cohesive view. As part of this work, we have implemented two novel job scheduling algorithms that make use of advanced collective communication capabilities. We have also implemented some of the more traditional job scheduling algorithms and compared the performance of these algorithms in several scenarios and cluster architectures. This talk discusses the challenges involved in managing large-scale clusters and our proposed solutions, which rely on modern interconnects. In particular, the talk will cover aspects of resource management, job scheduling, and user-level communication, and the main experimental results we obtained for these. If time permits, an application of our model to fault tolerance will also be discussed.
Joint work with Dror Feitelson (HUJI) and Fabrizio Petrini (LANL)
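As a rough illustration of why hardware collectives matter for cluster-wide scheduling, the simplified simulation below (invented names, not the actual LANL implementation) shows that switching every node to the same job costs one multicast operation instead of one message per node:

```python
# Simplified sketch of gang-style context switching over a hardware
# multicast, versus point-to-point messages. Not the real system;
# it only counts messages to show why collectives scale.

class Node:
    def __init__(self, nid):
        self.nid = nid
        self.current_job = None

class Cluster:
    def __init__(self, n_nodes):
        self.nodes = [Node(i) for i in range(n_nodes)]
        self.messages_sent = 0

    def multicast_switch(self, job):
        # One NIC-supported collective reaches every node at once.
        self.messages_sent += 1
        for node in self.nodes:
            node.current_job = job

    def unicast_switch(self, job):
        # Without collectives: one message per node.
        for node in self.nodes:
            self.messages_sent += 1
            node.current_job = job

cluster = Cluster(1024)
cluster.multicast_switch("job-A")
print(cluster.messages_sent)  # -> 1 (versus 1024 unicast messages)
```

The constant-versus-linear message count is what lets a scheduler coordinate time slices across thousands of nodes with negligible overhead.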
Eitan Frachtenberg is a postdoctoral fellow at Los Alamos National Laboratory. His research interests include parallel system software, advanced interconnects, and job scheduling. Dr. Frachtenberg obtained his Ph.D., M.Sc., and B.Sc. at the Hebrew University of Jerusalem, all in computer science. Additional research and contact information can be found at http://www.cs.huji.ac.il/~etcs
SMARTS: Accelerating Microarchitecture Simulation via Statistical Sampling
July 8, 2003
NE43 - Second floor lounge
James C. Hoe
Current software-based microarchitecture simulators are many orders of magnitude slower than the hardware they simulate. Hence, most microarchitecture design studies draw their conclusions from drastically truncated benchmark simulations that are often inaccurate and misleading. This talk presents the Sampling Microarchitecture Simulation (SMARTS) framework as an approach to enable fast and accurate performance measurements of full-length benchmarks. SMARTS accelerates simulation by selectively measuring in detail only an appropriate benchmark subset. SMARTS prescribes a statistically sound procedure for configuring a systematic sampling simulation run to achieve a desired quantifiable confidence in estimates.
Analysis of 41 of the 45 possible SPEC2K benchmark/input combinations shows that CPI and energy per instruction (EPI) can be estimated to within ±3% with 99.7% confidence by measuring fewer than 50 million instructions per benchmark. In practice, inaccuracy in microarchitectural state initialization introduces an additional uncertainty, which we empirically bound to ~2% for the tested benchmarks. Our implementation of SMARTS achieves an actual average error of only 0.64% on CPI and 0.59% on EPI for the tested benchmarks, running with average speedups of 35 and 60 over detailed simulation of 8-way and 16-way out-of-order processors, respectively.
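The statistical machinery here is ordinary systematic sampling with a normal-theory confidence interval. The sketch below (synthetic data, not the SMARTS implementation) shows how a 3-sigma (99.7%) bound on an estimated CPI follows from the sample variance:

```python
# Sketch of the statistical idea behind SMARTS-style sampling:
# measure every k-th sampling unit in detail, then bound the CPI
# estimate with a 3-sigma (99.7%) confidence interval. The data
# below is synthetic; it stands in for detailed measurements.
import math
import random

random.seed(1)
# Pretend per-unit CPI values for a long benchmark run.
population = [1.0 + 0.3 * math.sin(i / 50) + random.gauss(0, 0.1)
              for i in range(100_000)]

def systematic_sample(values, n_samples):
    """Take every k-th unit (systematic sampling)."""
    k = len(values) // n_samples
    return values[::k][:n_samples]

sample = systematic_sample(population, 10_000)
n = len(sample)
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / (n - 1)
# Half-width of the 99.7% CI for the mean, under normality:
half_width = 3 * math.sqrt(var / n)
true_cpi = sum(population) / len(population)
print(f"estimated CPI {mean:.4f} +/- {half_width:.4f} "
      f"(true {true_cpi:.4f})")
```

Inverting the half-width formula is what lets a SMARTS-style framework choose the sample size needed for a target error before the run, based on an estimate of the benchmark's CPI variance.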
James C. Hoe received a PhD degree in EECS from Massachusetts Institute of Technology in 2000 and is currently an Assistant Professor of ECE at Carnegie Mellon University. His research interests cover many aspects of computer architecture and digital hardware design. His present focus is on developing high-level hardware description and synthesis technologies to simplify hardware development. He is also working on innovative processor microarchitectures to address issues in security and reliability. Please see http://www.ece.cmu.edu/~jhoe for more information.