Virtual Machines vs. Containers Revisited - Part 2

16/10/2019 49 min Episode 82

Episode Synopsis


Sponsors

- Circle CI
- Episode on CI/CD with Circle CI

Show Details

In this episode, we cover the following topics:

- Hypervisor implementations (a short guest-side detection sketch in C appears at the end of these show notes)
  - Hyper-V
    - Type 1 hypervisor from Microsoft
    - Architecture
      - Implements isolation of virtual machines in terms of partitions; a partition is the logical unit of isolation in which each guest OS executes
      - Parent partition
        - Virtualization software runs in the parent partition and has direct access to hardware
        - Requires a supported version of Windows Server
        - There must be at least one parent partition
        - The parent partition creates child partitions, which host the guest OSes (done via the Hyper-V "hypercall" API)
        - Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions
      - Child partition
        - Does not have direct access to hardware
        - Has a virtual view of the processor and runs in a Guest Virtual Address space (not necessarily the entire virtual address space)
        - The hypervisor handles interrupts to the processor and redirects them to the respective partition
        - Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition
      - VMBus
        - Logical channel which enables inter-partition communication
  - KVM (Kernel-based Virtual Machine)
    - Virtualization module in the Linux kernel
    - Turns the Linux kernel into a hypervisor
    - Available in mainline Linux since 2007
    - Can run multiple VMs running unmodified Linux or Windows images
    - Leverages hardware virtualization
      - Via CPU virtualization extensions (Intel VT or AMD-V)
      - But also provides paravirtualization support for Linux/FreeBSD/NetBSD/Windows using the VirtIO API
    - Architecture (a minimal /dev/kvm sketch in C appears at the end of these show notes)
      - Kernel component, consisting of:
        - A loadable kernel module, kvm.ko, that provides the core virtualization infrastructure
        - A processor-specific module, kvm-intel.ko or kvm-amd.ko
      - Userspace component
        - QEMU (Quick Emulator)
          - Userland program that does hardware emulation
          - Used by KVM for I/O emulation
  - AWS hypervisor choices & history
    - AWS uses custom hardware for faster EC2 VM performance
    - The original EC2 technology ran a highly customized version of the Xen hypervisor
      - VMs can run using either paravirtualization (PV) or hardware virtual machine (HVM)
        - HVM guests are fully virtualized: VMs on top of the hypervisor are not aware they are sharing hardware with other VMs
      - Memory allocated to guest OSes is scrubbed by the hypervisor when it is de-allocated
      - Only AWS admins have access to the hypervisors
    - AWS found that Xen has many limitations that impede their growth
      - Engineers improved performance by moving parts of the software stack to purpose-built hardware components
    - C3 instance family (2013)
      - Debut of custom chips in Amazon EC2
      - Custom network interface for faster bandwidth and throughput
    - C4 instance family (2015)
      - Offloads network virtualization to custom hardware, with an ASIC optimized for storage services
    - C5 instance family (2017)
      - Project Nitro
        - Traditional hypervisors do everything: protect the physical hardware and BIOS; virtualize the CPU, storage, and networking; and handle management tasks
        - Nitro breaks apart those functions, offloading them to dedicated hardware and software
        - Replaces Xen with a highly optimized KVM hypervisor tightly coupled with an ASIC
        - Very fast VMs approaching the performance of a bare metal server
    - Amazon EC2 bare metal instances (2017)
      - Use Project Nitro

Links

- Xen Project
- Kernel Virtual Machine
- QEMU
- Mastering KVM Virtualization
- Hyper-V
- AWS Nitro System
- AWS re:Invent 2018: Powering Next-Gen EC2 Instances: Deep Dive into the Nitro System
- AWS re:Invent 2017: C5 Instances and the Evolution of Amazon EC2 Virtualization

End Song

Fax - Stages

For a full transcription of this episode, please visit the episode webpage.
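As a companion to the Hyper-V and KVM discussion above (not something walked through in the episode itself), here is a minimal sketch of how a guest can tell which hypervisor it is running under: CPUID leaf 1 advertises a "hypervisor present" bit, and leaf 0x40000000 returns a vendor signature such as "KVMKVMKVM" or "Microsoft Hv". The file name and output format are illustrative; it assumes an x86-64 guest built with GCC or Clang.

```c
/* hv_detect.c - illustrative only: report whether a hypervisor is present
 * and print its CPUID vendor signature. Build: cc -o hv_detect hv_detect.c */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>          /* GCC/Clang helper macros for the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 1: bit 31 of ECX is set when running under a hypervisor. */
    __cpuid(1, eax, ebx, ecx, edx);
    if (!(ecx & (1u << 31))) {
        puts("No hypervisor reported (bare metal, or the bit is hidden).");
        return 0;
    }

    /* CPUID leaf 0x40000000: EBX, ECX, EDX hold a 12-byte vendor signature,
     * e.g. "KVMKVMKVM" for KVM guests or "Microsoft Hv" for Hyper-V partitions. */
    __cpuid(0x40000000, eax, ebx, ecx, edx);

    char sig[13] = {0};
    memcpy(sig,     &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    printf("Hypervisor signature: %s\n", sig);
    return 0;
}
```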
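Similarly, to make the kvm.ko / QEMU split above concrete: a userspace VMM drives the kernel module through ioctls on /dev/kvm. The sketch below only checks the API version and creates an empty VM; a real VMM such as QEMU would go on to add memory regions and vCPUs. It assumes a Linux host with the kvm and kvm-intel/kvm-amd modules loaded, and is a simplified illustration rather than anything from the episode.

```c
/* kvm_hello.c - illustrative only: talk to the KVM kernel module the way
 * QEMU does, via ioctls on /dev/kvm. Build: cc -o kvm_hello kvm_hello.c */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
    /* /dev/kvm is the userspace entry point exposed by kvm.ko. */
    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0) {
        perror("open /dev/kvm");   /* modules not loaded, or no VT-x/AMD-V? */
        return 1;
    }

    /* The stable KVM userspace API version has been 12 for many years. */
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    /* Create an empty virtual machine. On the returned fd a real VMM would
     * issue KVM_SET_USER_MEMORY_REGION and KVM_CREATE_VCPU (and then KVM_RUN
     * on each vCPU fd). */
    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
    if (vmfd < 0) {
        perror("KVM_CREATE_VM");
    } else {
        printf("Created VM (fd %d)\n", vmfd);
        close(vmfd);
    }

    close(kvm);
    return vmfd < 0;
}
```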
We'd love to hear from you! You can reach us at:

- Web: https://mobycast.fm
- Voicemail: 844-818-0993
- Email: [email protected]
- Twitter: https://twitter.com/hashtag/mobycast
