OpenStack in TSSG: From Cloud Computing to Software Defined Infrastructure
Two years on from our last OpenStack talk, we give an update on the uptake of OpenStack in TSSG and explain why we actually run multiple OpenStack deployments. This talk will describe how we moved on from simple virtualisation to Software Defined Infrastructure (SDI), including Software Defined Networking (SDN), Software Defined Storage (SDS), and Orchestration. SDI and OpenStack are a journey, and we will also identify our roadmap.
We will discuss the common architecture we have taken across our service layers and the physical infrastructure: why we use 7, yes 7, physical NICs per server plus an iDRAC port; why we adopted Hyper Converged Infrastructure and a Clos (leaf/spine) network; and how we provide OpenStack services beyond OpenStack itself.
OpenStack Neutron provides a simple network provisioning layer; however, once integrated with our physical networking devices via their APIs, we start to utilise SDN. We will cover the different networking options in OpenStack, such as provider and tenant networks, the different encapsulation types, and why we use different deployment models for different workloads.
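As an illustration of how these options coexist, a Neutron ML2 plugin configuration can enable VLAN-based provider networks (mapped onto the physical fabric) alongside VXLAN tenant overlays. This is a hedged sketch: the physical network name, VLAN range, and VNI range below are placeholders, not our production values.

```ini
# ml2_conf.ini -- illustrative only; names and ranges are placeholders
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
# Provider networks mapped directly onto the physical switching fabric
network_vlan_ranges = physnet1:100:200

[ml2_type_vxlan]
# Tenant (self-service) overlay networks, encapsulated in VXLAN
vni_ranges = 1:1000
```

With a configuration along these lines, administrators can create provider networks pinned to specific VLANs, while tenants self-provision isolated VXLAN networks without touching the physical devices.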
Hyper Converged Infrastructure also brought an SDS layer to the offering, and whilst this is primarily for OpenStack, we have also been able to utilise it with more traditional deployments and directly with VMs. We'll discuss block and object storage, why we have separate solutions for each, and the service levels we can provide for block storage.
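One common way to express block storage service levels is as separate Cinder backends exposed through volume types. The fragment below is a hypothetical sketch assuming an RBD/Ceph-style driver; the backend and pool names are invented for illustration and do not reflect our actual setup.

```ini
# cinder.conf -- illustrative sketch; backend and pool names are hypothetical
[DEFAULT]
enabled_backends = rbd-ssd,rbd-hdd

[rbd-ssd]
# Higher service level: SSD-backed pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-ssd
volume_backend_name = rbd-ssd

[rbd-hdd]
# Lower service level: HDD-backed pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-hdd
volume_backend_name = rbd-hdd
```

Each backend would then be surfaced to users as a volume type whose `volume_backend_name` extra spec matches the section above, letting tenants pick a service level at volume-creation time.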
Standardisation of our network, compute, and storage infrastructure onto identical(ish) devices provided an easy path to systems Orchestration. This cookie-cutter approach, coupled with Ubuntu MAAS and Ansible, enables us to deploy OpenStack from bare metal to fully functional in 90 minutes. This approach was taken for all the SDI systems, which ensures that any member of the team can deploy any part of our infrastructure.
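The cookie-cutter flow can be sketched as a single Ansible play run against an inventory of MAAS-provisioned machines. The role and group names below are hypothetical stand-ins for illustration; our actual playbooks differ.

```yaml
# site.yml -- illustrative sketch; role and group names are hypothetical
- name: Deploy OpenStack onto MAAS-provisioned bare metal
  hosts: openstack_nodes        # inventory generated from MAAS
  become: true
  roles:
    - common                    # identical base configuration on every node
    - networking                # leaf/spine fabric and NIC setup
    - storage                   # hyper-converged SDS layer
    - openstack                 # control plane and compute services
```

Because every node is (near-)identical hardware, the same roles apply everywhere, which is what makes the bare-metal-to-functional deployment repeatable by any team member.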
This high level of automation and large-scale physical infrastructure requires a lot of automated monitoring, performance testing, and penetration and security testing. We'll discuss the tools and techniques we used to achieve this, including the metrics we capture from the system, and the infrastructure required for the orchestration, testing, and metrics systems.
We have deployed multiple testbeds on this SDI, including Pervasive Nation (an IoT network) and an SDN/NFV testbed, as well as OpenStack-on-OpenStack for additional experimentation and training, and multiple research project test and development environments.
Lastly, we'll discuss our current roadmap and the features that we are looking to implement, such as OpenDaylight, Octavia, Ironic, and Cells; ways to reduce the physical NIC requirements while increasing performance and resilience; and why we are looking at moving away from OpenStack for some workloads (while maintaining the SDI approach).