Transforming IT by delivering infrastructure agility at scale

26 August, 2019
Isaac Sacolick
IBM

From time to time, we invite industry thought leaders to share their opinions and insights on current technology trends to the IBM Systems IT Infrastructure blog. The opinions in these posts are their own, and do not necessarily reflect the views of IBM.

If you’re a startup developing a new application to test, iterate and extend, then you have many architecture, hosting and scalability options. Many startup engineers look first to public clouds for infrastructure, since they provide significant flexibility in deploying early pilots and then scaling based on usage and other criteria.

However, what if the application is going to have high and consistent usage on day one? Let’s say you’re deploying a new decision support system that will process sensitive data, connect multiple operational datastores, integrate with newly deployed IoT data hubs, pull data from the ERP, trigger workflows in several SaaS solutions, and then provide analytics to employees through multiple mobile applications. Some of these workloads will have consistent usage, especially during business hours. Others will see spikes in utilization when new data is processed or when predictive analytical models reprocess months or years’ worth of data.

Selecting computing architectures for systems that will experience varying loads, process large volumes of data, interface with multiple connection points, transmit sensitive data, and require high availability on day one is far from trivial. It requires architects to break the full problem down into domains based on platform, expected load, and security requirements, and then to specify the optimal infrastructure for each one.

This can be a challenging exercise for architects. A traditional approach is to perform a full, end-to-end analysis to determine an optimal, low-risk architecture.

However, architects today are rarely given sufficient time and information to do this level of planning. When it comes to IT infrastructure, business leaders expect IT to be as nimble and responsive as startups, and taking months to architect and then deploy infrastructure is not acceptable. They also expect five-nines or greater reliability, and they expect the highest level of encryption to be employed to ensure the privacy of sensitive data.

Enterprise IT departments looking to deliver the same infrastructure agility as startups, but with the more demanding computing, reliability, performance and security requirements of the enterprise, must expand their computing options. Public clouds are suitable for some applications, but private clouds may be a better fit for applications that connect to multiple enterprise data sources and have more consistent workloads. Many enterprises are opting for hybrid multicloud as their most versatile option, enabling them to provision compute resources wherever each workload runs best.
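To make that placement reasoning concrete, here is a minimal Python sketch of how an architecture team might score workload domains against the criteria above. The class, the weights, and the threshold are illustrative assumptions rather than a prescribed method; a real decision would also weigh latency, compliance, data gravity and cost.

```python
from dataclasses import dataclass

@dataclass
class WorkloadDomain:
    name: str
    handles_sensitive_data: bool  # regulated or personally identifiable data
    enterprise_connections: int   # operational datastores, ERP, IoT hubs, SaaS workflows
    consistent_load: bool         # steady business-hours usage vs. bursty reprocessing

def suggest_placement(domain: WorkloadDomain) -> str:
    """Rough heuristic: steady, sensitive, well-connected workloads lean toward
    the private side of a hybrid multicloud; bursty, loosely coupled workloads
    lean toward public cloud capacity."""
    score = 0
    if domain.handles_sensitive_data:
        score += 2
    if domain.consistent_load:
        score += 1
    if domain.enterprise_connections >= 3:
        score += 1
    return "private cloud" if score >= 3 else "public cloud"

domains = [
    WorkloadDomain("decision support analytics", True, 4, True),
    WorkloadDomain("predictive model reprocessing", False, 1, False),
]
for d in domains:
    print(f"{d.name}: {suggest_placement(d)}")
```

In practice the scoring would be far richer, but the point stands: the placement decision is made per workload domain, not once for the whole application.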

Defining infrastructure agility

Having a hybrid multicloud option to deploy to is just the beginning. IT departments should look to get more capabilities directly from the infrastructure they use in private clouds, and they should also require integration and interoperability between their public and private cloud workloads. Here are several examples where selecting the right infrastructure can provide IT with additional capabilities and benefits:

  • Encryption implemented at the infrastructure level, so that all data is encrypted at rest with no development effort and minimal performance impact.
  • Infrastructure that makes it easy to leverage open source technologies, including Linux, Kubernetes, testing tools, and application development frameworks.
  • Configuration and integration that enable development teams to build, test, and support applications.
  • The performance and scalability to handle emerging technology workloads, including machine learning, blockchain, and IoT.
  • Automation and better tools for managing system resources.
  • Visibility into costs as applications scale, tools to optimize them, and methods to report utilization back to business consumers (see the sketch after this list).
  • Infrastructure that optimizes energy consumption, as enterprises are increasingly concerned about energy costs and the carbon footprint of their data centers.
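As a hypothetical illustration of the points on resource management and cost reporting, the following Python sketch aggregates metered utilization into a simple chargeback-style report per business consumer. The record format and unit rates are invented for the example; real figures would come from a metering pipeline and the actual cloud or data center bill.

```python
from collections import defaultdict

# Hypothetical utilization records from a metering pipeline:
# (business_unit, cpu_core_hours, storage_gb_months)
usage_records = [
    ("sales-analytics", 1200, 500),
    ("iot-ingestion",   3400, 2100),
    ("sales-analytics",  800, 300),
]

# Illustrative unit rates, not actual pricing.
CPU_RATE_PER_CORE_HOUR = 0.045
STORAGE_RATE_PER_GB_MONTH = 0.02

# Sum usage and estimated cost per business unit.
totals = defaultdict(lambda: {"cpu_hours": 0, "gb_months": 0, "cost": 0.0})
for unit, cpu_hours, gb_months in usage_records:
    t = totals[unit]
    t["cpu_hours"] += cpu_hours
    t["gb_months"] += gb_months
    t["cost"] += cpu_hours * CPU_RATE_PER_CORE_HOUR + gb_months * STORAGE_RATE_PER_GB_MONTH

for unit, t in sorted(totals.items()):
    print(f"{unit}: {t['cpu_hours']} core-hours, {t['gb_months']} GB-months, "
          f"estimated cost ${t['cost']:.2f}")
```

Reports like this give business consumers a clear view of what their workloads consume and make it easier to justify where optimization effort should go.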

Of all these infrastructure considerations, three stand out as critical for enterprises to remain competitive. First, agility needs to be a core capability across all technology layers, including infrastructure; that means being able to run the full application development lifecycle and enable rapid infrastructure configuration. Second, emerging technologies will increase the volume and velocity of data, so selecting flexible infrastructure with tools to automate system resource management is critical. Third, enterprises are increasingly digital, so they require reliable, high-performance, and secure infrastructure.

IT needs some new, transformative thinking to meet these requirements and expectations. IT must select infrastructure based on more than hardware specifications; it’s equally or even more critical to identify the advanced capabilities that the infrastructure enables. IT should understand performance, scalability, and security requirements so that they can be addressed efficiently and do not impede the enterprise’s ability to experiment and compete reliably. Lastly, IT should establish versatile cloud infrastructures and partner with development teams so that agile and DevOps capabilities become end-to-end business enablers.
