Cost Savings of Cloud in Retail

November 24, 2015 § Leave a comment

I recently came across a Consumer Reports article with five predictions about television sales for Black Friday (http://www.consumerreports.org/cro/lcdtvs/5-predictions-for-black-friday-2015). A few of these predictions caught my interest (especially price matching and pricing transparency). They underline one of the driving forces in retail: how do retailers improve their margins while remaining competitive? We commonly talk about cloud as the platform for bringing innovative capabilities to reach new clients and keep existing ones, but here I'd like to focus on another benefit: the cost savings cloud can bring to retailers.

Let’s look at two dimensions of cost: the initial cost to set up and the ongoing cost of running workloads. Most people understand that initial costs should be lower with a cloud solution (e.g. a public cloud from IBM, Amazon, or Microsoft) than with an investment in traditional data center infrastructure. However, many consider the ongoing costs of cloud to be more expensive. I’d like to explore this angle for a moment.

Looking at the traditional data center, there are four areas to explore related to ongoing costs and how they might change with cloud:

  • Peak traffic and variability
  • Cost of power/electricity usage
  • Infrastructure labor costs
  • Homogeneity vs. heterogeneity

 

Almost all retailers face the challenge of handling peak traffic or huge variations in traffic. Whether it is Black Friday, Valentine’s Day, Mother’s Day, or the rush after a movie lets out, many retailers have to plan for these swings. Many organizations address the challenge with the “high-water mark” principle: allocate enough computing capacity to handle the maximum expected traffic, and keep that capacity available all year long. This approach carries significant costs, since the infrastructure must be kept available whether it is being used or not. Scaling capacity up during peak times and back down afterwards is a classic cloud usage scenario, and it does reduce costs.
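As a rough illustration of the gap between high-water-mark provisioning and elastic scaling, here is a back-of-the-envelope sketch. All the numbers (server counts, peak hours, hourly rate) are hypothetical assumptions, not figures from any real retailer:

```python
# Hypothetical comparison: "high-water mark" provisioning vs. elastic scaling.
# Every number below is an illustrative assumption.

HOURLY_RATE = 0.50          # assumed cost per server-hour
BASELINE_SERVERS = 20       # capacity needed for normal traffic
PEAK_SERVERS = 100          # capacity needed for Black Friday-style peaks
PEAK_HOURS_PER_YEAR = 200   # assumed hours per year spent at peak
HOURS_PER_YEAR = 8760

# High-water mark: pay for peak capacity all year long.
high_water_cost = PEAK_SERVERS * HOURS_PER_YEAR * HOURLY_RATE

# Elastic: pay for baseline most of the year, peak capacity only when needed.
elastic_cost = (BASELINE_SERVERS * (HOURS_PER_YEAR - PEAK_HOURS_PER_YEAR)
                + PEAK_SERVERS * PEAK_HOURS_PER_YEAR) * HOURLY_RATE

print(f"High-water mark: ${high_water_cost:,.0f}/year")
print(f"Elastic scaling: ${elastic_cost:,.0f}/year")
print(f"Savings: {1 - elastic_cost / high_water_cost:.0%}")
```

With these made-up inputs, elastic scaling comes out well under a quarter of the high-water-mark cost; the exact ratio obviously depends on how peaky your traffic really is.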

The cost of power is a metric we sometimes forget. Electricity usage is rising to the point where it is becoming the largest element of TCO for many data centers. One 2013 estimate put IT’s share of the world’s electricity consumption at around 10% (http://www.theregister.co.uk/2013/08/16/it_electricity_use_worse_than_you_thought/). Many organizations use the PUE metric (power usage effectiveness) to measure power efficiency in data centers. Unfortunately, much of the infrastructure in today’s data centers is obsolete, outdated, or unused, so power efficiency tends to be worse (PUE higher) in traditionally owned data centers. Moving to a cloud-based environment shifts the burden of managing PUE to the provider, can reduce your electricity consumption, and can have a positive impact on the environment.
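PUE itself is a simple ratio: total facility energy divided by the energy that actually reaches the IT equipment, so 1.0 is the theoretical ideal and anything above it is overhead (cooling, power distribution, lighting). The kWh figures below are hypothetical, chosen only to show the calculation:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A PUE of 1.0 is ideal; higher means more non-IT overhead.
# The kWh figures below are illustrative assumptions.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

legacy_dc = pue(total_facility_kwh=2_000_000, it_equipment_kwh=1_000_000)
modern_cloud_dc = pue(total_facility_kwh=1_200_000, it_equipment_kwh=1_000_000)

print(f"Legacy data center PUE: {legacy_dc:.2f}")      # every kWh of compute costs another kWh of overhead
print(f"Modern cloud DC PUE:    {modern_cloud_dc:.2f}")  # only 20% overhead
```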

Have you looked at the infrastructure labor costs in your organization? I recently had the opportunity to visit a few cloud data centers. One thing that impressed me was how few people you see. In a traditional data center, a single administrator typically manages around 150 servers. In a cloud data center, the ratio changes to around 1,000 servers per administrator. Automation is clearly a critical factor in optimizing labor and freeing your people for other activities.
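The 150:1 and 1,000:1 ratios above translate directly into headcount for a fleet of a given size; here is the arithmetic for a hypothetical 3,000-server fleet (the fleet size is my own illustrative assumption):

```python
import math

# Administrators needed to run a fleet, using the servers-per-admin ratios
# from the text (150:1 traditional, 1000:1 cloud). Fleet size is hypothetical.

def admins_needed(servers: int, servers_per_admin: int) -> int:
    return math.ceil(servers / servers_per_admin)

FLEET = 3000  # hypothetical server count

traditional = admins_needed(FLEET, 150)    # 20 administrators
cloud = admins_needed(FLEET, 1000)         # 3 administrators
print(f"Traditional: {traditional} admins, cloud: {cloud} admins")
```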

A famous quote attributed to Henry Ford about the Model T is that “any customer can have a car painted any color that he wants, so long as it is black.” Why did he say this? The Model T came only in black because offering more colors would slow the production line and hurt efficiency. There is a similar dynamic in cloud: homogeneity reduces cost. Think about the costs incurred because of differences in the environments, platforms, and applications that run in your data center. Cloud offers great efficiency through standardization, which translates into cost savings. You will need to do the work to move workloads onto common platforms, and detailed workload assessments can help.

My final thought struck me in a shopping mall last week, while making a purchase at a store using old point-of-sale devices. For retailers with physical stores scattered across many locations, the cost of managing the IT infrastructure in each store represents another opportunity to use cloud to reduce costs while giving buyers a more delightful client experience.

Dev/Test Clouds – Part 1

October 19, 2015 § Leave a comment

Dev/Test clouds are where many organizations start with Cloud.  I’ve seen Cloud provide some big benefits for organizations wanting to improve their software development and testing practices (especially the ability to provision environments quickly for testing scenarios).

In working with clients, we’ve gotten into some interesting discussions about the key differences between Dev/Test clouds and production clouds.  First, I believe Dev/Test should mirror production as closely as possible.  Sometimes that is possible and sometimes it is not.  These are the common variances I’ve run into:

  1. Production clouds have higher up-time targets (think the five 9’s of availability).
  2. Data in production clouds is usually more sensitive.  We typically use obfuscation and data masking to eliminate data privacy issues in Dev/Test clouds.
  3. Dev/Test clouds may not have all the scale-out capacity or support all the HA/DR scenarios that production clouds do.
  4. Dev/Test clouds should use service virtualization where it makes sense (e.g. if you are waiting for off-peak hours to test a CICS transaction on the mainframe in your Dev/Test environment, explore how service virtualization could improve that).
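Point 4 can be illustrated with a minimal service-virtualization stub: instead of waiting for an off-peak window on the real mainframe, tests hit a local stand-in that mimics the transaction’s interface. Everything below (the transaction name, account numbers, and response fields) is a hypothetical sketch, not a real CICS API:

```python
# Minimal sketch of service virtualization: a local stub that mimics a backend
# transaction so tests don't depend on the real (shared, scheduled) system.
# Transaction names and response fields here are hypothetical.

CANNED_RESPONSES = {
    ("INQ1", "12345"): {"status": "OK", "balance": 250.75},
    ("INQ1", "99999"): {"status": "NOT_FOUND"},
}

def virtual_transaction(txn_id: str, account: str) -> dict:
    """Stand-in for the real backend call, returning canned responses."""
    return CANNED_RESPONSES.get((txn_id, account), {"status": "ERROR"})

# Tests can now run at any time, without an off-peak mainframe window.
assert virtual_transaction("INQ1", "12345")["balance"] == 250.75
assert virtual_transaction("INQ1", "00000")["status"] == "ERROR"
```

Commercial service-virtualization tools do far more (recording real traffic, simulating latency), but the core idea is exactly this substitution.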

Once you’ve set up your Dev/Test cloud, these are the gotchas to watch out for:

  1. Dependent systems being unavailable due to production schedules, security, or team contention
  2. Improper lifecycle management (e.g. teardown) across the complete application lifecycle: Agile teams with many iterations moving through DEV, QA, and production can lead to virtualized application sprawl
  3. Unpredictable demand spikes
  4. Test data validity, obfuscation, and data movement (setup/teardown, etc.)
  5. Usage fees when testing 3rd-party services

In my next blog, I’ll talk about approaches to handle these gotchas!

Getting Started

October 19, 2015 § Leave a comment

I’ve never been part of a more “disruptive” movement in my 25 years in the industry than Cloud.  (I’m using the term “disruptive” in a mostly positive way.)  I’ve already started to see the good, the bad, and sometimes the ugly as I work with clients, which I will share (with names hidden to protect the innocent!).

I’m looking forward to learning from my colleagues about their experiences as well.


Metrics that Matter …for Cloud

July 9, 2015 § Leave a comment

Years ago, I had the opportunity of leading one of our Project Management Offices (PMO) in IBM. For anyone that has worked with project managers, one quickly finds out how much they like metrics. In fact, it is more than just project management. We all like metrics and most every leader in IT at some level relies on metrics to make decisions and gauge how they are doing.

I’ve always had a fascination (obsession perhaps) with metrics (ask me sometime about my fantasy football league if you want to see my obsession in action). One of the challenges we face with metrics is overload: too many metrics, and not all of them important. With all the metrics out there, I’ve worked hard to “reduce the noise” and find the metrics that really matter in my job.

Recently, several conversations with clients about their cloud adoption strategies led me to an epiphany about metrics: many of the key metrics I’ve come to value over the years are 100% applicable to cloud as well.

The classic way we usually categorize metrics is using the “Project Management Triangle” as shown below:

[Figure: the Project Management Triangle, with cost, time, and quality at its corners]

The classic thought behind the Project Management Triangle is that we need to optimize a project across three opposing drivers: cost, time, and quality. The common belief is that you can optimize for two of these but not all three (e.g. I can optimize on cost and time, but at the sacrifice of quality).

I’ve found the classic Project Management Triangle a very useful way to categorize and discuss metrics related to Cloud as well. In several client conversations I’ve had this year, we have used Cost (Budget), Time (Speed), and Quality (Reliability) as the categories to group the metrics we want to measure. The metrics that I’ve found “matter” in these cloud discussions are shown below:

  • Cost / Budget – Innovation Ratio: How much are we spending on reducing technical debt vs. adding new capabilities?
  • Cost / Budget – Budget underage/overage: How good are we at predicting costs and estimating?
  • Time / Speed – On-time Delivery %: # of projects delivered on time / total # of projects
  • Time / Speed – Velocity: What is the velocity of our team, and what is our historical velocity?
  • Time / Speed – Turnaround Time for Changes: How long does it take to handle a change request for a new feature or to fix a defect/issue?
  • Quality / Reliability – Uptime/MTBF: What is our mean time between failures?
  • Quality / Reliability – Test Escapes: How many defects were not found during testing?
  • Quality / Reliability – System Utilization: What is our resource utilization under stress (performance, # of users, etc.)?

As I explored the root causes behind some of these metrics with clients, it was interesting to map how Cloud would address the issues driving the poor numbers. In the end, these were the metrics that “mattered” in my cloud discussions.

The other insight I had regarding Cloud and metrics: can we take the classic Project Management Triangle and break the assumptions around optimization? What if we could optimize across all three dimensions instead of two? I believe Cloud is a game changer and we can indeed optimize across cost, time, and quality! I’ll share some ideas on how this might be possible in my next blog.

SAP and the Cloud

June 25, 2015 § Leave a comment

I spent the day in Ann Arbor, Michigan with about 20 clients.  The topic of the day was SAP and the Cloud, and we had a lot of interesting discussions around “why IBM”.

After our discussions and deep dives, you could see the lightbulb coming on in the room as to why SAP and IBM are partnering on the HEC (HANA Enterprise Cloud).  In the end, whether it is HANA or traditional SAP workloads, having a high-performance, non-metered network backbone, the ability to do bare-metal cloud, and the experience IBM brings makes the story come together.

Another interesting observation was around how to get started.  Some folks started by moving isolated SAP workloads to Cloud with good success, while others were looking at a more big-bang approach.  In the end, I am one who prefers the crawl-walk-run approach.
