Posts Tagged cloud computing

Leslie Muller Chats with Mike Laverick of RTFM Education


Leslie Muller, Founder & CTO of DynamicOps, sat down with Mike Laverick of RTFM Education for a chat about cloud computing, cultural challenges, technical needs and how DynamicOps is transforming the datacenter.

Sit back, relax, and join us for some Vendorwag – http://bit.ly/g28xGz


Part 3: Maintaining Control of Your Cloud


by: Rich Bourdeau, VP Product Marketing, DynamicOps

Here we are at #3 in our series. Let’s take a quick review of where we’ve been:

  1. Automated self-service automates the process to provision and manage IT resources. 
  2. Secure multi-tenancy allows you to reserve resources for different groups, assuring that only authorized users will be able to create, reconfigure or decommission machines from resources allocated to that group. 

The next big challenge in deploying on-demand private cloud services is being able to control the amount of resources, the process used, and the management functions that can be performed for each type of machine or application. Pretty simple, right? Not really. But it can be, with some homework and insight.

Moving at the speed of virtualization 

The good thing about virtualization is that it is quicker and easier to provision virtual machines than physical machines. The bad thing about virtualization is that virtual machines can be provisioned so quickly that they typically bypass the controls that accompanied the lengthy procurement and provisioning process for physical machines. Without appropriate operational governance and control, it is not uncommon for companies to waste 10-20% of their resources on unauthorized and over-provisioned machines. To add to the mix, many virtualization management software solutions on the market do not enforce controls to ensure that machines are provisioned according to the organization’s best practices. This leads to non-compliant machines with outdated software versions that expose companies to unplanned downtime and security risks. Your management software should help control and contain, not create additional layers of challenge.

Limiting resource consumption

Cloud automation software must have policies that control the quantity and type of resources a user is allowed to consume during the provisioning of a machine or application. Period. The administrator must be able to specify not only how much CPU, memory, storage, and network a given user or application will receive, but also the tier (service level) and pool that the resources will be allocated from. Unless you want to maintain a large number of service blueprints, you will want to set up service blueprints with variable amounts of resources, plus approval thresholds and the ability to customize the approval workflow. Getting better control over resource consumption by delivering the right-size machine at the right service level can translate to significant capital savings.
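
To make this concrete, here is a minimal Python sketch of a blueprint with variable sizing and approval thresholds. The class, field names, and numbers are illustrative assumptions for this post, not any vendor’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class ServiceBlueprint:
    """Hypothetical service blueprint: one machine spec, variable sizing."""
    name: str
    tier: str                # service level, e.g. "Tier 3"
    pool: str                # resource pool the allocation draws from
    min_cpu: int
    max_cpu: int
    min_memory_gb: int
    max_memory_gb: int
    approval_cpu: int        # requests above these thresholds
    approval_memory_gb: int  # route through the approval workflow

def needs_approval(bp: ServiceBlueprint, cpu: int, memory_gb: int) -> bool:
    """Return True when a request must be signed off by an approver."""
    if not (bp.min_cpu <= cpu <= bp.max_cpu):
        raise ValueError("requested CPU outside blueprint range")
    if not (bp.min_memory_gb <= memory_gb <= bp.max_memory_gb):
        raise ValueError("requested memory outside blueprint range")
    return cpu > bp.approval_cpu or memory_gb > bp.approval_memory_gb

# One blueprint covers both small and large requests; only the large
# one routes to an approver, so you avoid a blueprint per machine size.
dev_server = ServiceBlueprint("dev-linux", "Tier 3", "dev-pool",
                              1, 8, 2, 32,
                              approval_cpu=4, approval_memory_gb=16)
print(needs_approval(dev_server, cpu=2, memory_gb=4))   # False: auto-approved
print(needs_approval(dev_server, cpu=8, memory_gb=32))  # True: needs sign-off
```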

Enforcing Best Practices 

The advantage of automation is better control and enforcement of your best practices, ensuring that every machine is configured using the same process every time, thereby eliminating the potential for mistakes or intentional circumvention of company policies. These policies include things like approval workflows, build parameters, customization settings, lease duration, archival policies, and the management functions a given user will be allowed to perform against the machine after it has been built.

Controls should be granular

It is not sufficient to specify policies that apply to all users, all machines, or even all the users in a business group. If you think about it, you will quickly realize that different types of machines need different processes and build parameters. These operational controls need to be granular enough to accommodate what is common versus what is different, not only between types of machines but also between users or groups of users. For example: you may need to provision desktops for both developers and office users. While both need common policies that control how Windows is configured and connected to the network, they can differ completely in the amount of resources they are allocated, as well as in the management functions developers are allowed to perform compared to office workers.
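
Here is a rough Python sketch of that kind of layering: a common base policy for all desktops, with per-group overrides on top. The policy fields and group names are invented for illustration:

```python
# Common settings shared by every desktop, plus per-group overrides.
base_desktop_policy = {
    "os_image": "windows7-corp",
    "network": "corp-vlan",
    "domain_join": True,
    "cpu": 1,
    "memory_gb": 2,
    "allowed_actions": ["power_on", "power_off"],
}

group_overrides = {
    "developers": {
        "cpu": 4,
        "memory_gb": 8,
        "allowed_actions": ["power_on", "power_off",
                            "snapshot", "reconfigure"],
    },
    "office_users": {},  # the common defaults already fit office workers
}

def effective_policy(group: str) -> dict:
    """Layer a group's overrides on top of the common base policy."""
    return {**base_desktop_policy, **group_overrides.get(group, {})}

print(effective_policy("developers")["allowed_actions"])  # incl. snapshot
print(effective_policy("office_users")["cpu"])            # 1, from the base
```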

Enforce governance with policies, not people

The key to automated self-service is to enforce operational governance with policies, not people. Without the appropriate controls in place, you will just be trading operational savings for increased capital costs. Policies will keep it all aligned with your corporate goals.

Maintaining control is easier than you think. Just stay true to these simple things:

  1. Analyze your process and make sure your vendor addresses all levels
  2. Keep your fingers on the knobs that control consumption
  3. Best practices are called best for a reason – stick to them and make sure your vendor falls in line
  4. Know the needs of all business groups and make sure the solution will scale up AND down to accommodate

Now that we have the control issue covered, join me next time when we look at Deployment Simplicity, the next private cloud management must-have.


Part 2: How to Share and Play Well with Others in a Private Cloud


by: Richard Bourdeau, VP Product Marketing, DynamicOps

The common infrastructure. What a blessing. What a curse.

Here is a familiar scenario for you… a well-mannered IT administrator goes to provision resources for a mission-critical application, only to find that said resources have already been consumed by someone in a different group. To make matters worse, the other, less important function is over-provisioned. A handy automated self-service product would have helped this guy out, you say. Not necessarily. Many of today’s typical automation tools just treat your shared infrastructure as a single pool of resources with little or no control over who can consume them. And don’t assume that manual approvals in the provisioning process solve this problem: in a large environment, it is too easy to lose track of who can consume which resources.

It’s this daily occurrence that makes the ability to deliver secure multi-tenancy one of the most important aspects of cloud computing, if not the most important. By allowing multiple groups or tenants to share a common physical infrastructure, companies can achieve better resource utilization and improved business agility. By dynamically reallocating resources between groups to address shifting workloads, companies can use their limited IT resources more effectively.

The challenge is to share in such a way that one group does not have access, or even visibility, to the resources that have been allocated to others. Without a secure method of ensuring multi-tenancy, a cloud computing strategy cannot succeed.

Secure multi-tenancy is one of those buzzwords thrown about by most cloud automation vendors. Sure, many of them can do it. But at what scale? With what level of control and capacity? Before selecting a vendor, make sure their capabilities to securely share a common IT infrastructure meet both your current and future needs.

Multiple Grouping Levels
Make sure that your cloud management tool has enough levels of grouping to support both your organizational constructs and the service tiers you want to provide to those business groups going forward.

For example: you don’t have to be a large company with multiple divisions, each with many departments, to need multiple levels of grouping. Maybe your company is not that big, but you want to separate desktop operations from server operations from development and test. In addition, you may want to sub-divide the resources allocated to a group into several service tiers (i.e. Tier 1, Tier 2, and Tier 3). Most companies will need a minimum of 2-3 levels of resource grouping.
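
As a hedged illustration of two-level grouping, here is a small Python sketch in which capacity is reserved per business group and per tier; the group names, tiers, and numbers are all made up for the example:

```python
# Hypothetical two-level grouping: business groups, each subdivided
# into service tiers, with capacity reserved per group and tier.
reservations = {
    "desktop-ops": {"Tier 1": {"memory_gb": 256}, "Tier 2": {"memory_gb": 512}},
    "server-ops":  {"Tier 1": {"memory_gb": 1024}},
    "dev-test":    {"Tier 3": {"memory_gb": 2048}},
}

# Memory already consumed, keyed by (group, tier).
allocated = {("dev-test", "Tier 3"): 2000}

def can_allocate(group: str, tier: str, memory_gb: int) -> bool:
    """A group may only draw from its own tier's reservation."""
    try:
        capacity = reservations[group][tier]["memory_gb"]
    except KeyError:
        return False  # no reservation: the group can't even see this capacity
    used = allocated.get((group, tier), 0)
    return used + memory_gb <= capacity

print(can_allocate("dev-test", "Tier 3", 64))    # False: reservation exhausted
print(can_allocate("dev-test", "Tier 1", 64))    # False: not this group's tier
print(can_allocate("server-ops", "Tier 1", 64))  # True: fits the reservation
```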

Think Strategically, Act Tactically
Most companies start their private cloud deployments with a single group or in a lab. This is certainly a viable strategy for gaining experience with new technologies and processes before expanding deployment to multiple groups. The mistake many companies make is selecting a cloud automation platform that only supports the requirements of that initial group. One of our customers has been so successful with their initial deployment that they not only expanded it to other groups within the company, but are in the process of expanding it to other divisions, creating a community cloud across multiple businesses of this large multinational company. The process is going smoothly for them because they knew to anticipate future needs and maximize their technology investment.

As you look to implement a cloud infrastructure, remember the story of our well-mannered IT administrator: it can happen in the cloud too. The trick is knowing how to avoid it.

Go in knowing these things about your business:

  • What do we need now?
  • What will we need in the future?
  • Can the tech support the transition in scale?
  • What kind of provisions are made to protect allocated resources in shared pools?
  • Ask and ask again, will it scale?

Now on to governance control – who can have what, and how much. It can be easier and more effective than you think. Stay tuned!

In the meantime tell us how you maintain secure multi-tenancy. How do you do it?


Part 1: Automating Self Service for the Private Cloud


by: Richard Bourdeau, VP Product Marketing, DynamicOps

As promised, so begins our series on the must-haves for your private cloud deployment and what to look for when choosing your technology providers and partners. You will be in it for the long haul with whomever you choose, so it is crucial that they can do what they promise and that you know what to ask for.

There are many vendors that offer automated self-service for cloud deployment. However, when you start to look at what automated self-service means, the implementations vary greatly. Your definition of automation may not be the vendor’s definition, and you will soon see gaps between where your automation needs begin and where theirs end. At DynamicOps, our deployment experience has shown that most vendors provide one-size-fits-all automation that does not fully automate the entire process, or that cannot be modified to accommodate differences in the types of services being delivered or the methodologies used by different groups. Other vendors provide more flexible workflow automation but do little to actually automate the tasks that need to be performed. It’s a frustrating experience. You think you have done your homework, your strategy is in place, your vendor is selected, and before you know it production is stalled as you go through the oh-so-manual task of implementing an effective automation solution.

Before you select automation software to help deploy your private cloud, make sure that it has the functionality to help you with these most common automation challenges.

1.   Automate the entire process
Automated delivery needs to incorporate the configuration of the IT resources as well as any pre- or post-configuration steps required to make the compute resource usable for the requestor or to complete the “paperwork” needed to monitor and track the resource throughout its life. Addressing the entire process is a lot to ask, so many private cloud management solutions address only part of it, focusing on configuring the machine rather than the end-to-end process.

Partial automation, though better than a completely manual process, will still not allow companies to achieve the service-level response times and efficiencies they desire. The best way to avoid this trap is to map out your process, soup to nuts. Note where compromises cannot be made on automation, and understand how the new zero-touch approach will affect your processes as a whole. The right vendor will address your needs and bring additional suggestions and functionality to the table.
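
For illustration, here is a minimal Python sketch of what “automating the entire process” might look like: the machine build is just one step in a pipeline that also handles the pre-configuration steps and the “paperwork.” All of the step names and fields are hypothetical stand-ins:

```python
# Hypothetical end-to-end provisioning pipeline: the paperwork is
# automated alongside the machine build, not left as manual tickets.
def reserve_ip(req):
    req["ip"] = "10.0.0.42"           # stand-in for an IPAM call
def register_in_cmdb(req):
    req["cmdb_id"] = f"CI-{req['name']}"  # tracked from birth
def build_vm(req):
    req["status"] = "built"           # the part most tools stop at
def enroll_monitoring(req):
    req["monitored"] = True           # post-configuration step
def notify_requestor(req):
    print(f"{req['name']} ready at {req['ip']} (CMDB {req['cmdb_id']})")

PIPELINE = [reserve_ip, register_in_cmdb, build_vm,
            enroll_monitoring, notify_requestor]

def provision(req: dict) -> None:
    """Run every pre-build, build, and post-build step, start to finish."""
    for step in PIPELINE:
        step(req)  # a real tool would checkpoint and roll back on failure

provision({"name": "web-01"})
```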

2.   Automate the task not just the process
It seems so obvious, doesn’t it? But sadly, many service desk solutions that claim to automate the entire process really only automate the workflow that links a bunch of manual configuration steps. In order to deliver compute resources to their consumers efficiently and reduce service delivery times, automation needs to orchestrate the configuration of both virtual and physical CPU, memory, storage, and network resources. Ask yourself: can the solution use pre-configured permissions so that resources are allocated with little to no manual intervention?

3.   Different processes for different services, groups, or users
Every IT administrator dreams of the day when there is one process that addresses every business group and there is a clear view from Point A to Point B. You and I both know that the chances of this happening are even less likely than pigs flying. It is very common for different groups within the same company to use different processes to manage their IT resources. Production systems, for example, typically require more approvals and follow different best practices than systems created for development and testing. Then, to make life even more interesting, within the same group different IT services can have different components which necessitate different deployment processes. And we are not done yet! Every user within that group can have different access needs, which limit both the services they can request and the management functions they can perform against those compute resources.

I am exhausted just thinking about it. Bottom line – automation tools that provide a one-size-fits-all approach will not provide enough flexibility as implementations grow beyond typical lab deployments.

4.  Delegated Self-Service
Even with the appropriate governance and controls in place, some companies don’t feel comfortable jumping to a full self-service model where end users directly provision and manage their own IT resources. Instead, these companies prefer a delegated self-service model, where an administrator provisions on behalf of the user. For this to work, the software needs to track the actual owner, not just the person who provisioned the machine. Ownership tracking is key to successful lifecycle management. Look at it this way: it’s no use knowing who made the car when you just want to know who put 100k miles on it.

So be sure to look for automation tools that support an administrator-initiated provisioning model that tracks the owner/user. You will thank me later.
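
As a rough sketch of the ownership-tracking idea (hypothetical field names in Python, not any specific product’s schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Machine:
    """Provisioning record: the requester and the owner are separate."""
    name: str
    provisioned_by: str  # the admin who ran the provisioning job
    owner: str           # the person who actually uses the machine
    created: datetime

def machines_owned_by(inventory, user):
    """Lifecycle actions (lease renewal, reclamation, chargeback)
    key off the owner, not whoever happened to provision the box."""
    return [m for m in inventory if m.owner == user]

inventory = [
    Machine("dev-042", provisioned_by="admin.jones", owner="alice",
            created=datetime(2011, 3, 1)),
    Machine("dev-043", provisioned_by="admin.jones", owner="bob",
            created=datetime(2011, 3, 2)),
]
print([m.name for m in machines_owned_by(inventory, "alice")])  # ['dev-042']
```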

I have only scratched the surface on some of the significant differences you should consider when initiating automated self-service. Hopefully I have given you a sense about what to look for.

But don’t think that automation alone creates a private cloud. On the contrary, it is just one part of a successful cloud strategy. But fear not, we will be reviewing more. Next we will look at some of the challenges of sharing a common physical infrastructure and what a secure multi-tenant environment will mean to you.


The Must-Haves and Must-Knows for a Private Cloud Deployment – A Series


by: Rich Bourdeau, VP Product Management & Marketing, DynamicOps

Is IT ready for automated self-service?

Over 10 years ago, back in my EMC days, an attendee at a Customer Council told me that we needed to make management much simpler. The number of things that he was managing was growing, technologies were becoming more complex, and his administrators had to know about more technologies. There was no way for his staff to specialize and become an expert in any specific area. He wanted more automation. He wanted the process to be simple so that his clients, system administrators, and DBAs could self-manage. But what he was saying was heresy to his peers. They chided him – Why would you want to do that? It will be anarchy; you will lose control! Looking back I see this guy for what he really was – a visionary.

The world has changed

In all aspects of our lives, we increasingly interact with automated systems that provide instant access to services that once required manual processing and hours or days to complete. For example, banking and travel were specialized services in which we relied upon other individuals to grant us access and control. Today, we book flights, hotels, and rental cars online without ever talking to an agent or even handling a ticket. Banking pushed our control even further: ATMs and online banking give us instant access to our assets, enabling us to make real-time decisions.

Today, IT is far more willing to provide automated self-service of IT resources than it was just 2-3 years ago. Large service providers like Amazon, with its Elastic Compute Cloud (EC2) infrastructure service, have demonstrated the cost-effectiveness and near-instant access of on-demand IT services. IT consumers are demanding quicker access to desktops, servers, and applications. If they don’t get them fast enough from their IT department, they have shown that they will use alternative options. This is where IT really loses control.

Is Automated Self-Service ready for IT?

In order to improve the IT service delivery experience, IT is embracing automated self-service. According to Gartner, IDC and others, the growth in private cloud management software will outpace the growth in core virtualization software over the next 5 years. In order to meet this expected demand, there are probably over 50 vendors that profess to have automated self-service management of virtual or cloud computing. These include offerings from the leading virtualization vendors (VMware, Microsoft, Citrix, Red Hat and others), the Big 4 Management vendors (BMC, CA, HP, and IBM) and emerging vendors like DynamicOps.

Automated cloud management software accelerates service delivery times while reducing operational costs and optimizing capital investment through more effective use of a shared physical infrastructure. This is an attractive proposition for any enterprise. However, without efficient and effective management tools, companies may not be able to achieve the savings they originally envisioned.

Since being spun out of Credit Suisse in 2008, DynamicOps has spent the last three years helping enterprise companies deploy on-demand IT services, or private clouds. Over the coming weeks I will share some of our operational expertise and real-world deployment experience. By presenting some of the challenges you will likely face and discussing the product capabilities you should look for, I hope to help you accelerate the time to value of your private cloud deployment.

So stay tuned. Your cloud will never be the same!


Are Virtual Desktops Cost Prohibitive?


by: Rich Bourdeau, VP Product Management & Marketing, DynamicOps

Virtual desktops may never generate the level of cost savings that fueled initial virtual server deployments, but are they really 11% more costly than traditional desktops, as the Microsoft-commissioned analysis (VDI TCO Analysis for Office Worker Environments) suggests?

First, let me say that the methodology used, as well as the depth and breadth of the analysis, is truly impressive. While I agree with some of the conclusions, there appear to be a few flaws in the assumptions used, and there are a few observations I would like to add.

Increased Software Costs

The paper suggests that while VDI reduces hardware costs by 32%, it increases software costs by 64%, cancelling any savings. The research did point out that software costs varied widely based on the components used and the level of discounting the customer received from their software vendor. The flaw, in my opinion, is that using an average configuration at MSRP pricing probably skews the analysis toward higher software costs. Any company with enough users to consider deploying VDI probably does not pay MSRP for its software. The software stacks used to deploy virtual desktops also vary widely. As with any IT implementation, the cost to deliver the service will vary significantly.

Automation Key to Deploying at Scale

This year DynamicOps worked with a number of companies struggling to deploy virtual desktops at scale. Production pilots that averaged in the hundreds of machines stalled when these companies tried to scale to thousands and tens of thousands of machines. Manual processes that worked in the pilot phase could not hold up under the weight of large-scale production deployments, leading to expanding backlogs and service-level response times that stretched to days and weeks.

The companies that we worked with all said that automating the desktop service delivery process helped reduce their operational costs (many by up to 80%). Automation also improved their service delivery times from days and weeks down to hours and minutes, allowing them to significantly reduce their backlogs and accelerate their stalled desktop deployments.

Virtual Desktops – more than cost savings

TCO analysis in general is highly subjective and can easily be biased toward whatever conclusion the author wants to prove. By slightly adjusting the software costs down, or by factoring in improved operational efficiency, I am sure that I could make this TCO analysis go from VDI costing slightly more than a traditional desktop to VDI costing less.

As VDI technologies mature, costs will be driven down further. Most virtual desktop deployments are driven by a number of factors, including cost savings, deployment flexibility, agility, and security. From our vantage point, cost is no longer an inhibitor for companies looking to realize the other benefits of desktop virtualization. As with any technology change, the first implementations require a new learning curve. If these implementations are close to cost-neutral, the later implementations and expansions, particularly as they take advantage of economies of scale and management automation, will experience even greater savings over time. Let DynamicOps show you how we have helped large enterprise companies deploy tens of thousands of production virtual desktops.
