Leslie Muller Chats with Mike Laverick of RTFM Education


Leslie Muller, Founder & CTO of DynamicOps, sat down with Mike Laverick of RTFM Education for a chat about cloud computing, cultural challenges, technical needs and how DynamicOps is transforming the datacenter.

Sit back, relax and join us for some Vendorwag – http://bit.ly/g28xGz


Migrate to a Private Cloud, Not a Virtual Datacenter


“A private cloud can be a very attractive solution, but a bad implementation can lead to ugly results”

That’s what Brian Proffit of Internet.com’s Enterprise Networking Planet has to say in his latest piece – Migrate to a Private Cloud, Not a Virtual Datacenter. A great piece – and not just because it references our own words of wisdom here on DynamicTalks. 

Take a look here and then let us know your own thoughts.


Must-Have Series Part 4: Private Cloud Deployment Simplicity


by: Richard Bourdeau, VP Marketing, DynamicOps

Time to value is a critical success factor in winning management buy-in and deploying a private cloud infrastructure. If the cost to deploy and maintain a private cloud manager is too high, it not only delays the breakeven point, it can also dramatically reduce the overall return on your private cloud investment. Sadly, it could also mean that you never get your cloud off the ground.

Agents add complexity

Know this: initial installation and deployment of the standard configuration should take no more than a few hours if the prerequisites are in place and the deployment is planned correctly. If the solution requires deploying agents on physical hosts or virtual machines, that complicates not only the deployment but also the ongoing maintenance of your environment.

Making pieces & parts work together

Another factor that contributes to complexity is the number of products required to deploy the solution. Many of today’s offerings are collections of loosely coupled products acquired over time, with different architectures and databases. You know the story: a big company acquires a little company, and that niche solution becomes a piece of the big solution you just cannot deploy without. Making them work together smoothly can be like putting together a child’s toy on Christmas Eve. Some assembly required? More like an all-nighter with the wrong size allen wrench! I’m not saying that you will find one tool that provides for all your private cloud management needs. No one tool can do everything. However, the tool you use to automate private cloud service management should be based on a common foundation that allows for the rapid integration of other components, regardless of whether they are offered by your management vendor.

Working with what you already have

Deployment simplicity is important for companies that want standard off-the-shelf capabilities, as well as for companies that want custom integration with their existing ecosystem of tools and processes. If armies of people are required to deliver a custom solution, that not only significantly increases the cost, it also adds risk and delays time to value.

In upcoming entries in this series we will discuss how multi-vendor integration and rapid ecosystem integration let you keep working with your prior investments while keeping deployment simple.

For more information on this topic and others in the must-have series, download the white paper from DynamicOps – bit.ly/eS2HJe. But be sure to check back here as we take deeper dives and open it all up for discussion.


Part 3: Maintaining Control of Your Cloud


by: Rich Bourdeau, VP Product Marketing, DynamicOps

Here we are at #3 in our series. Let’s take a quick review of where we’ve been:

  1. Automated self-service automates the process of provisioning and managing IT resources.
  2. Secure multi-tenancy allows you to reserve resources for different groups, ensuring that only authorized users are able to create, reconfigure or decommission machines from the resources allocated to that group.

The next big challenge in deploying on-demand private cloud services is being able to control the amount of resources, the process used, and the management functions that can be performed for each type of machine or application. Sounds pretty simple. Not really. But it can be, with some homework and insight.

Moving at the speed of virtualization 

The good thing about virtualization is that it is quicker and easier to provision virtual machines than physical machines. The bad thing about virtualization is that virtual machines can be provisioned much more quickly, typically without all the controls that accompanied the lengthy procurement and provisioning process for physical machines. Without appropriate operational governance and control, it is not uncommon for companies to waste 10-20% of their resources on unauthorized and over-provisioned machines. To add to the mix, many virtualization management software solutions on the market do not enforce the controls needed to ensure that machines are provisioned according to the organization’s best practices. This leads to non-compliant machines with outdated software versions that expose companies to unplanned downtime and security risks. Your management software should help control and contain, not create additional layers of challenge.

Limiting resource consumption

Cloud automation software must have policies that control the quantity and type of resources a user is allowed to consume during the provisioning of a machine or application. Period. The administrator must be able to specify not only how much CPU, memory, storage and network resources a given user or application will receive, but also the tier (service level) and pool the resources will be allocated from. Unless you want to maintain a large number of service blueprints, you will want to be able to set up service blueprints with variable amounts of resources, but with approval thresholds and the ability to customize the approval workflow. Getting better control over resource consumption by delivering the right size machine at the right service level can translate to significant capital savings.
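
To make this concrete, here is a minimal sketch in Python (the names and numbers are hypothetical, not DynamicOps’ actual blueprint model) of how one service blueprint with variable resources and an approval threshold might be represented and checked:

```python
from dataclasses import dataclass

@dataclass
class Blueprint:
    """Hypothetical service blueprint: a range of resources plus an approval threshold."""
    name: str
    tier: str                      # service level the resources are allocated from
    cpu_range: tuple               # (min, max) vCPUs a requester may choose
    mem_gb_range: tuple            # (min, max) memory in GB
    approval_over_cpu: int         # requests above this vCPU count require approval

def needs_approval(bp: Blueprint, cpu: int, mem_gb: int) -> bool:
    """Reject out-of-range requests outright; flag large ones for the approval workflow."""
    if not bp.cpu_range[0] <= cpu <= bp.cpu_range[1]:
        raise ValueError("requested CPU is outside the blueprint's allowed range")
    if not bp.mem_gb_range[0] <= mem_gb <= bp.mem_gb_range[1]:
        raise ValueError("requested memory is outside the blueprint's allowed range")
    return cpu > bp.approval_over_cpu

# One blueprint covers a range of request sizes instead of one blueprint per size.
dev_vm = Blueprint("dev-linux-vm", "Tier 3", (1, 8), (2, 32), approval_over_cpu=4)
print(needs_approval(dev_vm, cpu=2, mem_gb=4))    # False: small request, no approval needed
print(needs_approval(dev_vm, cpu=6, mem_gb=16))   # True: routed to the approval workflow
```

The point is that a single blueprint can cover many request sizes, with only the larger requests routed through approval.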

Enforcing Best Practices 

The advantage of automation is that you have better control and enforcement of your best practices, ensuring that every machine is configured using the same process every time, thereby eliminating the potential for mistakes or the intentional circumvention of company policies. These policies include things like approval workflows, build parameters, customization settings, lease duration, archival policies, and the management functions a given user will be allowed to perform against the machine after it has been built.

Controls should be granular

It is not sufficient to be able to specify policies that apply to all users, all machines, or even all the users in a business group. If you think about it, you will quickly realize that different types of machines need different processes and build parameters. These operational controls need to be granular enough to accommodate what is common versus what is different, not only between different types of machines but also between different users or groups of users. For example, you may need to provision desktops for both developers and office users. While both need common policies that control how Windows is configured, connected to the network, and so on, they can be completely different in the amount of resources they are allocated, as well as in the management functions that developers are allowed to perform compared to office workers.
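
To illustrate the idea of layered, granular policies (using made-up names and values, not a product schema), a shared base desktop policy with per-group overrides might look like this:

```python
# Sketch of layered policies: a base shared by all Windows desktops,
# with per-group overrides for resources and allowed management actions.
BASE_DESKTOP_POLICY = {
    "os_image": "win7-corp",            # hypothetical corporate image
    "join_domain": "corp.example.com",  # hypothetical domain
    "network": "desktop-vlan",
    "allowed_actions": ["reboot"],
}

GROUP_OVERRIDES = {
    "developers": {"cpu": 4, "mem_gb": 8, "allowed_actions": ["reboot", "snapshot", "reconfigure"]},
    "office":     {"cpu": 1, "mem_gb": 2},
}

def effective_policy(group: str) -> dict:
    """Merge the shared base policy with the overrides for one business group."""
    policy = dict(BASE_DESKTOP_POLICY)
    policy.update(GROUP_OVERRIDES.get(group, {}))
    return policy

print(effective_policy("developers")["allowed_actions"])  # developers get extra management functions
print(effective_policy("office")["cpu"])                  # office workers get a smaller machine
```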

Enforce governance with policies not people

The key to automated self-service is to enforce operational governance with policies, not people. Without the appropriate controls in place, whatever you gain in operational savings you will give back in wasted capital. Policies will keep it all aligned with corporate goals.

Maintaining control is easier than you think. Just stay true to these simple things:

  1. Analyze your process and make sure your vendor addresses all levels
  2. Keep your fingers on the knobs that control consumption
  3. Best practices are called best for a reason – stick to them and make sure your vendor falls in line
  4. Know the needs of all business groups and make sure the solution will scale up AND down to accommodate

Now that we have the control issue covered, join me next time when we look at deployment simplicity as the next private cloud management must-have.


Part 2: How to Share and Play Well with Others in a Private Cloud


by: Richard Bourdeau, VP Product Marketing, DynamicOps

The common infrastructure. What a blessing. What a curse.

Here is a familiar scenario for you: a well-mannered IT administrator goes to provision resources for a mission-critical application, only to find that said resources have already been consumed by someone in a different group. To make matters worse, the other, less important function is over-provisioned. A handy automated self-service product would have helped this guy out, you say. Not necessarily. Many of today’s typical automation tools treat your shared infrastructure as a single pool of resources with little or no control over who can consume them. And don’t mistake manual approvals in the provisioning process for a solution to this problem. In a large environment, it’s too easy to lose track of who can consume which resources.

It’s this daily occurrence that makes the ability to deliver secure multi-tenancy one of the most important aspects of cloud computing, if not the most important. By allowing multiple groups or tenants to share a common physical infrastructure, companies can achieve better resource utilization and improved business agility. By dynamically reallocating resources between groups to address shifting workloads, companies can more effectively utilize their limited IT resources.

The challenge is to share in such a way that one group does not have access, or even visibility, to the resources that have been allocated to others. Without a secure method of ensuring multi-tenancy, a cloud computing strategy cannot succeed.

Secure multi-tenancy is one of those buzzwords thrown around by most cloud automation vendors. Sure, many of them can do it. But at what scale? With what level of control and capacity? Before selecting a vendor, make sure their capabilities for securely sharing a common IT infrastructure meet both your current and future needs.

Multiple Grouping Levels
Make sure that your cloud management tool has enough levels of grouping to support both your organizational structure and the service tiers you want to provide to those businesses going forward.

For example: you don’t have to be a large company with multiple divisions, each with many departments, to need multiple levels of grouping. Maybe your company is not that big, but you want to separate desktop operations from server operations from development and test. In addition, you may also want to sub-divide the resources allocated to a group into several service tiers (e.g., Tier 1, Tier 2, and Tier 3). Most companies will need a minimum of 2-3 levels of resource grouping.
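
As a rough sketch of what 2-3 levels of grouping can mean in practice (group, sub-group, tier – the names and capacities here are purely illustrative):

```python
# Hypothetical nested reservations: business group -> sub-group -> service tier -> capacity.
RESERVATIONS = {
    "desktop-ops": {
        "default":  {"Tier 2": {"cpu": 200, "mem_gb": 800}},
    },
    "dev-test": {
        "web-team": {"Tier 3": {"cpu": 100, "mem_gb": 400}},
        "db-team":  {"Tier 2": {"cpu": 50,  "mem_gb": 256}},
    },
}

def reserved_capacity(group: str, subgroup: str, tier: str) -> dict:
    """Return the capacity reserved for one group/sub-group/tier, or {} if none is reserved."""
    return RESERVATIONS.get(group, {}).get(subgroup, {}).get(tier, {})

print(reserved_capacity("dev-test", "web-team", "Tier 3"))  # {'cpu': 100, 'mem_gb': 400}
print(reserved_capacity("dev-test", "web-team", "Tier 1"))  # {} -> no Tier 1 reservation here
```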

Think Strategically, Act Tactically
Most companies start their private cloud deployments with a single group or in a lab. This is certainly a viable strategy for getting experience with new technologies and processes before expanding deployment to multiple groups. The mistake many companies make is selecting a cloud automation platform that supports only the requirements of that initial group. One of our customers has been so successful with their initial deployment that they have not only expanded it to other groups within that company, but are in the process of expanding it to other divisions, creating a community cloud across multiple businesses of this large multi-national company. And the process is going smoothly for them because they knew to anticipate future needs to maximize their technology investment.

As you look to implement a cloud infrastructure, remember the story of our well-mannered IT administrator: it can happen in the cloud too. The trick is knowing how to avoid it.

Go in knowing these things about your business:

  • What do we need now?
  • What will we need in the future?
  • Can the tech support the transition in scale?
  • What kind of provisions are made to protect allocated resources in shared pools?
  • Ask and ask again, will it scale?

Now onto governance control – who can have what and how much. It can be easier and more effective than you think. Stay tuned!

In the meantime tell us how you maintain secure multi-tenancy. How do you do it?


Part 1: Automating Self Service for the Private Cloud


by: Richard Bourdeau, VP Product Marketing, DynamicOps

As promised, so begins our series on the must-haves for your private cloud deployment and what to look for when choosing your technology providers and partners. You will be in it for the long haul with whomever you choose, so it is crucial that they can do what they promise and that you know what to expect.

There are many vendors that offer automated self-service for cloud deployment. However, when you start to look at what automated self-service means, the implementations vary greatly. Your definition of automation may not be the vendor’s definition, and you will soon see gaps between where your automation needs begin and where theirs end. At DynamicOps, our deployment experience has shown that most vendors provide one-size-fits-all automation that does not fully automate the entire process, or that cannot be modified to accommodate differences in the types of services being delivered or in the methodologies used by different groups. Other vendors provide more flexible workflow automation, but do little to actually automate the tasks that need to be performed. It’s a frustrating experience. You think you have done your homework, your strategy is in place, your vendor is selected, and before you know it production is stalled as you go through the oh-so-manual task of implementing an effective automation solution.

Before you select automation software to help deploy your private cloud, make sure that it has the functionality to help you with these most common automation challenges.

1.   Automate the entire process
Automated delivery needs to incorporate the configuration of the IT resources as well as any pre- or post-configuration steps that need to be completed, either to make the compute resource usable for the requester or to complete the “paperwork” required to monitor and track the resource throughout its life. Some think that addressing the entire process is too much to ask, so many private cloud management solutions automate only part of it, focusing on configuring the machine rather than the end-to-end process.

Partial automation, though better than completely manual processing, still will not allow companies to achieve the service-level response times and levels of efficiency they want. The best way to avoid this trap is to map out your process, soup to nuts. Note where compromises cannot be made on automation and understand how the new zero-touch approach will affect your processes as a whole. The right vendor will address your needs and bring additional suggestions and functionality to the table.
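
One simple way to do that mapping is literally to list every step in the lifecycle and mark which ones a candidate tool actually automates; the steps below are illustrative, not a prescribed workflow:

```python
# Illustrative soup-to-nuts checklist: mark which steps a candidate tool automates
# and see how much of the lifecycle would still be manual.
STEPS = [
    ("request approved",           True),   # pre-configuration step
    ("machine configured",         True),   # the part most tools cover
    ("IP / DNS registered",        False),  # post-configuration step
    ("asset / CMDB record filed",  False),  # the "paperwork"
    ("requester notified",         True),
]

automated = sum(1 for _, done in STEPS if done)
print(f"{automated}/{len(STEPS)} steps automated")
for name, done in STEPS:
    print(("  [auto]   " if done else "  [manual] ") + name)
```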

2.   Automate the task not just the process
It seems so obvious, doesn’t it? But sadly, many service desk solutions that claim to automate the entire process really only automate the workflow that links a bunch of manual configuration steps. In order to deliver compute resources to their consumers efficiently and reduce service delivery times, automation needs to orchestrate the configuration of both virtual and physical CPU, memory, storage, and network resources. Ask yourself: does the solution allow for pre-configured permissions so that resources are allocated with little to no manual intervention?
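
To see the difference, compare a workflow engine that merely opens tickets for humans to work with one that performs the configuration tasks itself. This is a hypothetical sketch against a stand-in infrastructure API, not any vendor’s actual interface:

```python
# Hypothetical contrast: ticket-driven "automation" vs. task-level orchestration.

class FakeInfra:
    """Stand-in for an infrastructure API so the sketch runs end to end."""
    def create_vm(self, name, cpu, mem_gb):
        return {"name": name, "cpu": cpu, "mem_gb": mem_gb}
    def attach_storage(self, vm, size_gb):
        vm["disk_gb"] = size_gb
    def connect_network(self, vm, vlan):
        vm["vlan"] = vlan

def workflow_only(request):
    """'Automates the process': each configuration step becomes a ticket a human must work."""
    return [f"ticket: allocate {step} for {request['name']}"
            for step in ("cpu/memory", "storage", "network")]

def task_orchestration(request, infra):
    """Automates the tasks: actually performs each configuration step via the API."""
    vm = infra.create_vm(request["name"], cpu=request["cpu"], mem_gb=request["mem_gb"])
    infra.attach_storage(vm, size_gb=request["disk_gb"])
    infra.connect_network(vm, vlan=request["vlan"])
    return vm

req = {"name": "web01", "cpu": 2, "mem_gb": 8, "disk_gb": 50, "vlan": "prod-vlan"}
print(workflow_only(req))                    # three tickets waiting on people
print(task_orchestration(req, FakeInfra()))  # the machine is actually configured
```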

3.  Different processes for different services, groups, or users
Every IT administrator dreams of the day when there is one process that addresses every business group and there is a clear view from Point A to Point B. You and I both know the chances of this happening are even less likely than pigs flying. It is very common for different groups within the same company to use different processes to manage their IT resources. Because of this, production systems typically require more approvals and use different best practices than systems created for development and testing. Then, to make life even more interesting, within the same group different IT services can have different components, which can necessitate different deployment processes. And we are not done yet! Every user within a group can have different access needs, which limit both the services they can request and the management functions they can perform against those compute resources.

I am exhausted just thinking about it. Bottom line: automation tools that provide a one-size-fits-all approach will not provide enough flexibility as implementations grow beyond typical lab deployments.

4.  Delegated Self-Service
Even with the appropriate governance and controls in place, some companies don’t feel comfortable jumping to a full self-service model where end users directly provision and manage their own IT resources. Instead, these companies prefer a delegated self-service model, where an administrator provisions on behalf of the user. For this to work, the software needs to be able to track the actual owner, not just the person who provisioned the machine. Ownership tracking is key to successful lifecycle management. Look at it this way: it’s no use knowing who made the car when you just want to know who put 100k miles on it.
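
A minimal sketch of the record-keeping this implies (field names are assumptions for illustration, not the product’s schema): the machine record keeps the owner separate from the administrator who requested it, so lifecycle actions such as lease-expiry notices go to the right person:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Machine:
    """Hypothetical machine record: the owner is tracked separately from the requester."""
    name: str
    owner: str          # the person who uses, and is accountable for, the machine
    requested_by: str   # the administrator who provisioned it on the owner's behalf
    lease_expires: date

def lease_expiry_notice(m: Machine) -> str:
    # Lifecycle actions go to the owner, not to whoever clicked "provision".
    return f"Notify {m.owner}: lease on {m.name} expires {m.lease_expires}"

vm = Machine("dev-vm-042", owner="asmith", requested_by="it-admin", lease_expires=date(2011, 6, 30))
print(lease_expiry_notice(vm))
```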

So be sure to look for automation tools that support an administrator-initiated provisioning model that tracks the owner/user. You will thank me later.

I have only scratched the surface on some of the significant differences you should consider when initiating automated self-service. Hopefully I have given you a sense about what to look for.

But don’t think that just because you have automation, you have a private cloud. On the contrary, it is just one part of a successful cloud strategy. But fear not, we will be reviewing more. Next we will look at some of the challenges of sharing a common physical infrastructure and what a secure multi-tenant environment will mean to you.


VMBlog: 2011 Virtualization and Cloud Predictions


by Leslie Muller, CTO & Founder, DynamicOps

Recently published to VMBlog for their 2011 Virtualization and Cloud Predictions Series.

In the past few years we have had a unique position in the market that has allowed us to see different angles of the future datacenter. The march toward that vision continues, and we all know that adoption of cloud technologies will continue to accelerate and grow exponentially. However, as clouds get bigger and users look for the most efficient and beneficial route to deployment, the key to success lies in the details: integration, automation, and managing scale and complexity while delivering a consumer experience.

1. Virtual Desktop “pilots” will start scaling into large production deployments – management automation is the key enabler
I predict we will see more of the mega-scale VDI deployments – sizes in the hundreds of thousands of VMs and bigger. Having said that, processes that worked fine with a few hundred machines quickly break down as companies scale deployments to thousands or tens of thousands of machines. These implementations go smoothly during the early phases, when you can standardize on a single desktop deployment and have a limited catalogue to provision, reconfigure, and decommission. But as varying desktop types, provisioning methodologies and solution components are added, the ability to keep up with the management without blowing the operational budget will stall many projects. Management automation will bubble to the top as a hot button as processes are evaluated and re-addressed to meet the increased demands of scale and real-world complexity.

2. As virtualization deployment accelerates, the challenge will move from server consolidation to management efficiency
Currently the IT industry is only about 30% virtualized. I see massive pressure in the next year to push that number to 50% or beyond. The primary business challenge will shift from server consolidation (CapEx savings) to improved service delivery times and operational efficiency (OpEx savings). This will put the focus on managing growth, complexity and security as a means of establishing governance and controls while reducing operational costs.

3. Inflexibility of management tools will stall many initial private cloud deployments
The persistent trade-off of “change your company and process to match the automation tool” or “wait for months and pay huge sums to create a customized tool” will quickly become unacceptable. Customers will demand rapid custom solution delivery. They can’t afford to change their process, and they can’t afford to wait months and pay huge sums for professional services.

4. Early private cloud deployments will expand to community clouds and service multiple business units.
Many of the companies we have worked with are looking to extend the operational efficiencies of their initial private cloud deployments to other businesses or divisions within their companies. These groups are acting as service providers, setting up community clouds for other groups. The improvements in service delivery time, coupled with the lower operational and capital costs of this deployment model, will help accelerate expansion into additional groups within a single business or multiple businesses within a large enterprise.

5. Hardware becomes more virtualized, blurring the lines between virtual and physical management
Virtualization is impacting all compute components, not just the partition that the operating system runs in. System, storage, and networking vendors will continue to virtualize more and more components within their offerings, giving IT departments more flexibility in how they utilize their resources. Increasingly, we are seeing companies treat their physical resources as a pool that can be dynamically reconfigured and reallocated, similar to their virtual infrastructure.

6. On-demand computing is not just for virtual infrastructures or private clouds
Most companies have, or will have, a combination of virtual and physical systems. More companies will want a single solution that provides automated self-service for all their assets, not just the virtual ones. Even if a company is 100% virtualized, it will still need to provision and manage the physical hosts that contain its virtual machines. As customers start to dabble with moving systems to a public cloud service like Amazon EC2, they will want the same operational governance and control that they have implemented for their private cloud services.

7. Public Cloud adoption will create additional governance and control challenges
Public cloud adoption will accelerate primarily in the area of Software as a Service (SaaS) and Platform as a Service (PaaS) platforms like Microsoft Azure, Salesforce and Google Apps. We are excited about the potential these platforms provide. However, moving your applications to a public cloud does not obviate the need for governance and control of those deployments. Unified cloud management for hybrid cloud environments will become increasingly important.
