Archive for January, 2011
Leslie Muller, Founder & CTO of DynamicOps, sat down with Mike Laverick of RTFM Education for a chat about cloud computing, cultural challenges, technical needs and how DynamicOps is transforming the datacenter.
Sit back, relax and join us for some Vendorwag – http://bit.ly/g28xGz
“A private cloud can be a very attractive solution, but a bad implementation can lead to ugly results”
That’s what Brian Proffit of Internet.com’s Enterprise Networking Planet has to say in his latest piece – Migrate to a Private Cloud, Not a Virtual Datacenter. A great piece – and not just because it references our own words of wisdom here on DynamicTalks.
Take a look here and then let us know your own thoughts.
by: Richard Bourdeau, VP Marketing, DynamicOps
Time to value is a critical success factor in winning management buy-in and deploying a private cloud infrastructure. If the cost to deploy and maintain a private cloud manager is too high, it not only delays the breakeven point; it can also dramatically reduce the overall return on your private cloud investment. Worse, it could mean your cloud never gets off the ground.
Agents add complexity
Know this: initial installation and deployment of a standard configuration should take no more than a few hours if the prerequisites are in place and the deployment is planned correctly. If the solution requires deploying agents on physical hosts or virtual machines, that complicates not only the initial deployment but also the ongoing maintenance of your environment.
Making pieces & parts work together
Another factor that contributes to complexity is the number of products required to deploy the solution. Many of today’s offerings are collections of loosely coupled products acquired over time, with different architectures and databases. You know the story: big company acquires little company, making that niche solution part of the big solution that you just cannot deploy without. Making them work together smoothly can be like putting together a child’s toy on Christmas Eve. Some assembly required? More like an all-nighter with the wrong size Allen wrench! I’m not saying you will find one tool that provides for all your private cloud management needs; no one tool can do everything. However, the tool you use to automate private cloud service management should be built on a common foundation that allows for rapid integration of other components, regardless of whether they are offered by your management vendor.
Working with what you already have
Deployment simplicity is important for companies that want standard off-the-shelf capabilities, as well as for companies that want custom integration with their existing ecosystem of tools and processes. If armies of people are required to deliver a custom solution, that not only significantly increases the cost, it also adds risk and delays time to value.
In upcoming entries in this series we will discuss how multi-vendor integration and rapid ecosystem integration are essential to letting you keep working with your prior investments while keeping deployment simple.
For more information on this topic and others in the must-have series download the white paper from DynamicOps – bit.ly/eS2HJe. But be sure to check back here as we take deeper dives and open it all up for discussion.
by: Rich Bourdeau, VP Product Marketing, DynamicOps
Here we are at #3 in our series. Let’s take a quick review of where we’ve been:
- Automated self-service streamlines the process of provisioning and managing IT resources.
- Secure multi-tenancy allows you to reserve resources for different groups, assuring that only authorized users will be able to create, reconfigure or decommission machines from resources allocated to that group.
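To make the multi-tenancy recap concrete, here is a minimal sketch in Python of a per-group reservation: only members of the group can draw from its capacity, and requests beyond the reserved pool are denied. The `Reservation` class, its fields, and the group names are all hypothetical illustrations, not DynamicOps APIs.

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    """Capacity carved out of shared infrastructure for one business group."""
    group: str
    members: set      # users authorized to consume from this reservation
    cpu_total: int    # vCPUs reserved for this group
    cpu_used: int = 0

    def provision(self, user: str, cpus: int) -> bool:
        # Only authorized members may draw from this group's reservation
        if user not in self.members:
            raise PermissionError(f"{user} is not in group {self.group}")
        # Deny the request if it would exceed the group's reserved capacity
        if self.cpu_used + cpus > self.cpu_total:
            return False
        self.cpu_used += cpus
        return True

dev = Reservation(group="dev", members={"alice", "bob"}, cpu_total=16)
```

In this sketch, a request from a user outside the group fails outright, while an in-group request is granted only while the reservation has headroom.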
The next big challenge in deploying on-demand private cloud services is being able to control the amount of resources, the process used, and the management functions that can be performed for each type of machine or application. Pretty simple? Not really. But it can be, with some homework and insight.
Moving at the speed of virtualization
The good thing about virtualization is that provisioning virtual machines is quicker and easier than provisioning physical machines. The bad thing is that virtual machines are often provisioned without all the controls that accompanied the lengthy procurement and provisioning process for physical machines. Without appropriate operational governance and control, it is not uncommon for companies to waste 10-20% of their resources on unauthorized and over-provisioned machines. To add to the mix, many virtualization management solutions on the market do not enforce the controls needed to assure that machines are provisioned according to an organization’s best practices. This leads to non-compliant machines with outdated software versions that expose companies to unplanned downtime and security risks. Your management software should help control and contain, not create additional layers of challenge.
Limiting resource consumption
Cloud automation software must have policies that control the quantity and type of resources a user is allowed to consume during the provisioning of a machine or application. Period. The administrator must be able to specify not only how much CPU, memory, storage, and network capacity a given user or application will receive, but also the tier (service level) and pool that the resources will be allocated from. Unless you want to maintain a large number of service blueprints, you will want to set up blueprints with variable amounts of resources, but with approval thresholds and the ability to customize the approval workflow. Getting better control over resource consumption by delivering the right-size machine at the right service level can translate to significant capital savings.
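The idea of one blueprint covering a range of sizes, with an approval threshold partway up the range, can be sketched like this. Everything here, the `Blueprint` class, its field names, and the sample ranges, is an illustrative assumption, not an actual product schema.

```python
from dataclasses import dataclass

@dataclass
class Blueprint:
    """One service blueprint covering a range of sizes, not one per size."""
    name: str
    cpu_range: tuple    # (min, max) vCPUs the requester may choose
    mem_range: tuple    # (min, max) GB of RAM
    tier: str           # service level the resources come from
    approval_cpu: int   # requests above this size trigger the approval workflow

    def evaluate(self, cpus: int, mem_gb: int) -> str:
        # Reject anything outside the blueprint's allowed ranges
        if not (self.cpu_range[0] <= cpus <= self.cpu_range[1]):
            return "rejected"
        if not (self.mem_range[0] <= mem_gb <= self.mem_range[1]):
            return "rejected"
        # Large-but-allowed requests route to approval instead of auto-provisioning
        if cpus > self.approval_cpu:
            return "pending-approval"
        return "approved"

web = Blueprint("web-server", cpu_range=(1, 8), mem_range=(2, 32),
                tier="silver", approval_cpu=4)
```

One blueprint then serves small requests automatically, routes large ones through approval, and rejects out-of-range requests, instead of forcing a separate blueprint per machine size.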
Enforcing Best Practices
The advantage of automation is that you have better control and enforcement of your best practices, ensuring that every machine is configured using the same process every time, thereby eliminating the potential for mistakes or intentional circumvention of company policies. These policies include things like approval workflows, build parameters, customization settings, lease duration, archival policies, and the management functions a given user will be allowed to perform against the machine after it has been built.
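The "same process every time" point can be sketched as a fixed, ordered build pipeline that no one can skip or reorder by hand. The step names and dictionary keys below are hypothetical placeholders for whatever your automation tool actually runs.

```python
def build_machine(request: dict, policy: dict) -> list:
    """Apply the same ordered build steps to every request, so no step
    can be skipped and the approval gate cannot be bypassed."""
    if policy.get("requires_approval") and not request.get("approved"):
        raise RuntimeError("approval workflow not completed")
    log = []
    # The step order is fixed in code, not left to whoever builds the machine
    for step in ("clone-template", "customize-guest", "join-network", "set-lease"):
        log.append(f"{step}:{request['name']}")
    return log
```

Because the sequence lives in the automation layer rather than in a runbook, every machine passes through the identical steps, which is precisely what eliminates manual mistakes and circumvention.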
Controls should be granular
It is not sufficient to specify policies that apply to all users, all machines, or even all the users in a business group. Think about it and you will quickly realize that different types of machines need different processes and build parameters. These operational controls need to be granular enough to accommodate what is common versus what is different, not only between different types of machines, but also between different users or groups of users. For example, you may need to provision desktops for both developers and office users. While both need common policies controlling how Windows is configured, how machines are connected to the network, and so on, they can differ completely in the amount of resources they are allocated, as well as in the management functions developers are allowed to perform compared to office workers.
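One simple way to picture this granularity is a shared baseline policy with per-group overrides layered on top, as in the sketch below. The policy keys, group names, and values are illustrative assumptions only.

```python
# Policy shared by every desktop, regardless of who requests it.
COMMON = {
    "os": "Windows",
    "join_domain": True,
    "max_cpus": 1,
    "allowed_actions": ["power-on", "power-off"],
}

# Per-group differences layered on top of the baseline.
OVERRIDES = {
    "developers": {
        "max_cpus": 4,
        "allowed_actions": ["power-on", "power-off", "snapshot", "reprovision"],
    },
    "office": {},  # office workers keep the defaults
}

def effective_policy(group: str) -> dict:
    policy = dict(COMMON)                 # start from the shared baseline
    policy.update(OVERRIDES.get(group, {}))  # apply what differs for this group
    return policy
```

The common pieces (OS configuration, domain join) are defined once, while resources and permitted management functions vary per group, so adding a group means writing only the overrides, not a whole new policy.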
Enforce governance with policies not people
The key to automated self-service is to enforce operational governance with policies, not people. Without the appropriate controls in place, you will just be trading your operational savings for increased capital costs. Policies keep it all aligned with corporate goals.
Maintaining control is easier than you think. Just stay true to these simple things:
- Analyze your process and make sure your vendor addresses all levels
- Keep your fingers on the knobs that control consumption
- Best practices are called best for a reason – stick to them and make sure your vendor falls in line
- Know the needs of all business groups and make sure the solution will scale up AND down to accommodate
Now that we have the control issue covered, join me next time when we look at Deployment Simplicity as the next private cloud management must have.