Zefflin
Experts in Data Center Automation, ITSM & Cloud Management

Application Automation Meets IT Service Management

12/16/2015


Why Automate the Application Lifecycle?

One of the things we do at Zefflin is help IT organizations to automate the lifecycle of complex, N-tier applications. This means defining applications once (blueprinting), deploying and upgrading them automatically in heterogeneous hybrid cloud environments and auto-scaling up/down based upon demand.

This type of automation is being widely adopted because it pays off in a big way.

Application automation ROI:
  • Productivity - the same group can manage many more apps in a more complex cloud-based environment
  • Agility - app dev groups can respond faster to business needs or market changes, enabled by improved application portability
  • Speed - by enabling continuous integration and continuous delivery, application development cycles are accelerated because the time between code --> build --> test --> QA --> migrate to production is dramatically reduced  
  • Cost Reduction - by utilizing a hybrid cloud model and adopting modern DevOps practices, customers can reduce both infrastructure and application development costs

Because of the maturity of today's tools, the cost of automation is lower than it has ever been.  To get the full benefits of application lifecycle automation, however, it must be fully integrated into the operational environment. IT Service Management, and ITIL in particular, have been widely adopted by IT organizations. That means IT support processes are delivered in a consistent way: from how IT interacts with its users, to responding to environment failures, to managing changes to the production environment.

Many of our customers are telling us: "We need to integrate application automation with our ITSM processes."  For example, when an application deployment does not go as planned, an incident should be opened so the failure can be tracked and the right people assigned to fix the issue.  When the same application is deployed successfully, it represents a change to the production environment, so a change request should be opened to meet ITIL-compliant audit requirements. In addition, the Configuration Management Database (CMDB) should be updated, so applications can be mapped to the infrastructure on which they run.

What if all of that could be done automatically?
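Conceptually, each of those hooks is a small piece of glue code between the automation tool and the ITSM system. As a purely illustrative sketch (not a description of Zefflin's plug-in), here is what opening an incident on a failed deployment might look like in Python against ServiceNow's standard REST Table API; the instance URL, credentials and field values are placeholders:

```python
import requests

SN_INSTANCE = "https://example.service-now.com"  # hypothetical instance URL
SN_AUTH = ("integration.user", "secret")         # placeholder credentials

def on_deployment_failed(app_name: str, error_log: str) -> str:
    """Open a ServiceNow incident when an application deployment errors out."""
    payload = {
        "short_description": f"Deployment of {app_name} failed",
        "description": error_log,  # diagnostic info passed to the incident
        "urgency": "2",            # field values depend on local policy
    }
    resp = requests.post(
        f"{SN_INSTANCE}/api/now/table/incident",  # standard Table API endpoint
        auth=SN_AUTH,
        headers={"Accept": "application/json"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]  # e.g. "INC0012345"
```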

Market Leading Software Tools

ServiceNow is a SaaS-based IT Service Management tool that has captured a large share of the market over the last 10 years.  It has become the de facto standard for ITSM solutions.

Cloudify is an open source software product distributed by GigaSpaces, and is an industry-leading application lifecycle automation tool.

Integrating Application Automation with ITSM

To fill a need expressed to us often by our customers, Zefflin has developed a Cloudify plug-in that supports integration with ITSM tools like ServiceNow.  We have applied years of experience in ITSM and application automation to develop an out-of-the-box integration that will:
  • Auto open an incident if an application deployment or upgrade errors out
  • Auto update the CMDB upon successful application deployment, upgrade, scale up/down or tear down (see the sketch after this list)
  • Auto open/close a change request upon successful deployment of an application
  • Accommodate customer-specific policies for application mapping and CMDB
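To make the CMDB item above concrete, here is a hypothetical sketch of an upsert against ServiceNow's cmdb_ci_server table; the instance, credentials and fields are again placeholders, and a real integration would follow the customer's CI class model and governance policies:

```python
import requests

SN_INSTANCE = "https://example.service-now.com"  # hypothetical instance URL
SN_AUTH = ("integration.user", "secret")         # placeholder credentials

def upsert_ci(name: str, ip_address: str) -> None:
    """Create or update a server CI after a successful deployment or scale event."""
    base = f"{SN_INSTANCE}/api/now/table/cmdb_ci_server"
    # Look for an existing CI with this name
    found = requests.get(
        base, auth=SN_AUTH, params={"sysparm_query": f"name={name}"}
    ).json()["result"]
    fields = {"name": name, "ip_address": ip_address}
    if found:
        # CI exists: update it in place
        requests.patch(f"{base}/{found[0]['sys_id']}", auth=SN_AUTH, json=fields)
    else:
        # New CI: insert it
        requests.post(base, auth=SN_AUTH, json=fields)
```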
The Cloudify plug-in for ServiceNow is designed to enable fast, reliable integration with standard ITIL processes with minimal implementation effort.  Customization can accommodate variations in business processes for specific needs, such as:
  • Specific CMDB governance policies
  • Incident Management assignment rules
  • Specific CMDB application mapping requirements
  • Diagnostic information passed to an incident on auto open

The Difference Between Application Orchestration and Operations Orchestration

In evaluating solutions for application lifecycle automation, customers must confront a confusing array of orchestration products. Orchestration comes in two flavors, application and operations, and while the two are often confused, they are very different. Operations orchestration products are designed as general-purpose, event-driven process automation tools. They are excellent for automating repeatable IT operations processes like onboarding employees, granting access, or integrating disparate systems like monitoring and ITSM. Application orchestration tools are designed for apps only, and for all the processes associated with their lifecycle: define-once (blueprinting), deploy, upgrade, auto-scale and tear down. They are not Cloud Management Platforms (CMPs), but augment and complement CMP functionality. Application orchestration tools are procedurally focused (as opposed to event-driven); they can be event-driven, but only for very specific events, like those that trigger auto-scaling.

Choosing the wrong tool to solve your business problem will result in a very long, expensive implementation.

About Zefflin

IT organizations need to continually drop operational cost, increase quality of service and support their growing businesses, all at the same time.  The number of technologies that can be leveraged to fulfill these objectives, and the pace at which they emerge, is overwhelming and impossible to keep up with for a CIO who oversees day-to-day IT operations.

Zefflin's focus is exclusively on Data Center Automation and Cloud Management solutions implementation and integration.  As a world-class, agile, center of excellence, our aim is to work with best of breed software, combined with the industry's best technical consulting and integration talent. We cut through the hype, identifying which tools can be implemented and integrated to effectively automate application development and IT operations. We offer high quality, cost effective solutions addressing the automation of the entire lifecycle of complex computing environments, from request/catalog management, automated provisioning (OS, application, database, storage, network), to policy governance and compliance.  

Our vision is to bring to market consulting/software solutions that enable the lights-out data center. This will allow our customers to implement fully automated, private, public and hybrid cloud systems, delivering low cost, high quality services to their customers while minimizing personnel cost.
www.zefflin.com


DevOps Transformation: A Guide to Success

7/12/2015

Authored and submitted by Chet Golding, Principal Cloud Architect, Zefflin

Introduction

DevOps is about people, process and tools working together.  DevOps Transformation is about change from an existing working environment to a new environment encompassing a DevOps model.  If you don't already have development and operations teams, you may not be transforming anything at all, but rather forming a DevOps team from scratch.  This paper will highlight a best practices approach, key risk areas and what you can expect from DevOps, to help you make informed decisions.

In simplest terms, the word “DevOps” is a portmanteau: a term formed by blending two words into a single, tighter word.  In practical terms, it involves taking two huge and complicated organizations and their associated practices, priorities and processes, “Development” and “Operations”, and making them a single, powerful, coherent entity.


Traditionally, application development and IT operations have been organized in silos, with clear delineation of responsibilities.  App dev has been responsible for design, code, build, test, bug fixes and QA, while operations was responsible for production cutover, migration, support and the reporting back of bugs.  Because of the clear lines of responsibility, hand-off processes had to be derived and implemented.  The result?  Both sides optimized their respective organizations as they saw fit.  This seemed like progress, but in fact resulted in inefficiencies, finger pointing and less than optimal use of resources.  DevOps as a discipline seeks to correct this situation and establishes a single set of processes and underlying technologies that enable the next generation of optimization, efficiency and cost reduction in IT.

DevOps Journey

A DevOps of One

One day you find yourself running a new company with a viable but tight budget.  You can afford one technical resource.  With luck you find that one world-class guru you can bank your whole business on, and they have to do everything: ideation, architecture, engineering, software development, infrastructure (laptops, servers, software and today’s clouds), testing and quality assurance, vendor management, and mixed-platform integration.  And when you go into production with the app you’re hanging your shingle on, that one person is your 24/7 operations staff as well.  Of course, this person has access to everything, no limitations beyond budget, and focus on what is most important to the product you’re making.  Add in some really great processes and the best tools you can find (even if they have to write their own to fit technical and budgetary needs) to keep your business running.

Now keep reading because you’re right: there is more to it.

For some time, and for good reason, job specialization in roles such as Java Developer, .NET Developer, Oracle DBA, Test Lead or Network Operations Engineer, and the management of those roles with responsibilities to conform to standards and endure audits, has produced a specific operating model. This model has role-based authorization and, in many cases, formal handoff of work product from stage to stage.  It has a real side effect of creating roadblocks along the way, in the form of checks and balances that are there to ensure quality, security, repeatability, durability and standards.

In the DevOps model, we strive to integrate those roles, thereby removing the silos.  One means of achieving that integration is the agile team, which typically has seven to nine members, including coders, testers, operations and someone responsible for product.  This team is given ownership of and access to its resources and the responsibility to ensure its products work.  In many cases this team follows its product into production and carries the pager, answering calls 24/7.  This means that they wake up at 3am and resolve issues they themselves created. This maintains the checks and balances while eliminating roadblocks and increasing aggregate productivity.

So we’ve covered that DevOps is, in part, a return to the basic premise of an integrated DevOps of one: we combine what is done in development and operations as if our teams were small startup companies where both the work and the reward are shared.


The DevOps Scrum Team

Most organizations will begin with, and perhaps sustain, a small DevOps team.  In this case the Agile model is very relevant, so we will start there.  The agile Scrum team is, in best practice, a small team of people with the skill sets needed to work within a process, enabled by a tool set to provide fast delivery of quality products.  Normally this team will be seven to nine people; the exact size can vary, as will the number of teams composed from a group of, say, one hundred.  The functions of architecting at this scale, developing, testing, ensuring quality, delivery and support all exist within the team itself, operating on a roughly two-week cycle and meeting daily to address and remove roadblocks.  The team works from a two-tier backlog (long term and short term) where priority dictates activity and capacity is constantly reviewed.

The team is DevOps oriented, requiring a tool set nominally composed of a CI/CD framework that combines the development, test and delivery environments into one.  This enables the developer to commit to trunk or branch, integrate code and deploy to test, getting immediate results on any failures and the opportunity to rework the components or, when viable, promote them to the next phase based on quality and dependability.  By the time a release moves through staging to production, everything has been tested, including the automatic deployment process, ensuring that what rolls into production is a dependable, supportable product.

This team is also mutually responsible for support: everyone is authorized to access and correct any problems in production, with a common commitment to respond to failures within an SLA.  Operations tasks, such as answering a pager call at three in the morning, are the responsibility of the DevOps team members, particularly on smaller teams.  This eliminates a number of issues with traditional over-the-fence models, because the team is highly motivated to deliver a dependable, supportable product.  The customer feedback chain is much shorter: the developer knows promptly when a business process or UI faults.  That ensures a high level of responsiveness to customer needs, increasing customer satisfaction, feature delivery and, hence, time to market.
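As a toy illustration of that gating, the sketch below shows a promotion step that only advances an artifact when the current stage's checks pass; the stage names, test commands and deploy script are all placeholders, not any particular CI product:

```python
import subprocess

STAGES = ["commit", "test", "stage", "production"]

# Placeholder gate commands per stage; a real pipeline would invoke its CI tool
GATES = {
    "commit": ["pytest", "-q"],                # unit tests on every commit
    "test":   ["./run_integration_tests.sh"],  # hypothetical integration suite
    "stage":  ["./deploy.sh", "--dry-run"],    # rehearse the production deploy
}

def promote(artifact: str, current: str) -> str:
    """Run the gate for the current stage and return the next stage on success."""
    gate = GATES.get(current)
    if gate and subprocess.run(gate).returncode != 0:
        raise RuntimeError(f"{artifact} failed the {current} gate; rework needed")
    return STAGES[STAGES.index(current) + 1]

# A build that passes every gate walks commit -> test -> stage -> production
```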

As size grows we find multiple teams and some drift in models.  Multiple Scrum teams, and integrations with teams operating in the more traditional waterfall model, can work together, scaling to sometimes hundreds of teams as required by the depth of an organization’s portfolio.  It is common to find an average of twenty-one to thirty-six people in a software organization.  At some point sheer headcount will cause stress in managing the portfolio, and when you hit the one-hundred to two-hundred staff mark, further focus on process needs to be applied.  At this level we should consider using the same kinds of people, processes and tools, but organized as independent sections able to maintain communication and work within closer physical proximity.  Here the organization might also consider advancing from the standard Agile model to the Scaled Agile Framework (SAFe).


The DevOps of Many

The polar opposite of the one-body IT startup is a multi-organization global project like the space program.  Here the complexity is so extreme that compression becomes unrealistic.  In this case ideation feeds from many people of various viewpoints: countries, governments, their science organizations, militaries and a daunting number of other named and un-named groups.  All of the ideas, requirements, government mandates and so on funnel through an overall space agency which, via funding of multiple bodies, manages and shares management of the grand operations required to architect, build and run such a program.  At this scale we face an aggregate IT organization numbering in the hundreds of thousands.  Architects exist in all areas, as do developers, testers and QA staff.  We will see a prime contractor required to have one sub-contractor develop the software that a second sub-contractor tests, a third sub-contractor installs and operates, and a fourth sub-contractor provides QA and acceptance for, before its place in a flight-ready platform is established and approved by the central space agency.  This is daunting, to say the least.

Here we can employ the same model as a standard, with people, process and tools, to best achieve the greater goals while retaining the ability to integrate with flexibility.


Delivering DevOps of Business

The Fortune 500 operate today at widely varying magnitudes of scale with technology and organizational footprints from small to very large.  Firms must at times work together while maintaining internal independence and often need to operate within standards and with high levels of integration between their operations and those of other firms, including their competitors.  For years IT organizations have deployed people, processes and tools to optimize how this work is done at these scales, and have done that effectively, delivering to standards which have evolved as measures of successful organizations (ITIL, PCI DSS, SOX, CMMI, ISO 9001, CISSP).  DevOps delivers to those same standards.

Even so, the idea that cutting through the red tape can yield better and faster results repeatedly proves to be true in business as anywhere else.  Specialization along divisions of work fosters healthy competition, yet also a separation into silos that ensures stability, quality and durability at the expense of good communication, alacrity, agility, and velocity of change, which undermines innovation and restricts time to market.  Independent silos also create redundancy, with multiple people on staff doing exactly the same jobs and multiple copies of the same kinds of infrastructure and tools, which increases cost.


Delivering DevOps ROI

DevOps, as it turns out, is the compression of all this complexity to reduce red tape and improve alacrity, agility and the delivery quality of products and support services.  This improves customer satisfaction at high velocity, with better time to market and timely release of products to maintain and gain market share.  Properly implemented, cost reduction and fast ROI can be achieved.

Examples of DevOps adoption delivering at scale

One example of DevOps employed by a global community to deliver relevant, disruptive core products at very large scale is open source, and how it is being employed as an enabler of cloud product delivery.  Keep in mind that open source software is by definition open, which allows reuse of, and contribution to, the code base by multiple voluntary entities.  Although it is not always free, open source products are often sold as part of a common business model and can come with durability, dependability, comprehensive support, service level agreements (SLAs) and standards.  Some very large IT industry leaders are open source contributors.

OpenStack is an example of this well-established model.  It has a great number of contributing developers worldwide, and many enterprise software companies are jumping on board.  VMWare has integrated OpenStack into its product line, for example; HP delivers its own distribution of OpenStack; Red Hat includes OpenStack integration and management in its platform.  There is a primary code trunk and multiple branch distributions where product specialization is done and code is contributed back to the community as open source.  These specialized versions are used in some mission-critical areas within the enterprise, as is the truly open trunk.  The processes to do this are common, with standards that evolve from the community.  Practices combining development and test, QA and operations via automated tooling for deployment and solid product management are jointly built, employed and refreshed.  This model is so successful that many organizations, large and small, embrace it completely.

Review  

So we have covered the DevOps of one, of small teams, and of many.  From this we can draw some conclusions about DevOps:

  • It delivers within industry standards (PCI, HIPAA, etc.)
  • It enables faster time to market
  • It can produce relevant ROI
  • It is employed by both startups and industry leaders
  • It has broad community support
  • It integrates well

Steps to Build or Transform to DevOps

Assess the current state of your organization and scale requirement  

It is important to know where you are in order to see where you need to go.  Assessing where your organization currently stands will normally involve a mini workshop to set expectations, plus initial interviews, depending on the size of your organization and how that affects scale.

Gap analysis between current and desired state

In a longer workshop involving interviews with management and subject matter experts within your firm, an analysis is used to assess where you are and what obstacles and milestones lie between this initial state and where you are going.

Design the transformation in prioritized stages

Given the information gathered in the workshop, one result should be a project design to deliver your transformation in phases within a roadmap.

Project review and modification 

Once your plan is in place, it is best practice to go back to the beginning and confirm your assumptions still apply.  People, processes and tools change, as will scope.  With a review before you move on, your project is more likely to succeed.

Commit and begin transformation  

Commitment from senior staff and executive level in your organization is critical at this phase.  Once you have begun you will need to stay the course.


Celebrate Success

Keep in mind that your entire organization gains from a successful formation of, or transformation to, a DevOps team model.  Be proud of it and take credit with your people.  It matters, and it is fun.

What does a DevOps Architecture and Environment Look Like?

DevOps, as we’ve covered, is about people, process and tools.  We have the people side down now; next we highlight the process and tools.

First we highlight the standard environments used in development and operations.  Some firms don’t develop code, but they still have a form of “crash and burn” environment, similar to the traditional “dev” environment, where things are tried out, and a test environment holding a fairly stable copy where they test and qualify applications before moving them into the critical position normally called “production.”

The key difference in DevOps here is that the access, responsibility and tooling to run and use these environments are shared.  Underpinning them all is the foundation of Cloud Management and Orchestration.

The architectures to achieve the orchestration of these virtual or logical environments will vary both with the technology chosen and as technology evolves.  A modern tooling stack associates functions for managing a larger-scale enterprise cloud service, supporting from dozens to thousands of servers and both internal and external use cases.

What Might the SDLC Workflow Look Like?

DevOps, as we’ve covered, is about people, process and tools.  Part of the process is the use of tools to manage work flowing from ideation to operation.  People will ask whether we really mean that one tool set is used for the activities in all of the related stages.  Conceptually, yes.  One management platform, which might include multiple components and capabilities as noted above, orchestrates the resources from the raw development that happens at ideation (raw dev) to the first shelf in a repo, where automation kicks off testing, QA, testing against reference platforms and integration, and on into a staged or pre-production shelf, before a certified version is promoted to production.

Generally, multiple versions will be flowing through this process at the same time.  We commonly see things which are proposed and released in a limited manner, things which are next in line to become the primary branded service, and the current mainstream app in a portfolio.

External partners and customers also have similar states, and if the ideals of common integration and APIs at the endpoints hold, issues can be contained to user acceptance as versions move from proposed, to next, to current.  The most stable of all configurations should obviously be current production.

From end to end, a modern DevOps team will use one fully functional tool set including continuous integration and continuous delivery/deployment, known as CI/CD.  Configurable automation of those tools allows for security and safety at all stages.
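A minimal, purely illustrative sketch of those shelves, with multiple versions in flight at once (the shelf names mirror the description above; the class and method names are hypothetical):

```python
SHELVES = ["raw_dev", "repo", "staged", "production"]

class ReleasePipeline:
    """Track which shelf each in-flight version currently sits on."""

    def __init__(self) -> None:
        self.shelf: dict[str, str] = {}  # version -> shelf

    def submit(self, version: str) -> None:
        self.shelf[version] = "raw_dev"  # every version starts in raw dev

    def promote(self, version: str) -> None:
        i = SHELVES.index(self.shelf[version])
        if i + 1 == len(SHELVES):
            raise ValueError(f"{version} is already in production")
        self.shelf[version] = SHELVES[i + 1]  # one shelf at a time, gated by tests

pipe = ReleasePipeline()
for version in ("2.0-proposed", "1.9-next", "1.8-current"):
    pipe.submit(version)
pipe.promote("1.9-next")  # versions advance independently through the shelves
```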

What Issues Still Exist?

Must it be this way?  

Does saying you have adopted DevOps or the Agile methodology mean your firm is actually operating in the same way, or adhering to the same practices or ideals internally, as other organizations?  No.  Each organization is subject to implementing based on its own understanding, capabilities, constraints and business objectives.  Every organization is different.

Should they be the same?  Not necessarily.  Practical considerations should prevail, and business objectives, constraints and organizational realities must be taken into account, rather than blindly jumping on the bandwagon.  Evangelism of DevOps is valuable, but the cost of forming or transforming to this model has to be weighed against other factors.

Standards and interoperability remain critical.  As noted in the vision of the “DevOps of many”, your organization’s success may depend on its existence in an ecosystem composed in part of other organizations with proprietary practices and code bases.  The lack of visibility into the means by which they operate can inhibit the integration of products.  This maintains the need to operate as close to standard practice as possible, to preserve interoperability at the integration points the products share.  A common DevOps best practice today is to architect applications with APIs at the integration endpoints, a practice that is growing in enterprise product development.

The social sides of the equation 

Are developers part of IT?  Yes and no.  This can be an emotional difference that has resulted from specialization and the “over the fence” model of responsibility and security.  Just sending people to classes, or telling them they are now all the same, is a little naïve, and most business leaders won’t expect that to work immediately.  Changing how people feel cannot simply be mandated.  That takes time and an investment on the part of the business as well as the individuals.  The outcome will vary as much as the community you’re working with, so electing to go down this path means you need to evaluate your staff’s willingness to change.

Is operations part of IT?  Yes and no.  Many development groups run their own operations and often don’t exist within an enterprise IT organization.  Cool, innovative new apps like “Angry Birds” don’t tend to be developed in the IT call center doing tier-1 phone support or network operations.  Many IT staff have evolved a thick skin to endure the perception of being a necessary evil, a cost, or the police force of the organization.  They also tend to have a bit of an ivory-tower complex, or at least can be perceived that way by other parts of an enterprise.

Program Management offices and IT Operations of Datacenters are groups that tend to hold a lot of power and may hold onto their domains to a degree that the “over the fence” model is not changed and DevOps never includes the application development parts of the organization.  In this model the Datacenter operations teams may embrace a form of DevOps to build IT Operations tools (develop) and operate them.  The benefits can be high but the overall enterprise may still remain fragmented and the “over the fence” trust model may still inhibit velocity, time to market, and innovation. 

Improper application of the models may also exist.  Development teams may use “agile” as a time- and task-tracking process only, with bug-tracking tools to document within waterfall-style management.  Some improvements may still be seen by converting multiple teams to newer tools, using common tools, and improving communication, and those can have clear benefits.  Nonetheless, highly controlled development environments can inhibit velocity, time to market and innovation as much as never changing can.

What Are Some Common Practices in Development Which are Also Applicable to Operations?

Both development and operations require a fairly similar flow, though one may have a higher rate of iteration in some ways than the other:

  • Plan
  • Innovate, Create or implement
  • Test
  • Release
  • Review
  • Repeat

What are Some Common Practices?

An enterprise with both development and operations requires a means of operating the business itself.  The success of DevOps depends on that organization.

  • Advocate people (they are both your producer and your customer)
  • Mentor people to improve their skills and sense of achievement
  • Evangelize the processes, tools, and your product to show business commitment
  • Market people, processes and tools inside and outside the organization; people are the examples we follow
  • Coach people, teams, and the community.  Elevate everyone.
  • Train, invest and promote

About Zefflin

Zefflin is focused exclusively on Data Center Automation and Cloud Management solutions implementation and integration.  As a world-class, agile, center of excellence, our aim is to work with best of breed software, combined with the industry's best technical consulting and integration talent.  We provide consulting services in data center strategy, DevOps Transformation, DevOps automation, OpenStack consulting and software implementation.  We cut through the hype, identifying which tools can be implemented and integrated to effectively automate application development and IT operations.  We offer high quality, cost effective solutions addressing the automation of the entire lifecycle of complex computing environments, from request/catalog management, automated provisioning (OS, application, database, storage, network), to policy governance and compliance. Our vision is to bring to market consulting/software solutions that enable the lights-out data center. This will allow our customers to implement fully automated private, public and hybrid cloud systems, delivering low cost, high quality services to their customers while minimizing personnel cost.  Our current software resale and implementation portfolio includes Scalr and CloudForms for cloud management and Cloudify for cloud orchestration, as well as support for all major OpenStack distributions.  www.zefflin.com

OpenStack:  What You Need to Know as an IT Leader

6/2/2015

This month I am writing an overview of OpenStack, the open source software being rapidly adopted by IT organizations around the world as a way to reduce cost, avoid vendor lock-in and raise the bar on IT service delivery. 
As more and more companies decide to commit resources to OpenStack, many are going into it with false assumptions or without properly educating themselves. OpenStack shows tremendous promise to streamline IT, reduce cost and improve speed/quality of service.  There are many aspects of planning, purchasing, implementing and running OpenStack, however, that differ significantly from enterprise software.  

I will touch on several of these in an effort to help you avoid some common mistakes. 

Business Case

When you've decided that OpenStack is worth a look from a technological, functional and organizational perspective, there are a number of costs/benefits to consider.  Some of these must be qualitatively assessed.  Others can be specifically quantified.
Research has been published regarding the costs and benefits of adopting OpenStack, but each organization is different.  There are many variables involved, including the workloads to be run, existing infrastructure, resource availability and many others.  It is clear, however, that increasing numbers of companies, from small businesses to large enterprises, are finding that the benefits outweigh the costs.

Governance, and Why This is Important

In the world of enterprise software, product management and all the processes associated with it are well understood.  Enterprise software companies constantly evaluate market conditions and develop a product roadmap that includes development of future functionality and product direction. 

With open source software, the rules are different.  For some open source software, the product management function is still contained inside a single software company.  For OpenStack (as in the case of Linux), the product management function lies with the OpenStack Foundation, an independent non-profit organization with its own by-laws, procedures and governance.  Its board members are elected by the community, which minimizes dominance by any single vendor.  This changes the dynamic completely, because market requirements have a different path by which they become product functionality.  Any developer can submit a code change for consideration, and companies can employ (at their own cost) any number of developers, thereby vying for influence by sheer number of heads.  Enterprise software companies like VMWare, HP, IBM and others are embracing OpenStack, but they also have significant software license revenue that is complementary and/or competitive with OpenStack.  This is not a bad thing, but it is an important nuance to understand for anyone considering a long-term commitment to OpenStack, because some product features may have genuine market pull, while others may be influenced by enterprise software vendors.  There are advantages to this as well.  For example, VMWare Integrated OpenStack (VIO) has strong integration with vSphere, vCloud Director and the entire vCloud Automation suite (recently renamed the vRealize Suite), and HP Helion OpenStack has strong integration with HP's cloud automation suite, including the Cloud Service Automation (CSA) and Operations Orchestration (OO) products.

The code development and management process is also a major consideration for any software user.  In the enterprise software world, the entire development process is owned by a single vendor, from code, build, test and QA to packaging and distribution, and it is assumed that any enterprise software company has safeguards and processes that ensure code quality and security.  With OpenStack, the development community is very large.  For the latest release (Kilo), some 1,500 developers merged over 19,500 patches and closed nearly 14,000 tickets, all in a six-month period.  The coding, QA and security processes have to be automated and very disciplined in order to support a developer community of this volume and size.  The OpenStack community has adopted modern DevOps and automation practices, and its code checks covering quality and security are more stringent than those most software companies have internally.  This is by necessity, and it is a very good thing for the user base, because the structure applied represents lower risk for users.



OpenStack Distributions:  What Are They and Why Does it Matter?

OpenStack is following a path to mainstream adoption similar to the one Linux took.  Companies have signed up to take the vanilla version ("Trunk"), integrate their own IP, provide QA, packaging and installation utilities, and offer support contracts.  At least 15 companies have entered this market, including HP, IBM, VMWare, Red Hat, Canonical, Mirantis, Piston and others.  Each company has a different approach to how it delivers OpenStack to market.  Some have architected it in a way that is easy to install and upgrade.  Piston, for example, has developed a powerful IP layer that enables OpenStack to run on commodity hardware, significantly reducing the cost of the infrastructure required to deploy.  Each company incorporates its own IP, QA process and packaging, resulting in different software from distributor to distributor.  For these reasons, it is best to develop your infrastructure roadmap and cloud strategy before picking a distributor.

Open Source Software Licensing Models & Agreements

Open source software has different models associated with it.  It is important to understand the different models and licensing agreements because they give you insight into many aspects of the solution lifecycle, such as:
  • Degree of vendor lock-in
  • Product management and how much input you will have
  • Availability of skills in the market place for that software

Models

Open core - Base functionality is open source, but additional features are license-based.  The vendor will sell support contracts for the open source portion and license/support for the additional functionality.

Open source - All code is completely open and available; the vendor will sell support contracts only.  OpenStack follows this model, but there is a rich ecosystem of technology companies that add complementary solutions, such as cloud management, software-defined networking (SDN), virtualization and much more.  Some of those are open source, others are proprietary.

Agreements and Terms


All open source software is released to the public under specific terms, in most cases referencing an existing open source license model.  There are specific differences between the licenses, and it is important to be aware of which model is being used, in order to minimize IP infringement risks and avoid unjustified charges.  Read your open source license agreement carefully!

Some of the common licenses include GPLv2, LGPL, BSD, MIT and Apache 2.0.  Note that OpenStack is distributed under the Apache 2.0 license.

Risks in Adopting OpenStack and How to Mitigate Them

As with any new technology, there are risks associated with OpenStack adoption. These can be significantly mitigated if they are known up front and thought through before beginning.

1.  Security - This topic is top of mind for almost every IT leader. There are two main areas to consider separately with respect to OpenStack: coding/development, and operations.  Coding and development concerns the code checks and QA processes that both the OpenStack community and the distributors must adhere to in order to ensure that no malicious code is inserted into the core OpenStack code set.  With 1,500 developers to keep track of, this is no small feat.  Automation software and tightened procedures have greatly reduced the risk associated with the code, but questions should be asked of any distributor as to how they address this issue.  On the operational side, a lot of work has been put in by the OpenStack development community over the last year to ensure that security is ready for the enterprise, including the identity management functionality of the Keystone project.  There is a separate committee in the OpenStack Foundation dedicated to security for applications and data.
Symantec Sues OpenStack Foundation
2.  IP Infringement - This issue is often ignored or not given the priority it deserves in the OpenStack world. The risk is the exposure that would come from a developer claiming that his/her proprietary code somehow made it into the OpenStack base code, causing OpenStack to infringe on their IP rights.

This scenario is real and has happened; the Symantec case demonstrates it.  If the claimant so chose, they could file suit against the entire OpenStack supply chain, starting with the Foundation and working through to the distributor, reseller and finally the end customer. If the claimant wins in court, an injunction could be granted, forcing users to cease using the software. The latter scenario is unlikely, but it should be factored into a risk analysis before adopting OpenStack.  One key step all end customers can take to minimize the exposure is to ask your distributor for indemnification against third-party IP infringement.  If a distributor provides indemnity, it shields the end user from liability and the cost of legal defense, which is definitely worth it!  As of the writing of this newsletter, HP is the only distributor that has publicly come forward and offered unlimited indemnity to customers for this.  Other distributors offer limited protection, some none. It is up to you as a customer and user to insist on it.

3.  Implementation - OpenStack is no different from an implementation perspective than any other new technology.  Implementation time, effort and cost are highly dependent upon how far business requirements differ from the out-of-box system, how many integration points there are, the level of expertise, and the size and stability of scope.  The best approach is to start small, implementing a limited scope and building from there.  Internal training of personnel is highly recommended, and you may look to outside consulting companies for assistance in getting started.  Outside help can not only help you set strategy and direction, but also accelerate learning of internal resources and cut implementation time and risk.  Use a phased approach and build complexity with each subsequent phase.  For example, you can start phase 1 with compute (Nova) and block/object storage (Cinder/Swift), which would provide provisioning and virtualization of OS and storage for apps running in your existing environment.  Phase 2 might add networking (Neutron) and expanded image storage (Glance).
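As a hedged sketch of what that phase 1 could look like in code, using the openstacksdk Python client (which post-dates this post; the cloud name, image, flavor and volume size are all placeholders):

```python
import openstack

# Phase 1 of the suggested rollout: compute (Nova) plus block storage (Cinder)
conn = openstack.connect(cloud="mycloud")  # placeholder entry in clouds.yaml

server = conn.create_server(
    "app-server-01",
    image="ubuntu-14.04",  # placeholder image name
    flavor="m1.small",     # placeholder flavor
    wait=True,             # block until the server is ACTIVE
)

volume = conn.create_volume(size=50, name="app-data", wait=True)  # 50 GB
conn.attach_volume(server, volume)  # expose the block device to the server
```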


There are tremendous gains to be had from adopting OpenStack as part of a full private cloud strategy.  Each IT organization is unique and you should approach it with your business requirements, constraints and current architecture in mind.  Understanding of OpenStack, its ecosystem and open source software is essential, however.  

As big as the advantages can be, it is critical to understand that OpenStack, as an open source component of your operation, is a different world than other proprietary enterprise software you might be used to.  Those who grasp the differences and leverage them to their advantage will be successful with OpenStack.



Why Automate Data Center Operations?

5/11/2015

As an IT leader, you are faced with supporting a growing business while budgets remain flat.  At the same time you are expected to increase the speed, quality and reliability of service – all as technology constantly evolves and changes.  Virtualization of the IT compute and storage infrastructure has revolutionized operations, increased productivity and resource utilization, and brought new agility to IT.  But virtualization has also created new challenges and problems.  Even after making virtualization an integral part of operations, many IT organizations find themselves asking “What else can we improve upon?”
The answer lies with the next logical stage in virtualization’s evolutionary path:  automate IT operations functions to increase productivity.  Regardless of the cloud architecture chosen (public, private, hybrid), automation of processes in the areas of Catalog/Request Management, Approvals, Chargeback, Provisioning (not only OS, but storage, network, database and application), Governance and Compliance yields a significant ROI for today’s IT organization.
Software tools that are used to automate IT processes have matured significantly in recent years.  They now cost less to implement and are easier to integrate into existing infrastructure and tools.  Superior integration capability means that previous investments in areas like virtualization can be preserved and a best-of-breed approach can be taken without forcing vendor lock-in.  Open source software like OpenStack™ has put tremendous downward pricing pressure on traditional enterprise software.  This means that the ROI of automating specific parts of IT Operations has changed in favor of the CIO, and what a short time ago might have been a significant financial commitment with high risk is now much less in both cost and risk.  Automation is feasible, affordable and carries much lower risk than even one year ago. 

This blog entry is based on an abbreviated version of a white paper published recently by Zefflin.  To obtain the full copy, click here.


What Processes Should I Automate?

Data center processes can be both numerous and complex.  Each process should be looked at in terms of the cost of automation (i.e., implementation and maintenance) versus the labor and other cost savings gained.  There is a set of core processes, however, that have a large impact on IT service speed, quality and repeatability, as outlined below:


  1. Request/Catalog Management - Enables self-service requesting of complex computing environments, resulting in better control over the standards used in both development and production, and faster response time, since requestors are not waiting for an administrator to analyze each requested environment.
  2. Approvals - Creates an audit trail of all requests and approvals, and brings transparency to the process, so requestors can see where approvals are stuck and how long they can expect them to take.
  3. Charge-Back/Show-Back - Enables cost accounting at a department level, which can be an improvement over public cloud providers by requiring less paperwork, such as expense reports and manual chargebacks.
  4. Provisioning (OS/Network/Storage/Database/Application) - Better control, reduction in human error, more efficient use of storage and server capacity, faster provisioning of complex computing environments and standardization of OS images are just some of the benefits of automating this process.
  5. Governance - Better control over computing environments during their lifecycle, resulting in reduced management cost.
  6. Compliance - Automatically flags out-of-compliance situations in key areas including PCI, internal security and ISO; helps identify previously unknown processes that result in non-compliance (such as application hotfixes); and provides flexibility in dealing with out-of-compliance situations (like opening a service management incident, routing to a person for correction, or automatically remediating and then notifying key personnel).
  7. General Policy Automation - This category features the use of orchestration solutions to automate numerous repeatable IT operations tasks, such as:

  • Automating password reset policy
  • Workflow integration with existing systems
  • Automating event remediation (e.g., app restart or server reboot; see the sketch below)
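As a sketch of that last item, a minimal remediation loop might poll an application health endpoint and bounce the service after repeated failures. All names here are placeholders: the health URL, the systemd unit and the thresholds would come from your own environment.

```python
import subprocess
import time

import requests

HEALTH_URL = "http://app.internal:8080/health"  # hypothetical health endpoint

def watch_and_remediate(max_failures: int = 3, interval: int = 30) -> None:
    """Poll the app's health check and restart the service if it stays unhealthy."""
    failures = 0
    while True:
        try:
            requests.get(HEALTH_URL, timeout=5).raise_for_status()
            failures = 0  # healthy again, reset the counter
        except requests.RequestException:
            failures += 1
            if failures >= max_failures:
                # Apply the well-known workaround: bounce the service
                subprocess.run(["systemctl", "restart", "myapp"], check=True)
                failures = 0  # an ITSM hook could also open an incident here
        time.sleep(interval)
```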

What Are Some Common Problems I Can Solve?

There are many day-to-day activities and tasks that are performed by system administrators and IT support staff.  The opportunities for automation are endless and the following describes just a few.


  • Operating System Provisioning - Once OS templates are built and the deployment process is automated, the IT infrastructure is better controlled, standards are more easily enforced and compliance is improved, all while increasing the productivity of existing staff.
  • DevOps and Automation - Like any process, once the workflow of code --> build --> test --> release is well defined, it can be automated.  Automation should not only strive to reduce manual effort and improve speed and quality; it should also facilitate coordination and communication between development and operations.
  • Automated Problem/Incident Remediation - Automation provides a way to record, track and measure incidents of unknown origin that have well-known workarounds or corrections, such as a memory leak requiring a periodic server restart.
  • Virtual Sprawl - What if you could monitor all your VMs in development, test and production and flag them when certain thresholds are reached (last login > 60 days, network traffic or compute load below a certain level)?  With that approach, you could proactively and automatically snapshot VMs that are not being used and tear them down, freeing up valuable development resources.
  • Password Reset - Most companies have specific password reset policies on both virtual and physical servers.  A typical policy might dictate that all passwords are changed every 90 days.  This whole process can be automated with a simple application of an orchestration tool, as sketched below.
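A minimal sketch of such a rotation over SSH with the paramiko library (hostnames, the admin account and key path are placeholders; it assumes passwordless sudo, and in practice the new passwords would go straight into a vault):

```python
import secrets

import paramiko

def rotate_root_passwords(hosts: list[str], key_file: str) -> dict[str, str]:
    """Set a fresh random root password on each server, returning the new values."""
    new_passwords = {}
    for host in hosts:
        password = secrets.token_urlsafe(16)  # random, policy-compliant value
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="admin", key_filename=key_file)
        # chpasswd reads "user:password" pairs from stdin (passwordless sudo assumed)
        stdin, stdout, _ = client.exec_command("sudo chpasswd")
        stdin.write(f"root:{password}\n")
        stdin.channel.shutdown_write()
        stdout.channel.recv_exit_status()  # wait for the command to finish
        client.close()
        new_passwords[host] = password  # store these in a vault in real use
    return new_passwords
```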


Download the white paper to learn more.


Today's Software Tools

There are many software tools on the market today, with new ones emerging regularly.  The list below is not comprehensive, but shows some of the industry leaders.  Tools vary greatly in maturity, cost and scalability, and in scope, from pure orchestration and DevOps tools to OpenStack distributions and full-scale Cloud Management Platforms.  The key is to select the right tools for your organization: ones that will minimize cost, risk and resource investment, while enabling you to grow the solution as your company grows.

For more details about software vendors and tools, download the white paper.

What Are My Peers Doing?

Today’s progressive and forward-thinking IT organizations are well past virtualization and templating of OS images.  They are investing in the next round of productivity increases, because they have to if they want to stay relevant and keep their companies competitive.  Their companies are growing while their IT budget as a percentage of company revenue is shrinking.  If they don’t automate, streamline and enable their administrators to do more (much more) with less, they know IT will eventually be the organization that inhibits company growth.  No CIO wants to be the subject of an analyst call.  Medium to large organizations are implementing full private cloud environments, from fully defined service catalogs to automated provisioning, compliance audits and policy-based governance of most computing environments, especially application development environments.


For more detail, download the white paper.

How Do I Integrate Data Center Automation into My Organization and Environment?

An automation strategy and plan go hand-in-hand with a cloud strategy.  A cloud strategy and architecture, whether private, public or hybrid, is an essential first step, but it is only part of the answer.


The following steps are essential in adopting an automation strategy.

1. Cloud strategy, architecture and roadmap.  It is important to understand what you will be working with before considering automation.  For example, choosing AWS as your primary platform provider may affect the choice of automation tools (like orchestration or server provisioning) and processes (like application provisioning or compliance).

2. Step back and look at all manual processes.  It is important to look at every manual process objectively, and equally essential to look at each process from an ROI perspective: how much do I have to invest in automating this process?  How much do I have to invest in maintaining it?  And how much labor can I save as a result?  Caution: pride of ownership and turf protection can influence the outcome of this review; it must be strictly objective.  Some processes may have to be adjusted or re-engineered, which adds to the cost.  Examples of simple processes to automate would be server root password reset or event remediation.  More complex processes might include application provisioning and configuration.

3. Develop short-, medium- and long-term strategies and objectives, with ROI expectations for each stage.  This will help prioritize and set expectations.  It is often good to start with short, quick-win automation projects to prove success and generate internal momentum for further investment in automation.  This planning should be done with a firm understanding of what is possible, feasible and risk-appropriate.

4. Identify software tools.  Today there is an incomprehensible number and variety of software tools, from open source to startups and well-established enterprise software companies, that purport to automate data center processes of all kinds, with new tools appearing on a weekly basis.  It is important to filter out the noise, cut through the hype and find what will work for your organization at a reasonable cost.  It is also crucial to determine whether you already own software that can be used, which will dramatically cut cost; for example, if your company has a EULA with an enterprise software company in place, you may already have access to some tools under the terms of that EULA.  A solid orchestration tool is essential, as orchestration is the centerpiece of data center process automation.  It should be flexible, allow custom workflows to be developed without extensive training, and have a large library of plug-ins or APIs that can be used to integrate with your existing applications, such as service desk, change management or DevOps tools.

5. Take a baseline for future comparison.  A baseline is essential in order to measure the progress and success of future automation efforts.  It should encompass metrics for cost and speed of service.


For more information about how to adopt Data Center Automation in your organization and environment, download the white paper.

Summary

Data center automation is not just an option anymore.  You, as an IT leader, must continually provide value at a lower cost.  In order for your IT organization to continue supporting a growing business, remain relevant and prepare for the future, automation has to be an essential part of the strategy.  We have outlined some of the possible approaches, challenges, benefits, risks and returns in this post.  Every IT organization is different and should develop an automation strategy and plan in line with the objectives, resources and constraints of its particular business.



Welcome to the Zefflin Blog

4/23/2015

My first topic is something that may affect all of us, mainly because the outcome could disrupt most IT organizations.  This has not been widely published, but VMWare has recently been sued for IP infringement by the Software Freedom Conservancy and an individual Linux developer, Christoph Hellwig. 

This comes apparently after several years of negotiation with VMWare.  The allegation is that VMWare used open source Linux code in ESXi.  If true, VMWare illegally charged license and maintenance fees (support excluded) for what should have been an open source product.  Christoph and SFC have filed in German court, where there is precedent for injunctions for software IP infringement with world-wide implications.  VMWare says publicly that there is no merit to the case.

After reading some of the details, I think the decision could go either way.  It is an open question.

What does this mean to you?  It depends on the outcome, of which there are several possibilities.  In my opinion, VMWare has the following options: 

VMWare’s Options:
  1. Develop a new ESX that does not use the infringing code – the effort required is not exactly clear, but it could be significant
  2. Settle with Christoph and SFC
  3. Go to court 
  4. Make ESXi open source

The GPLv2 open source license under which Linux is distributed says clearly that if you use the code and add your own, you are required to make the result open source.  There are ways around this rule, mainly by architecting the resulting product as a loosely coupled integration rather than embedded code (a “shim layer”).  VMWare chose not to take that route, however.  According to SFC, they embedded the open source code into their compiled version of ESXi – a direct violation of the GPLv2.

All indications are that VMWare is putting effort into all options at this point.  It has been reported that they have allocated development resources to work on replacing the infringing code.  Even if they are successful, they still face the massive effort of upgrading everyone, and a lot of customers are not going to embrace the idea of risking their entire virtualized production environment on a 1.0 of anything.  Even if they eventually replace the infringing code, they would still potentially be open to lawsuits from customers for all the back license and maintenance fees paid for something that was open source.  Regarding option 2, they have tried to settle with the plaintiffs, without success; plaintiffs say they were required to sign an NDA just to review the offer, and they declined.  In the unlikely event that they do settle, that still leaves them open to suits from hundreds of other Linux developers, who could get very litigious if they saw a settlement was reached.

Option 3 is to go to court.  This could be an option, but the stakes are very, very high.  If VMWare wins, this all goes away (assuming no appeal, and assuming no one else takes it up in another court in another country), and nobody’s life changes.  On the other side, the court could grant injunctive relief to the plaintiffs, meaning VMWare would be ordered to stop selling ESXi immediately, in addition to having to carry out any other associated court order.  There is precedent in German courts for injunctive relief for software infringement with world-wide implications.  If that happens, VMWare will likely be sued in other countries, including the United States.  They will also be vulnerable to a class-action lawsuit by all their ESX/vSphere customers, who could go after a refund of all the license and maintenance they paid (support charges being the only valid charges for open source), plus any damages and/or interest.  This could be significant.

The 4th option is to make ESXi open source.  This would settle the suit, but would still leave VMWare open to customer liabilities for all the license and maintenance paid before it was made open source.  Again, that’s a big pile of money.  It may still be a viable option: VMWare acknowledges the decline of vSphere sales and the threat OpenStack poses to the vSphere business in its most recent 10-K filing.  It may be pre-empting the inevitable.

[Figure: excerpt from VMWare’s 10-K filing]


It’s difficult to predict the outcome of the lawsuit at this point; this could go on for several years.  My guidance would be to watch this space and, if the time comes, make contingency plans for minimizing dependence on vSphere/ESXi.  OpenStack is gaining traction in both the SMB and enterprise markets as it matures, governance gets tighter and the ecosystem continues to evolve.  Eventually the market will realize that you can do with OpenStack what you can do with vSphere (and then some), and that since OpenStack is open source, the software is free and support will run you around $2k per year per physical server.

If you have any questions or would like to discuss anything further, please feel free to contact me directly.  I sincerely hope you made it this far in the newsletter and that this was valuable information.


