Tuesday, November 20, 2012

How to Avoid Application Failures in the Cloud: Part 4

This is the fourth in a series of five blog posts that examine how you can build cloud applications that are secure, scalable, and resilient to failures - whether the failures are in the application components or in the underlying cloud infrastructure itself. In this post we will look at application monitoring.

Monitoring


A key component of any successful application deployment — whether in the cloud or on premise — is the ability to know what is happening with your application at all times. This means monitoring the health of the application and being alerted when something goes wrong, preferably before it becomes noticeable to the application users. For on-premise applications, a wealth of solutions is available, such as HP’s Application Performance Management and Business Availability Center products. Most of the cloud infrastructure providers offer similar capabilities for your applications in the cloud. On Amazon EC2, application monitoring is provided by CloudWatch.

CloudWatch provides visibility into the state of your application running in the Amazon cloud and provides the tools necessary to quickly — and, in many cases, automatically — correct problems by launching new application instances or taking other corrective actions, such as gracefully handling component failures with minimal user disruption.

CloudWatch allows you to monitor your application instances using pre-defined and user-defined alerts and alarms. If an alarm threshold is breached for a specified period of time (such as more than three monitoring periods), CloudWatch will trigger an alert. The alert can be a notification, such as an email message or an SMS text message sent to a system administrator, or it can be a trigger to automatically take action to try to rectify the problem. For example, the alert might be the trigger for the EC2 auto-scaling feature to start up new application instances or to run a script to change some configuration settings (e.g. remap an elastic IP address to another application instance).
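To make this concrete, here is a sketch of the kind of alarm described above (a metric breaching a threshold for three consecutive monitoring periods), expressed with the AWS SDK for Python (boto3), which post-dates this post. The instance ID, SNS topic ARN, and threshold values are placeholders, not taken from the post.

```python
def cpu_alarm_params(instance_id, topic_arn):
    """Build parameters for a CloudWatch alarm that fires when average
    CPU exceeds 80% for three consecutive 5-minute periods."""
    return {
        "AlarmName": "high-cpu-" + instance_id,
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,               # one 5-minute monitoring period
        "EvaluationPeriods": 3,      # must be breached three periods in a row
        "Threshold": 80.0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn], # e.g. an SNS topic that emails/texts an admin
    }

def create_alarm(params):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    import boto3
    boto3.client("cloudwatch").put_metric_alarm(**params)
```

The same `AlarmActions` list could instead reference an auto-scaling policy, which is how an alarm becomes an automatic corrective action rather than a notification.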


In the final post we'll look at a real life example of how all of the features that I've described over the first four posts in the series are used to create a secure, scalable and resilient service offering. 

Friday, November 16, 2012

How to Avoid Application Failures in the Cloud: Part 3

This is the third in a series of five blog posts that examine how you can build cloud applications that are secure, scalable, and resilient to failures - whether the failures are in the application components or in the underlying cloud infrastructure itself. In this post we will look at disaster recovery.

Disaster Recovery


While security groups, elastic load balancing, and auto scaling are important for making your application secure, scalable, and reliable, these features alone do not protect you against an outage that affects a whole data center[1], like those experienced by Amazon in Virginia and Ireland. To do that, you also need disaster recovery protection. But before we look at disaster recovery solutions for Amazon’s EC2 cloud, we first need to discuss how EC2 is segmented into Regions and Availability Zones, and the relationship between the two.

Amazon EC2 is divided into geographical Regions (U.S. West, U.S. East, EU, Asia Pacific, and so on) that allow you to deploy your application in a location that is best suited for a given customer base or regulatory environment. 

Each region is divided into Availability Zones, which are defined by Amazon as “distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low latency network connectivity to other Availability Zones in the same Region. By launching instances in separate Availability Zones, you can protect your applications from failure of a single location.” Additionally, Amazon states that “…each Availability Zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of failures like generators and cooling equipment are not shared across Availability Zones. Additionally, they are physically separate, such that even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone.”

This disaster recovery strategy enables the Amazon EC2 infrastructure to survive a complete failure of a data center in one Availability Zone by recovering applications in another Availability Zone. The key functionality behind the Amazon EC2 recovery solution includes Elastic IP Addresses and Elastic Block Store snapshots and replication.

Elastic IP Addresses


Elastic IP addresses are actually static IP addresses that are specifically designed for the dynamic nature of cloud computing. Similar to a traditional static IP address, they can be mapped to an application instance or to an ELB instance to provide a fixed address through which users can connect to your application. However, unlike traditional static IP addresses, you can programmatically reassign an elastic IP address to a different target instance if the original instance fails. The new target instance can even reside in a different Amazon Availability Zone, thereby allowing your application to fail over to a new Availability Zone in the event of a complete Availability Zone outage.
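Programmatic reassignment of an elastic IP can be sketched with the AWS SDK for Python (boto3), a modern SDK that post-dates this post. The allocation ID and instance ID below are hypothetical placeholders.

```python
def remap_params(allocation_id, standby_instance_id):
    """Parameters to point an existing elastic IP at a standby instance,
    which may live in a different Availability Zone."""
    return {
        "AllocationId": allocation_id,      # identifies the elastic IP
        "InstanceId": standby_instance_id,  # the failover target
        "AllowReassociation": True,         # take the address away from the failed instance
    }

def fail_over(params):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    import boto3
    boto3.client("ec2").associate_address(**params)
```

Because the remap is a single API call, it can be driven by a monitoring alert rather than a human, which is what makes this useful for fast failover.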

Amazon EC2 Elastic Block Store (EBS)


The Elastic Block Store (EBS) is a block-level storage system designed for use with Amazon EC2 instances. EBS volumes are automatically replicated within a given Availability Zone to ensure reliability. You can also create EBS snapshots, or incremental backups, which can be stored in a different Availability Zone. EBS snapshots provide a simple mechanism for replicating and synchronizing data across different Availability Zones — a requirement for any enterprise-caliber disaster recovery solution.

The frequency of the EBS snapshot will depend on the nature of your data and the recovery period that you want to provide for the fail over. If your data frequently changes and you need your replicated data to be as current as possible, you will need more frequent snapshots. However, if your data is relatively static or you can live with a fail over situation that uses data that might be a bit stale (e.g. 30 minutes or an hour old), your EBS snapshots can be less frequent.
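A snapshot schedule like the one described (say, every 30 minutes for a 30-minute recovery point) can be sketched with the AWS SDK for Python (boto3), which post-dates this post; the volume ID is a placeholder, and the function would typically be invoked by a cron job or scheduler.

```python
def snapshot_params(volume_id, label):
    """Parameters for an incremental EBS snapshot of the given volume."""
    return {
        "VolumeId": volume_id,
        "Description": "periodic backup: " + label,
    }

def take_snapshot(params):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    # Run on a schedule (e.g. every 30 minutes) to bound how stale the
    # replicated data can be at failover time.
    import boto3
    snap = boto3.client("ec2").create_snapshot(**params)
    return snap["SnapshotId"]
```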

The combination of an elastic IP address and Elastic Block Store snapshots to support a disaster recovery solution is illustrated in Figure 3.


Figure 3 - Disaster Recovery Using Elastic IP Address and EBS Snapshots


[1] You can use the Elastic Load Balancing functionality to load balance across application instances that reside in different Amazon Availability Zones. While this can protect against the complete failure of an Availability Zone or data center, it introduces more complexity, such as real-time database synchronization across geographically distributed databases. If your application doesn’t require all application instances to be using a consistent data set, load balancing across Availability Zones might be a better option than a full disaster recovery solution. However, if you do require all application instances to be using the same consistent data set, it might be simpler to restrict your application to a single Availability Zone with a single data set and utilize a disaster recovery solution to protect against the complete failure of that Availability Zone.

Tuesday, November 13, 2012

How to Avoid Application Failures in the Cloud: Part 2

This is the second in a series of five blog posts that examine how you can build cloud applications that are secure, scalable, and resilient to failures - whether the failures are in the application components or in the underlying cloud infrastructure itself. In the first post, we looked at securing applications. In this post we will look at scalability and availability.

Scalability and Availability


In today’s multi-tiered application architectures, clustering and load-balancing[1] capabilities mean that scalability and availability often go hand-in-hand.

When applications are located on premise, you can configure load-balancing routers to spread connections and inbound traffic across multiple instances of an application in a cluster, providing better response times for users. Load balancing can also provide increased application availability, because the application is less susceptible to the failure of a single application instance. If one does fail, the load balancer can distribute the load over the remaining healthy instances in the cluster. Of course, some sessions or transactions might fail or be rolled back, but the application generally continues to operate unaffected by the instance failure.

Amazon EC2 Elastic Load Balancing (ELB)


Although you don’t have control of the hardware (e.g. routers) used in the Amazon EC2 cloud, you can still implement load balancing strategies for your applications using the Amazon Elastic Load Balancing (ELB) feature[2]. ELB allows you to load balance incoming traffic over a specified number of application instances, with automatic health-checking of each of the application instances. If an instance fails the health check, ELB will stop sending traffic to it.
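The load balancer and health check just described might be set up roughly as follows with the AWS SDK for Python (boto3), using the classic ELB API that corresponds to the feature discussed in this 2012 post. The balancer name, zones, and `/health` URL are illustrative placeholders.

```python
def listener():
    """A classic-ELB listener forwarding HTTP traffic on port 80 to the instances."""
    return {"Protocol": "HTTP", "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP", "InstancePort": 80}

def health_check():
    """Mark an instance unhealthy after 3 failed probes of its health page."""
    return {"Target": "HTTP:80/health",  # hypothetical health-check URL
            "Interval": 30, "Timeout": 5,
            "UnhealthyThreshold": 3, "HealthyThreshold": 2}

def create_balancer(name, zones):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    import boto3
    elb = boto3.client("elb")  # the classic ELB API
    elb.create_load_balancer(LoadBalancerName=name,
                             Listeners=[listener()],
                             AvailabilityZones=zones)
    elb.configure_health_check(LoadBalancerName=name,
                               HealthCheck=health_check())
```

Once the health check is configured, the "stop sending traffic to a failed instance" behavior is automatic; no application code is involved.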


Figure 2 – Amazon Elastic Load Balancing

Amazon EC2 Auto Scaling


The Amazon EC2 auto scaling feature can dynamically and automatically scale your applications — up or down — based on demand and other conditions (such as response time), so you only have to pay for the compute capacity you actually need and use. This is a case where cloud computing provides a clear cost advantage. If you wanted to be able to dynamically scale your on-premise applications, especially when using virtualization technologies such as VMware or the Xen hypervisor, you would first need to invest in and maintain excess server capacity to handle the peak application demand.

You can define your own Amazon auto-scaling rules to protect your application against slow response times or to ensure that there are enough “healthy” application instances running to guarantee application availability.
  • Availability: You can specify that you always need a minimum of, say, four application instances running to ensure availability to users. The auto-scaling feature will check the health of your application instances to ensure that you have the specified minimum number of instances running. If the number of healthy instances drops below the minimum threshold, the auto-scaling feature will automatically start the required number of instances to restore your application to a healthy state.
  • Response time: You can also specify auto-scaling rules based on application response times. For example, you can define a rule to start a new application instance if the response time of the application exceeds 4 seconds for a 15-minute period. If you are using ELB with your application instances, the newly started instances are added to your load balancing group so they can share the user load with the other healthy instances. 
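The two rules above, a four-instance availability floor and a scale-out trigger, might look roughly like this with the AWS SDK for Python (boto3), which post-dates this post. All names (group, launch configuration, ELB, zones) are hypothetical placeholders.

```python
def group_params():
    """An auto-scaling group that always keeps at least four healthy instances."""
    return {
        "AutoScalingGroupName": "app-asg",
        "LaunchConfigurationName": "app-launch-config",  # hypothetical name
        "MinSize": 4,    # the availability floor from the first rule
        "MaxSize": 12,
        "AvailabilityZones": ["us-east-1a", "us-east-1b"],
        "LoadBalancerNames": ["app-elb"],  # new instances join the ELB pool
        "HealthCheckType": "ELB",          # replace instances the ELB marks unhealthy
    }

def scale_out_policy():
    """Add one instance when a latency alarm (e.g. >4s for 15 minutes) fires."""
    return {
        "AutoScalingGroupName": "app-asg",
        "PolicyName": "scale-out-on-latency",
        "AdjustmentType": "ChangeInCapacity",
        "ScalingAdjustment": 1,
    }

def apply_scaling(params, policy):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    import boto3
    asg = boto3.client("autoscaling")
    asg.create_auto_scaling_group(**params)
    asg.put_scaling_policy(**policy)
```

The response-time rule itself would be a CloudWatch alarm on a latency metric whose action invokes the scaling policy.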

Summary


Given this brief description of load balancing and auto scaling within the Amazon EC2 cloud, you can see how these features can be applied to a multi-tiered application like the one illustrated in Figure 1 to improve scalability and availability. You can imagine that we could use ELB in front of each tier of the application — load balancing across the instances of each security group — and also apply auto-scaling rules to ensure that the application is resilient against an instance failure and can effectively respond to changes in user demand. We will examine a real-life example of combining security groups, load balancing, and auto scaling after we discuss disaster recovery in the next post.

[1] Load balancers provide a host of advanced functionality, including support for sticky user sessions, SSL termination (i.e. handling the SSL processing in the router), and multiple load balancing algorithms.

[2] Amazon ELB capabilities include SSL termination and sticky user sessions, enabling you to implement the same type of load balancing policies as you can with on-premise hardware-based load balancers.

Saturday, November 10, 2012

How to Avoid Application Failures in the Cloud: Part 1

This is the first of a series of five blog posts that examine how you can build cloud applications that are secure, scalable, and resilient to failures - whether the failures are in the application components or in the underlying cloud infrastructure itself.

When people think of “the cloud,” they tend to imagine an amorphous thing that is always there, always on. However, the truth is that the cloud — or, rather, applications running in the cloud — can suffer from failures just like those running on your on-premise systems. This became painfully clear in June 2012, when an electrical storm in the mid-Atlantic region of the United States knocked out power to an Amazon data center in Virginia, resulting in temporary outages to services such as Netflix and Instagram. Similarly, in 2011, a transformer failure in Dublin, Ireland, affected Amazon and Microsoft data centers, bringing down some cloud services for up to two days. And, as recently as October 2012, a problem with the storage component of the Amazon EC2 infrastructure caused disruptions for sites including Pinterest, reddit, TMZ, and Heroku.

As these examples show, the cloud itself is not immune to failures. But there are things you can do to protect your applications running in the cloud. In this series of blog posts, we will discuss some of the ways you can make your cloud applications more reliable and less prone to failures.

When looking at improving the resilience and reliability of your applications, you need to consider the following four factors:
  1. Security: Is your application protected against intrusion?
  2. Scalability and Availability: How can you make your application respond effectively to changing demand and, at the same time, protect against component failures?
  3. Disaster Recovery: What happens if, as in the examples above, an entire data center fails?
  4. Monitoring: How do you know when you have problems? And how can you respond quickly enough to prevent outages?
We will look at each of these factors in the context of an application running in the Amazon EC2 cloud infrastructure, as this is the environment in which Axway has the most experience. (Other cloud providers, such as Rackspace, provide similar capabilities.)

Security


Obviously, application security is very important to every organization. Preventing unwanted and unauthorized access to applications and data is critical because the consequences of a security breach, including potential data loss and exposure of confidential information, can be extremely costly in both financial and business terms.

When you are running applications in your own on-premise data center, your IT department can configure and manage security using well-tested methods such as firewalls, DMZs, routers, and secure proxy servers. They can create multi-layered security zones to protect internal applications, with each layer becoming more restrictive in terms of how and by whom it can be accessed. For example, the outer layer might allow access via certain standard ports (e.g. port 80 for HTTP traffic, port 115 for SFTP traffic, port 443 for secure HTTP traffic (SSL), and so on). The next layer might restrict inbound access to certain secure ports and only from servers in the adjacent layer — so, if you have a highly secure inner layer containing your database(s), you can allow access only via Port 1521 (the standard port used by Oracle database servers) and only from servers in the application layer.

When you move to the cloud, however, you are relying on others (the cloud infrastructure providers) to provide these security capabilities on your behalf. But even though you are outsourcing some of these security functions, you are not powerless when it comes to making your applications more secure and less susceptible to security breaches.

Amazon EC2 Security Groups


Amazon EC2 provides a feature called “security groups” that allows you to recreate the same type of security zone protection and isolation you can achieve with on-premise systems. You can use Amazon EC2 security groups to create a DMZ/firewall-like configuration, even though you don’t have access or control of the physical routers within the EC2 cloud. This allows you to isolate and protect the different layers of your application stack to protect against unauthorized access and data loss. Based on rules you define to control traffic, security groups provide different levels of protection and isolation within a multi-tier application by acting as a firewall for a specific set of Amazon EC2 instances. (See Figure 1)

 
Figure 1 - Amazon EC2 Security Groups

In this example, three different security groups are used to isolate and protect the three tiers of the cloud application: the web server tier, the application server tier, and the database server tier.
  • Web server security group: All of the instances of the web server are assigned to the WebServerSG security group, which allows inbound traffic on ports 80 (HTTP) and 443 (HTTPS) only — but from anywhere on the Internet. This makes the web server instances open to anyone who knows their URL, but access is restricted to the standard ports for HTTP and HTTPS traffic. This is typical practice for anyone configuring an on-premise web server. By defining security groups, you can have the same type of configuration in the Amazon EC2 cloud.
  • Application server security group: The AppServerSG security group restricts inbound application server access to those instances in the previously defined WebServerSG security group or to developers using SSH (port 22) from the corporate network. This illustrates a couple of important capabilities of security groups:
    1. You can specify other security groups as a valid source of inbound traffic.
    2. You can restrict inbound access by IP address.
    Specifying other defined security groups as a valid source of inbound traffic means that you can dynamically scale the web server group to meet demand by launching new web server instances — without having to update the application server security group configuration. All instances in the web server security group are automatically allowed access to the application servers based on the application server security group rule. Being able to restrict inbound access by IP address means that you can open ports within the security group, but only allow access by known (and presumably friendly) sources. In our example, we allow access to the application servers via SSH (for updates, etc.) only to developers connecting from the corporate network.
  • Database server security group: The DBServerSG security group is used to control access to the database server instances. Because this tier of the application contains the data, access is more restricted than the other layers. In our example, only the application server instances in the AppServerSG security group can access the database servers. All other access is denied by the security group filters. In addition to restricting access to the instances in the AppServerSG security group, you can also restrict the access to certain ports.  In our case, we’ve restricted access from the application servers so they can use only port 1521, the standard Oracle port.
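The three rule sets above can be sketched with the AWS SDK for Python (boto3), which post-dates this post. The group IDs, application port (8080), and corporate network range are hypothetical placeholders; only the ports named in the post (80, 443, 22, 1521) come from the example itself.

```python
def web_rules():
    """WebServerSG: HTTP and HTTPS from anywhere on the Internet."""
    return [{"IpProtocol": "tcp", "FromPort": p, "ToPort": p,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]} for p in (80, 443)]

def app_rules(web_sg_id, corporate_cidr):
    """AppServerSG: app traffic from the web tier, SSH from the corporate network."""
    return [
        {"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,  # hypothetical app port
         "UserIdGroupPairs": [{"GroupId": web_sg_id}]},          # another SG as source
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": corporate_cidr}]},              # restrict by IP range
    ]

def db_rules(app_sg_id):
    """DBServerSG: only the app tier, and only on the standard Oracle port."""
    return [{"IpProtocol": "tcp", "FromPort": 1521, "ToPort": 1521,
             "UserIdGroupPairs": [{"GroupId": app_sg_id}]}]

def authorize(group_id, permissions):
    # Requires boto3 and valid AWS credentials; shown for illustration only.
    import boto3
    boto3.client("ec2").authorize_security_group_ingress(
        GroupId=group_id, IpPermissions=permissions)
```

Note how `app_rules` and `db_rules` name a security group, not an IP address, as the traffic source; that is what lets the web tier scale without any rule changes.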

In the next blog post in this series, we'll look at scalability and availability.

Thursday, October 11, 2012

Engaging the Hybrid Cloud (Complete Post)

What is a “hybrid cloud”?

Is it 1) an environment where applications and processes exist both in the public and private cloud and on premise? Or is it 2) a combination public/private cloud without an on-premise component?

For the sake of this discussion, we’ll concede definition 1. Clarifying this concept is important because the vast majority of cloud-adopting organizations — which is to say the vast majority of organizations, period — are about to become hybrid-cloud-adopting organizations, and for good reason: they’re not ready to simply switch off their existing on-premise systems — legacy systems that already have significant business and operational value — and re-invent them in the cloud.

Let’s solidify this hybrid notion with a simple example of a business process nearly all organizations are familiar with: the HR onboarding process.
  1. Onboarding begins. A cloud-based recruiting system like Taleo is used to identify a candidate. When the candidate is hired, the business process moves from the cloud-based recruiting system to the on-premise HR system.
  2. Onboarding continues. The candidate is given systems access, login credentials, and an e-mail account. IT is cued to furnish the candidate with a laptop and other equipment. The office manager assigns the candidate an office space.
  3. Onboarding concludes. HR moves the business process back to the cloud by using a cloud-based performance-management system like SumTotal, where new-hire details are updated.
Cloud. On-premise. Cloud again.

This isn’t some supposed future scenario. This hybridized process is happening now, throughout most organizations, and in many other departments besides HR. To ensure the success of those departments in a hybrid cloud environment, organizations should address three key issues: security, service level agreements (SLAs), and application integration.

Security

The move to the cloud does mean that security and data privacy — something that was previously your IT department’s concern — is now your cloud provider’s concern. Yet it doesn’t mean your organization is absolved from ensuring that the cloud provider is doing its part. You need to demand that the cloud provider is clear about how they secure and protect your customers’, partners’, and employees’ data — both when it’s stored in the cloud and when it’s transferred to and from your on-premise systems.

A cloud-based application in isolation is reason enough for insisting on a clear understanding of how your cloud provider stores your data. Imagine, then, how imperative a clear understanding becomes when that cloud-based application is no longer isolated but integrated into a hybrid cloud environment. It’s now transferring data out into the world — perhaps from an Amazon data center in Europe or the Pacific Northwest to your offices on the other side of the globe. Or perhaps it’s transferring data to your trading partner’s systems, where you have much less control over security and protection.

This spawns several questions you should ask your cloud provider:
  • Is the data encrypted both when it’s in motion and at rest?
  • If cloud-application access is via an application programming interface (API), is the security token secured and encrypted when it’s used in the API core?
  • What’s the security token’s lifetime? Is it per-session or permanent?
  • How easily could this security token be hijacked and reused?
  • Is the security token tied to IP addresses?

Getting solid answers to important questions like these will ensure that the cloud part of your hybrid environment is always serving your business and never compromising the strength of its security profile.

SLAs

What is your cloud-based application’s availability and reliability? When an application is hosted on-premise, availability and reliability is your responsibility, and if it’s critical to business operations, you put a lot of effort into maintaining it.

Again, with the move to the cloud, this becomes the cloud provider’s concern, but you still need to keep in mind the application’s role in the bigger picture. How well would the business tolerate moments of application unavailability and unreliability?

For example, if a cloud-based HR application wasn’t available for a day or two, it probably wouldn’t impact a supermarket’s business process.

However, if a cloud-based supply-chain application wasn’t available for even an hour or two, it would wreak havoc on a supermarket’s business process. The lack of availability would mean a lack of deliveries, empty shelves, and loss of revenue.

A thorough SLA will communicate to your cloud provider in no uncertain terms which applications your business counts on the most, and what the consequences will be should those applications fail.

Application integration

In order to reap the benefits and realize the full potential of your new cloud applications, you must embrace the term “hybrid” by fully integrating them with your existing, on-premise applications and business processes.

Questions to ask include:
  • How are you going to get data into or out of the cloud application and into your on-premise systems?
  • Does the cloud application have an API and/or support on-demand exchange of data?
  • Does the cloud application have a scheduled exchange (e.g., daily updates instead of on demand)?
  • Does the cloud application support standards like Web services, XML, etc.?
Further, how will integrating cloud applications affect your existing business processes?

For example, if you move from an old, back-end integration to an on-demand, real-time integration, will this have a knock-on effect (i.e., a secondary effect) with other applications, especially your on-premise applications? How will the applications accommodate this effect (particularly in light of the fact that you actually have less flexibility when integrating applications in the cloud, as you have to work with the integration points provided by the cloud application itself, not the on-premise points you’ve provided)?

By considering the above three key issues and answering the questions surrounding them, the daunting implications of our initial question, “What is a ‘hybrid cloud’?” will diminish. Organizations that aren’t ready to simply switch off their existing on-premise systems and re-invent them in the cloud can rest assured that they aren’t losing anything from holding onto a legacy system. Instead, they can benefit from a new approach — one that draws on the incomparable agility of the public/private cloud and the time-tested security profile of on-premise systems — and enjoy enhanced business operations using a hybridized whole that’s truly greater than the sum of its parts.

(This post was first published at http://blogs.axway.com)


Wednesday, October 10, 2012

Engaging the Hybrid Cloud: Part 3: Application Integration

Application integration

In order to reap the benefits and realize the full potential of your new cloud applications, you must embrace the term “hybrid” by fully integrating them with your existing, on-premise applications and business processes.

Questions to ask include:
  • How are you going to get data into or out of the cloud application and into your on-premise systems?
  • Does the cloud application have an API and/or support on-demand exchange of data?
  • Does the cloud application have a scheduled exchange (e.g., daily updates instead of on demand)?
  • Does the cloud application support standards like Web services, XML, etc.?
Further, how will integrating cloud applications affect your existing business processes?

For example, if you move from an old, back-end integration to an on-demand, real-time integration, will this have a knock-on effect (i.e., a secondary effect) with other applications, especially your on-premise applications? How will the applications accommodate this effect (particularly in light of the fact that you actually have less flexibility when integrating applications in the cloud, as you have to work with the integration points provided by the cloud application itself, not the on-premise points you’ve provided)?

By considering the above three key issues and answering the questions surrounding them, the daunting implications of our initial question, “What is a ‘hybrid cloud’?” will diminish. Organizations that aren’t ready to simply switch off their existing on-premise systems and re-invent them in the cloud can rest assured that they aren’t losing anything from holding onto a legacy system. Instead, they can benefit from a new approach — one that draws on the incomparable agility of the public/private cloud and the time-tested security profile of on-premise systems — and enjoy enhanced business operations using a hybridized whole that’s truly greater than the sum of its parts.

(This post was first published at http://blogs.axway.com)

Monday, October 8, 2012

Engaging the Hybrid Cloud: Part 2: SLAs

SLAs

What is your cloud-based application’s availability and reliability? When an application is hosted on-premise, availability and reliability is your responsibility, and if it’s critical to business operations, you put a lot of effort into maintaining it.

Again, with the move to the cloud, this becomes the cloud provider’s concern, but you still need to keep in mind the application’s role in the bigger picture. How well would the business tolerate moments of application unavailability and unreliability?

For example, if a cloud-based HR application wasn’t available for a day or two, it probably wouldn’t impact a supermarket’s business process.

However, if a cloud-based supply-chain application wasn’t available for even an hour or two, it would wreak havoc on a supermarket’s business process. The lack of availability would mean a lack of deliveries, empty shelves, and loss of revenue.

A thorough SLA will communicate to your cloud provider in no uncertain terms which applications your business counts on the most, and what the consequences will be should those applications fail.

(TO BE CONTINUED)

(This post was first published at http://blogs.axway.com)

Sunday, September 30, 2012

Engaging the Hybrid Cloud: Part 1: Security

What is a “hybrid cloud”?

Is it 1) an environment where applications and processes exist both in the public and private cloud and on premise? Or is it 2) a combination public/private cloud without an on-premise component?

For the sake of this discussion, we’ll concede definition 1. Clarifying this concept is important because the vast majority of cloud-adopting organizations — which is to say the vast majority of organizations, period — are about to become hybrid-cloud-adopting organizations, and for good reason: they’re not ready to simply switch off their existing on-premise systems — legacy systems that already have significant business and operational value — and re-invent them in the cloud.

Let’s solidify this hybrid notion with a simple example of a business process nearly all organizations are familiar with: the HR onboarding process.
  1. Onboarding begins. A cloud-based recruiting system like Taleo is used to identify a candidate. When the candidate is hired, the business process moves from the cloud-based recruiting system to the on-premise HR system.
  2. Onboarding continues. The candidate is given systems access, login credentials, and an e-mail account. IT is cued to furnish the candidate with a laptop and other equipment. The office manager assigns the candidate an office space.
  3. Onboarding concludes. HR moves the business process back to the cloud by using a cloud-based performance-management system like SumTotal, where new-hire details are updated.
Cloud. On-premise. Cloud again.

This isn’t some supposed future scenario. This hybridized process is happening now, throughout most organizations, and in many other departments besides HR. To ensure the success of those departments in a hybrid cloud environment, organizations should address three key issues: security, service level agreements (SLAs), and application integration.

Security

The move to the cloud does mean that security and data privacy — something that was previously your IT department’s concern — is now your cloud provider’s concern. Yet it doesn’t mean your organization is absolved from ensuring that the cloud provider is doing its part. You need to demand that the cloud provider is clear about how they secure and protect your customers’, partners’, and employees’ data — both when it’s stored in the cloud and when it’s transferred to and from your on-premise systems.

A cloud-based application in isolation is reason enough for insisting on a clear understanding of how your cloud provider stores your data. Imagine, then, how imperative a clear understanding becomes when that cloud-based application is no longer isolated but integrated into a hybrid cloud environment. It’s now transferring data out into the world — perhaps from an Amazon data center in Europe or the Pacific Northwest to your offices on the other side of the globe. Or perhaps it’s transferring data to your trading partner’s systems, where you have much less control over security and protection.

This spawns several questions you should ask your cloud provider:
  • Is the data encrypted both when it’s in motion and at rest?
  • If cloud-application access is via an application programming interface (API), is the security token secured and encrypted when it’s used in API calls?
  • What’s the security token’s lifetime? Is it per-session or permanent?
  • How easily could this security token be hijacked and reused?
  • Is the security token tied to IP addresses?

Getting solid answers to important questions like these will ensure that the cloud part of your hybrid environment is always serving your business and never compromising the strength of its security profile.
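To make the token questions above concrete, here is a minimal sketch (Python, standard library only) of the design those questions probe for: a short-lived, HMAC-signed security token bound to a client IP address, so it expires per-session and cannot be replayed from another machine. Every name here is illustrative — this is not any specific cloud provider’s API.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # known only to the provider

def issue_token(client_ip: str, lifetime_s: int = 3600) -> str:
    """Issue a token tied to an IP address, with an expiry timestamp."""
    expires = int(time.time()) + lifetime_s
    payload = f"{client_ip}|{expires}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def validate_token(token: str, client_ip: str) -> bool:
    """Reject tokens that are forged, replayed from another IP, or expired."""
    try:
        ip, expires, sig = token.rsplit("|", 2)
    except ValueError:
        return False
    payload = f"{ip}|{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False              # forged or tampered token
    if ip != client_ip:
        return False              # hijacked and reused from another address
    return int(expires) > time.time()  # per-session lifetime, not permanent
```

A provider whose tokens work roughly like this can answer “yes” to the encryption, lifetime, hijacking, and IP-binding questions; one issuing permanent, unbound tokens cannot.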

(TO BE CONTINUED)

(This post was first published at http://blogs.axway.com)

Wednesday, August 29, 2012

An Example of User Innovation

In an earlier post, I talked about user innovation and how to harness this inventive power for your products. I want to give an example of user innovation that I came across while working at my previous company, SirsiDynix.

SirsiDynix builds software for libraries. Its products run many aspects of a library's operations, including the library's web presence. The website allows library users to search for library materials online, check their availability, and, if desired, reserve them for later pickup. This functionality is termed the OPAC (Online Public Access Catalog) and is a basic component of almost all library management systems. SirsiDynix also provides a feature-rich Web Services API for its library management system (called Symphony), which gives developers access to the system's data and functionality so they can enhance and extend the base product.
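To illustrate the kind of extension the Web Services API enables, here is a small Python sketch of building an OPAC catalog-search request. The base URL and parameter names are hypothetical — the real Symphony Web Services API defines its own endpoints; this only shows the shape of a call a developer might layer on top of it.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; a real deployment would use the library's own host.
BASE_URL = "https://library.example.org/symws/rest/standard"

def build_search_request(term: str, index: str = "GENERAL", hits: int = 20) -> str:
    """Construct a catalog-search URL against the (assumed) search endpoint."""
    params = {"searchTerm": term, "index": index, "hitsToDisplay": hits}
    return f"{BASE_URL}/searchCatalog?{urlencode(params)}"
```

A developer could then fetch that URL and render the results in a custom interface — exactly the enhance-and-extend pattern the API is intended for.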

The Role-Playing Game — How Enterprise IT Should Prepare for Cloud Adoption

I’m often asked, “What’s the biggest thing standing in the way of enterprise IT getting on board with cloud adoption?”

My response is always the same: “The biggest thing standing in the way of enterprise IT cloud adoption is IT’s unwillingness to accept that business units (BUs) are already adopting the cloud.”

By 2012, BUs had grown eager to flout IT authority and circumvent IT constraints in order to solve problems now, rather than watch their requests languish in IT’s backlog of special projects, hostage to unreasonable wait times.
Those days of waiting are over. IT now has two options: get on board or get left behind.

I’m seeing this exact scenario in our customer organizations as well. Customer BUs seek out solutions on their own, without ever involving IT in the preliminary decision-making process. They prefer instead to drag IT in at the very end and inform them of what is going to happen, rather than consult them about what may happen.

IT’s role has changed, whether it chooses to recognize it or not. Its long-standing position as “policy police” — arbiter of good taste in applications, judge of whether an application meets IT policy and corporate security standards — is coming to an end.

IT must face the fact that BUs are increasingly adopting the cloud, and support that move by:
• Becoming more aligned with the BUs and their goals;
• Providing security in the cloud;
• Managing service level agreements with cloud providers;
• Following escalation procedures;
• Advising the BUs on how — not whether — to adopt the cloud.

Don’t wait for cloud adoption to start getting your house in order. If IT stays in reactive mode as BUs make cloud decisions, they’ll end up with “integration” minus “strategy” — applications will be integrated on an ad hoc, project-by-project basis, creating a proliferation of point-to-point connections that is a repeat of the fragile, “spaghetti” integrations of the past.

IT must act now to get ahead of the curve — meaning ahead of BU demand — defining a solid integration strategy before the cloud apps start building out (or as early in that process as possible).

Does moving to the cloud mean that IT will lose some control? Yes. But I challenge them to be big-minded about it: Support BU adoption of the cloud, embrace your new role, shed your service-manager chrysalis and spread your trusted-adviser wings.

(This post was first published at http://blogs.axway.com)

Tuesday, August 21, 2012

Page One

By 2015, the cloud will become the preferred mechanism for software delivery, which means organizations everywhere will have more choices when selecting an application provider, and fewer reasons to maintain their own applications on-premise.

Compare that to 2012. Today, if you use on-premise business application suites like SAP or Oracle, you’re effectively tied to those applications.

But not so in 2015. By then, you’ll be able to be selective when it comes to components. In fact, that level of selectivity is already becoming commonplace: Today, organizations everywhere are moving away from big packaged business suites, and toward best-of-breed components for their CRM and HR applications (e.g., Salesforce and Workday, respectively).

But the organization’s new advantage of increased selectivity puts a burden on the IT department to manage multiple vendors, transforming their role from providers of on-premise services to managers of off-premise cloud applications. The increased complexity IT will have to manage will be substantial and will add new dimensions to their role. IT will have to be ready and able to:
• Work with different types of providers
• Enforce a host of widely disparate SLAs
• Evaluate varying levels of performance
• Prepare for different disaster-recovery scenarios
• Implement a variety of support and escalation processes
• Accommodate different subscription and billing models (whether transaction- or API-based)

Not all IT departments will be ready to manage this level of broadened responsibility, so they’ll consolidate all of their multiple vendor agreements with one of the many cloud brokers we can expect to appear on the scene – intermediaries who will integrate various applications and services, aggregate it all to create a single view, and manage the service vendors on behalf of the IT department.

We can even expect that cloud brokers may end up going beyond vendor-agreement stewardship to provide value-added services as well.

For example, it is not at all out of the question to expect that cloud brokers will map the data collected across the services their client organization has charged them to manage, integrate it with free services like Google Maps, and empower the client’s HR department to get a better idea of how the organization’s employees are spread out around the world – which could then inspire new, previously inconceivable tactics for employee-enablement initiatives.

The writing is on the wall: 2015 will mark the end of the cloud’s lengthy foreword and the start of its first chapter, and it will be exciting to witness, in 2013 and 2014, which organizations won’t be able to resist flipping to page one.

(This post was first published at http://blogs.axway.com)

Wednesday, August 8, 2012

Moving Up the Stack

The 2014 tier of Axway’s infographic, “The Cloud: Impact and Adoption – Predictions for Today and Tomorrow,” features a note that “SaaS vendors and enterprises (will) pressure IaaS vendors to ‘move up the stack’ to PaaS and provide management, security, regulatory and disaster recovery services.”

We see this happening already. Amazon, for instance, is adding services on top of its EC2 offering’s raw infrastructure, including data-storage capabilities via elastic block storage, described as “off-instance storage that persists independently from the life of an instance”; messaging capabilities that allow cloud applications to communicate with one another; and disaster recovery capabilities that ensure all data is safe and all applications have optimal uptime.
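The messaging capability mentioned above lets cloud components communicate without being directly wired to one another. Here is a minimal local sketch of that producer/consumer decoupling, using Python's stdlib queue as a stand-in for a real cloud messaging service such as Amazon's — the tier names and messages are purely illustrative.

```python
import queue
import threading

# Stand-in for a cloud message queue: the web tier enqueues work and the
# worker tier consumes it, so neither tier needs to know about the other.
message_queue = queue.Queue()

def web_tier(orders):
    """Producer: hand each order off to the queue and move on."""
    for order in orders:
        message_queue.put(order)      # fire-and-forget send

def worker_tier(processed):
    """Consumer: block until a message arrives, then process it."""
    while True:
        order = message_queue.get()
        if order == "STOP":           # sentinel to shut the worker down
            break
        processed.append(f"processed:{order}")

processed = []
worker = threading.Thread(target=worker_tier, args=(processed,))
worker.start()
web_tier(["order-1", "order-2", "STOP"])
worker.join()
```

Because the queue persists messages between the tiers, either side can fail and restart without losing the other — the same resilience property the hosted messaging services are selling.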

If you’re merely looking to use cloud applications like Salesforce (a CRM application) or Workday (an HR application), then this trend of consolidation—where large infrastructure players are adding more and more capabilities to their infrastructure offerings and becoming platform offerings—might not be so important to you, as all infrastructure and platform issues that might affect you are hidden behind your application.

But if you’re looking to move your own proprietary applications to the cloud, then you must consider the long-term potential of your cloud provider very carefully.

Taking advantage of one of the smaller PaaS vendors and building applications using their technology might be a tempting option. But keep in mind that it’s very likely that in two or three years, IaaS vendors that have successfully “moved up the stack” will add more and more of the smaller PaaS vendors’ capabilities to their basic offerings, forcing those vendors out of the market and sending those vendors’ clients scrambling to find new cloud homes.

What would you do if you found yourself in those clients’ shoes?

(This post was first published at http://blogs.axway.com)

Due Diligence

Cloud Industry Forum noted in its 2011 paper, “Cloud Adoption and Trends for 2012,” that when respondents were asked to name their biggest concerns around cloud adoption, they “… were clear that data security and privacy stood out above all others” — cited by 62 and 55 percent of respondents, respectively.

Those are understandable concerns. After all, when you move to the cloud, you’re entrusting the availability of key applications and the security and privacy of your data (including sensitive information about your customers and partners) to a third party.

What’s not so understandable is why those concerns should be inhibitors to adopting the cloud, since most cloud providers already recognize that, by 2013, security and penetration tests will be a requirement of cloud implementations.

Axway’s infographic, “The Cloud: Impact and Adoption – Predictions for Today and Tomorrow,” cites Gartner’s note that, “By 2016, 40 percent of enterprises will make proof of independent security testing a precondition for using any type of cloud service.” This makes perfect sense, but it raises an important question: Shouldn’t that 40 percent be asking for that proof today?

If your cloud provider isn’t willing to discuss their security analyses and penetration tests, your next action is simple — find a cloud provider who will.

Because while having concerns about any brave new world is understandable, denying your business countless advantages because of a lack of due diligence is not.

(This post was first published at http://blogs.axway.com)

With the Cloud, IT’s Best Days Lie Ahead

When the cloud becomes the primary operating model for the enterprise, the IT department’s role will change. It will no longer be a systems administrator, an arbiter of what the enterprise can and cannot have. Instead, it’ll be a service administrator, an agent who is there to help users get the most out of their cloud-based applications.

First, IT will need to change its view of security. Traditionally, IT has owned security and acted as the guardian of data and systems access. But with the move to the cloud, most security will be provided by the cloud provider, which means IT will have to act as a liaison between the business units (BUs) and the cloud provider, helping the former understand the security model of the latter, and helping the latter build a security model which takes the former’s particular needs into account.

Next, IT will need to reposition itself as a trusted advisor. In a stark reversal of the traditional view of IT as an inhibitor of productivity, IT will now be viewed as an agent who works in the best interests of the business units, as the business units will be free to choose the cloud-based applications they wish to use without consulting IT. Rather than heavy-handedly dictate which applications the BUs can and can’t use, IT will be tasked to passively suggest best practices for the BUs, and do everything in its power to ensure their success.

Further, IT will need to reconsider its key internal processes, things like helpdesks, support policies, and support procedures. Today, those processes are based on the premise that all processes are on-premise and within IT’s control. But with a third-party cloud provider involved, IT will find that there are limits to what it can do, that their existing model won’t necessarily accommodate cloud-based systems and applications. It will be imperative, then, for IT to fully own its role as a service administrator. Failing that, the BUs will be apt to bypass IT whenever they have a technical issue and instead go straight to the cloud provider, and IT will lose even more control.

Finally, IT will need to switch its thinking from “maintenance mode” to “strategic mode.” Today, some 80 percent of IT’s resources are focused on systems maintenance — merely keeping things up and running. But when the cloud becomes the primary operating model for the enterprise, and maintenance falls squarely on the shoulders of the cloud-service provider, IT will have the time to change its reputation in the organization. It should determine which critical business initiatives it can support, consider how emerging technologies can benefit the enterprise, and take this opportunity to become more proactive and less reactive.

Of course, some applications will likely never move to the cloud, and when it comes to those applications, IT’s role will remain unchanged. For example, a trading algorithm at a financial-services company — a proprietary application with high intellectual-property value — may simply be too integral to the value of the enterprise to ever comfortably host off-premise.

But most applications will find a home in the cloud, and it’s up to today’s IT departments to anticipate the coming paradigm shift and embrace the opportunities it will create for them, not the least of which will be the chance to gain a reputation for facilitating productivity and lose the reputation for inhibiting it.

(This post was first published at http://blogs.axway.com)