Semantics: “Public,” “Private,” and “Hybrid” in Cloud Computing, Part II

Welcome back! In the second post in this two-part series, we’ll look at the third definition of “public” and “private,” and we’ll have that broader discussion about “hybrid”—and we’ll figure out where we go after the dust has cleared on the semantics. If you missed the first part of our series, take a moment to get up to speed here before you dive in.

Definition 3—Control: Bare Metal v. Virtual

A third school of thought in the “public v. private” conversation is actually an extension of Definition 2, but with an important distinction: for infrastructure to be “private,” no one else (not even the infrastructure provider) can have access to a given hardware node.

In Definition 2, a hardware node provisioned for single-tenancy would be considered private. That single-tenant environment could provide customers with control of the server at the bare metal level—or it could provide control at the operating system level on top of a provider-managed hypervisor. In Definition 3, the latter example would not be considered “private” because the infrastructure provider has some level of control over the server in the form of the virtualization hypervisor.

Under Definition 3, infrastructure provisioned with full control over bare metal hardware is “private,” while any provider-virtualized or shared environment would be considered “public.” With complete, uninterrupted control down to the bare metal, a user can monitor all access and activity on the infrastructure and secure it from any third-party usage.

Defining “public cloud” and “private cloud” using the bare metal versus virtual delineation is easy. If a user orders infrastructure resources from a provider, and those resources are delivered from a shared, virtualized environment, that infrastructure would be considered public cloud. If the user orders a number of bare metal servers and chooses to install and maintain his or her own virtualization layer across those bare metal servers, that environment would be a private cloud.
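
To make the delineation concrete, here is a toy Python sketch that encodes Definition 3’s decision rule. The field names are hypothetical (they are not part of any SoftLayer API); the sketch simply expresses “provider-virtualized or shared means public, customer-controlled bare metal means private.”

```python
def classify_definition_3(multi_tenant: bool, provider_managed_hypervisor: bool) -> str:
    """Classify an environment as 'public' or 'private' per Definition 3."""
    # Anything shared, or anything running on a provider-managed hypervisor,
    # counts as public; only customer-controlled bare metal is private.
    if multi_tenant or provider_managed_hypervisor:
        return "public"
    return "private"

# A single-tenant server on a provider-managed hypervisor was "private" under
# Definition 2, but under Definition 3 it is still "public."
print(classify_definition_3(multi_tenant=False, provider_managed_hypervisor=True))   # public
print(classify_definition_3(multi_tenant=False, provider_managed_hypervisor=False))  # private
```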


Source: http://blog.softlayer.com/2015/semantics-public-private-and-hybrid-cloud-computing-part-ii

Adventures in Bluemix: Migrating to MQ Light

One of my pet projects at SoftLayer is looking at a small collection of fancy scripts that scan through all registered Internet domain names to see how many of them are hosted on SoftLayer’s infrastructure. There are a lot of fun little challenges involved, but one of the biggest challenges is managing the distribution of work so that this scan doesn’t take all year. Queuing services are great for task distribution, and for my initial implementation I decided to give running a RabbitMQ instance a try, since at the time it was the only queuing service I was familiar with. Overall, it took me about a week and one beefy server to go from “I need a queue” to “I have a queue that is actually doing what I need it to.”
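
For context, the core of that initial RabbitMQ setup boils down to something like the sketch below, using the pika client. The host and queue name are placeholders I chose for illustration, not the project’s actual configuration.

```python
import pika

# Connect to a RabbitMQ broker (host and queue name are placeholders).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="domains", durable=True)

# A producer pushes nicely formatted domain names onto the queue...
channel.basic_publish(exchange="", routing_key="domains", body="example.com")

# ...and a worker pulls them off, one message at a time.
method, header, body = channel.basic_get(queue="domains", auto_ack=True)
print(body)  # b'example.com'

connection.close()
```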

While what I had set up worked, looking back, there is a lot about RabbitMQ that I didn’t really have the time to figure out properly. Around the time I finished the first run of this project, Bluemix announced that its MQ Light service would allow connections from non-Bluemix resources. So when I got some free time, I decided to move the project to a Bluemix-hosted MQ Light queue and take some notes on how the migration went.

Project overview

To better understand how much work was involved, let me quickly explain how the whole “scanning through every registered domain for SoftLayer-hosted domains” thing works.

There are three main moving parts in the project:

  1. The Parser, which is responsible for reading through zone files (which are obtained from the various registrars), filtering out duplicates, and putting nicely formatted domains into a queue.
  2. The Resolver, which is responsible for taking the nicely formatted domains from queue #1, looking up each domain’s IP address, and putting the result into queue #2 (see the sketch after this list).
  3. The Checker, which takes the domains from queue #2, checks to see if the domains’ IPs belong to SoftLayer or not, and saves the result in a database.
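
To show how the Resolver bridges the two queues, here is a minimal sketch assuming the original RabbitMQ setup with the pika client; the queue names (“domains” and “resolved”) and the JSON message shape are my own placeholders rather than the project’s real schema.

```python
import json
import socket

import pika


def run_resolver(host: str = "localhost") -> None:
    """Consume domains from queue #1, resolve each one, and publish to queue #2."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue="domains", durable=True)   # queue #1, filled by the Parser
    channel.queue_declare(queue="resolved", durable=True)  # queue #2, read by the Checker

    def on_message(ch, method, properties, body):
        domain = body.decode()
        try:
            ip = socket.gethostbyname(domain)  # DNS lookup
        except socket.gaierror:
            ip = None  # the domain did not resolve
        ch.basic_publish(exchange="", routing_key="resolved",
                         body=json.dumps({"domain": domain, "ip": ip}))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="domains", on_message_callback=on_message)
    channel.start_consuming()


if __name__ == "__main__":
    run_resolver()
```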

Source : http://blog.softlayer.com/bluemix-migrating-mqlight

Tips from the Abuse Department: DMCA Takedown Notices

If you are in the web hosting business or you provide users with access to store content on your servers, chances are that you’re familiar with the Digital Millennium Copyright Act (DMCA). If you aren’t familiar with it, you certainly should be. All it takes is one client plagiarizing an article or using a filesharing program unscrupulously, and you could find yourself the recipient of a scary DMCA notice from a copyright holder. We’ve talked before about how to file a DMCA complaint with SoftLayer, but we haven’t talked in detail about SoftLayer’s role in processing DMCA complaints or what you should do if you find yourself on the receiving end of a copyright infringement notification.

The most important thing to understand when it comes to the way the abuse team handles DMCA complaints is that our procedures aren’t just SoftLayer policy — they are the law. Our role in processing copyright complaints is essentially that of a middleman. In order to protect our Safe Harbor status under the Online Copyright Infringement Liability Limitation Act (OCILLA), we must enforce any complaint that meets the legal requirements of a takedown notice. That DMCA complaint must contain specific elements and be properly formatted in order to be considered valid.

Responding to a DMCA Complaint

When we receive a complaint that meets the legal requirements of a DMCA takedown notice, we must relay the complaint to our direct customer and enforce a deadline for removal of the violating material. We are obligated to remove access to infringing content when we are notified about it, and we aren’t able to make a determination about the validity of a claim beyond confirming that all DMCA requirements are met.

Source : http://blog.softlayer.com/2013/tips-from-the-abuse-department-dmca-takedown-notices

Disaster Recovery in the Cloud: Are You Prepared?

While the importance of choosing the right disaster recovery solution and cloud provider cannot be overstated, having a disaster recovery runbook is equally important (if not more so). I have been involved in multiple conversations where the customer’s primary focus was the implementation of the best-suited disaster recovery technology, but the conversation regarding the DR runbook was either missing completely or lacked key pieces of information. Today, my focus will be to lay out a framework for what your DR runbook should look like.

“Eighty percent of businesses affected by a major incident either never re-open or close within 18 months.” (Source: Axa Report)

What is a disaster recovery runbook?

A disaster recovery runbook is a working document that outlines a recovery plan with all the necessary information required for execution of this plan. This document is unique to every organization and can include processes, technical details, personnel information, and other key pieces of information that may not be readily available during a disaster situation.

What should I include in this document?

As previously stated, a runbook is unique to every organization depending on the industry and internal processes, but there is standard information that applies to all organizations and should be included in every runbook. Below is a list of the most important information; a minimal skeleton follows the list:

  • Version control and change history of the document.
  • Contacts with titles, phone numbers, email addresses, and job responsibilities.
  • Service provider and vendor list with point of contact, phone numbers, and email addresses.
  • Access Control List: application/system access and physical access to offices/data centers.
  • Updated organization chart.
  • Use case scenarios based on DR testing, i.e., what to do in the event of X, and the chain of events that must take place for recovery.
  • Alert and custom notifications/emails that need to be sent for a failure or DR event.
  • Escalation procedures.
  • Technical details and explanation of the disaster recovery solution (network layouts, traffic flows, systems and application inventory, backup configurations, etc.).
  • Application-based personnel roles and responsibilities.
  • Failover/failback procedures and how to revert to normal operations.
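
As a starting point, the checklist above can also be captured as a machine-readable skeleton that lives in version control next to the DR documentation. The sketch below is one possible Python representation; every field name and value is a placeholder to adapt to your own organization, not a prescribed format.

```python
# An illustrative runbook skeleton; all names and values are placeholders.
runbook = {
    "version": "1.0",
    "change_history": [{"version": "1.0", "date": "2016-01-01", "summary": "Initial draft"}],
    "contacts": [{"name": "J. Doe", "title": "DR Coordinator", "phone": "+1-555-0100",
                  "email": "jdoe@example.com", "responsibility": "Declares a DR event"}],
    "vendors": [{"name": "Cloud provider", "phone": "+1-555-0199", "email": "support@example.com"}],
    "access_control": {"applications": ["VPN", "backup console"],
                       "physical": ["office badge", "DR data center badge"]},
    "scenarios": {"primary_site_down": ["notify contacts", "fail over database",
                                        "repoint DNS", "validate application"]},
    "notifications": ["status page update", "customer email template"],
    "escalation": ["on-call engineer", "DR coordinator", "CTO"],
    "failback": ["verify primary site health", "resync data", "repoint DNS", "confirm with stakeholders"],
}

# A simple completeness check to run before a DR test or audit.
required = {"version", "contacts", "vendors", "scenarios", "escalation", "failback"}
missing = required - runbook.keys()
print("Runbook sections complete" if not missing else f"Missing sections: {sorted(missing)}")
```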

Source : http://blog.softlayer.com/2016/disaster-recovery-cloud-are-you-prepared