Opening Up the Cloud

With OpenStack, cloud computing becomes easily accessible to everyone. It tears down financial barriers to cloud deployments and tackles the fear of lock-in. One of OpenStack’s main benefits is that it is open source and supported by a wide ecosystem, with contributions from more than 200 companies, including Canonical and IBM. Users can change service providers and hardware at any time, and compared to other clouds using virtualization technology, OpenStack can double server utilization to as much as 85 percent. This means that an OpenStack cloud is economical and delivers more flexibility, scalability, and agility to businesses. The challenge, however, lies in recruiting and retaining OpenStack experts, who are in high demand, making it hard for companies to deploy OpenStack on time and on budget. BootStack, Canonical’s managed cloud product, solves that problem by offering all the benefits of a private cloud without any of the pain of day-to-day infrastructure management.

Addressing the Challenge of Finding OpenStack Experts

Resourcing a six-strong OpenStack team to work 24×7 can cost between $900,000 and $1.5 million and take months of headhunting, eroding the savings OpenStack should bring. That is why Canonical created BootStack, short for Build, Operate, and Optionally Transfer. It is a new service for setting up and operating an OpenStack cloud, in both on-premises and hosted environments, and it gives customers the option of taking over the management of their cloud in the future.

After working with each customer to define their requirements and specify the right cloud infrastructure for their business, Canonical’s experienced engineering and support team builds and manages the entire cloud infrastructure of the customer, including Ubuntu OpenStack, the underlying hypervisor, and deployment onto hosted or on-premises hardware. As a result, users get all the benefits of a private cloud without any of the pain of day-to-day infrastructure management. For added protection, BootStack is backed by a clear SLA that covers cloud availability at the user’s desired scale as well as uptime and responsiveness metrics.

 

Source: http://blog.softlayer.com/2015/opening-cloud

Semantics: “Public,” “Private,” and “Hybrid” in Cloud Computing, Part I

What does the word “gift” mean to you? In English, it most often refers to a present or something given voluntarily. In German, it has a completely different meaning: “poison.” If a box marked “gift” is placed in front of an English-speaker, it’s safe to assume that he or she would interact with it very differently than a German-speaker would.

In the same way, simple words like “public,” “private,” and “hybrid” in cloud computing can mean very different things to different audiences. But unlike our “gift” example above (which would normally have some language or cultural context), it’s much more difficult for cloud computing audiences to decipher meaning when terms like “public cloud,” “private cloud,” and “hybrid cloud” are used.

We, as an industry, need to focus on semantics.

In this two-part series, we’ll look at three different definitions of “public” and “private” to set the stage for a broader discussion about “hybrid.”

“Public” v. “Private”

Definition 1—Location: On-premises v. Off-premises

For some audiences (and the enterprise market), whether an infrastructure is public or private is largely a question of location. Does a business own and maintain the data centers, servers, and networking gear it uses for its IT needs, or does the business use gear that’s owned and maintained by another party?

This definition of “public v. private” makes sense for an audience that happens to own and operate its own data centers. If a business has exclusive physical access to and ownership of its gear, the business considers that gear “private.” If another provider handles the physical access and ownership of the gear, the business considers that gear “public.”

 

Source: http://blog.softlayer.com/2015/semantics-public-private-and-hybrid-cloud-computing-part-i

The Importance of Data’s Physical Location in the Cloud

If top-tier cloud providers use similar network hardware in their data centers and connect to the same transit and peering bandwidth providers, how can SoftLayer claim to provide the best network performance in the cloud computing industry?

Over the years, I’ve heard variations of that question asked dozens of times, and it’s fairly easy to answer with impressive facts and figures. All SoftLayer data centers and network points of presence (PoPs) are connected to our unique global network backbone, which carries public, private, and management traffic to and from servers. Using our network connectivity table, some back-of-the-envelope calculations reveal that we have more than 2,500Gbps of bandwidth connectivity with some of the largest transit and peering bandwidth providers in the world (and that total doesn’t even include the private peering relationships we have with other providers in various regional markets). Additionally, customers may order servers with up to 10Gbps network ports in our data centers.

For the most part, those stats explain our differentiation, but part of the bigger network performance story is still missing, and to a certain extent it has been untold—until today.

The 2,500+Gbps of bandwidth connectivity we break out in the network connectivity table only accounts for the on-ramps and off-ramps of our network. Our global network backbone is actually made up of an additional 2,600+Gbps of bandwidth connectivity … and all of that backbone connectivity transports SoftLayer-related traffic.

This robust network architecture streamlines the access to and delivery of data on SoftLayer servers. When you access a SoftLayer server, the network is designed to bring you onto our global backbone as quickly as possible at one of our network PoPs, and when you’re on our global backbone, you’ll experience fewer hops (and a more direct route that we control). When one of your users requests data from your SoftLayer server, that data travels across the global backbone to the nearest network PoP, where it is handed off to another provider to carry the data the “last mile.”

With this controlled environment, I decided to undertake an impromptu science experiment to demonstrate how location and physical distance affect network performance in the cloud.
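
You can run a rough version of that experiment yourself. The sketch below is only an illustration: the hostnames are placeholders, not real SoftLayer endpoints, so substitute servers you control or speed-test endpoints in the regions you want to compare. It simply times a handful of small HTTP requests per region and prints the averages.

    <?php
    // Rough latency probe: time a small HTTP request against endpoints in
    // different regions and compare the averages. The hostnames are
    // placeholders -- replace them with servers you control or with
    // speed-test endpoints for each data center region you care about.
    $endpoints = [
        'dallas'    => 'http://dal.speedtest.example.test/',
        'amsterdam' => 'http://ams.speedtest.example.test/',
        'singapore' => 'http://sng.speedtest.example.test/',
    ];

    foreach ($endpoints as $region => $url) {
        $samples = [];
        for ($i = 0; $i < 5; $i++) {
            $start = microtime(true);
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 10);
            curl_exec($ch);
            curl_close($ch);
            $samples[] = (microtime(true) - $start) * 1000; // milliseconds
        }
        printf("%-10s avg round trip: %.1f ms\n", $region, array_sum($samples) / count($samples));
    }

Run from a single client location, the averages will generally track physical distance; run the same probe from clients in different regions and the pattern becomes clearer still: the shorter the path to the nearest network PoP, the lower and more consistent the round-trip times.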

Source: http://blog.softlayer.com/2015/importance-datas-physical-location-cloud

Streamlining the VMware licenses ordering process

IBM and VMware’s agreement (announced in February) enables enterprise customers to extend their existing on-premises workloads to the cloud—specifically, the IBM Cloud. Customers can now leverage VMware technologies in IBM’s worldwide cloud data centers, giving them the power to scale globally without incurring CAPEX while reducing security risks.

So what does this mean for customers’ VMware administrators? They can quickly realize cost-effective hybrid cloud capabilities by deploying into SoftLayer’s enterprise-grade global cloud platform (VMware@SoftLayer). One of these capabilities is that vSphere workloads and catalogs can be provisioned onto VMware vSphere environments within SoftLayer’s data centers without modification to VMware VMs or guests. The use of a common vSphere hypervisor and management/orchestration platform makes these deployments possible.

vSphere implementations on SoftLayer also enable the use of additional VMware components. Table 1 contains a list of VMware products that are now available for ordering through the SoftLayer customer portal. Note that prices are subject to change; visit VMware Solutions for the most current pricing.

 

Source: http://blog.softlayer.com/2016/streamlining-vmware-licenses-ordering-process

Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more and more on the Internet for their day-to-day activities, increasing the need for software to support multiple languages to accommodate a growing diversity of users. If you develop software, it is only a matter of time before you are asked to translate your applications.

Wouldn’t it be great if you could learn something with just a few keystrokes? Just like Neo in The Matrix when he learns kung fu. Well, wish no more! I’ll show you how to teach your applications to speak in multiple languages with just a few keystrokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages. You pay only for what you use, and it’s consumable through web services, which means pretty much any application can connect to it—and it’s platform and technology agnostic!

I’ll show you how easy it is to create a PHP program with language translation capabilities using Watson’s service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you. You can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:

 
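Whichever client package you end up requiring in composer.json, it helps to know what it does under the hood: the Language Translation service is consumed as an authenticated REST endpoint. Below is a dependency-free sketch using PHP’s built-in cURL functions. The endpoint URL, credentials, and request fields are placeholders and assumptions on my part, so check them against the service credentials Bluemix generates for your instance and the service’s API reference.

    <?php
    // Minimal sketch of a translation request. The endpoint, credentials,
    // and body fields below are placeholders/assumptions -- copy the real
    // values from your Bluemix service credentials and the API reference.
    $endpoint = 'https://YOUR-LANGUAGE-TRANSLATION-ENDPOINT/translate'; // placeholder
    $username = 'YOUR_SERVICE_USERNAME';                                // placeholder
    $password = 'YOUR_SERVICE_PASSWORD';                                // placeholder

    $payload = json_encode([
        'text'   => ['Hello, world!'], // text to translate
        'source' => 'en',              // assumed source-language field
        'target' => 'es',              // assumed target-language field
    ]);

    $ch = curl_init($endpoint);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERPWD        => $username . ':' . $password,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json', 'Accept: application/json'],
        CURLOPT_POSTFIELDS     => $payload,
    ]);

    $response = curl_exec($ch);
    curl_close($ch);

    if ($response !== false) {
        echo $response, PHP_EOL; // JSON containing the translated text
    }

The Composer client wraps this kind of call in a friendlier PHP interface, which is why pulling it in is the easier path for anything beyond a quick test.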

Source: http://blog.softlayer.com/watson-bluemix-language-translation

Bringing the power of GPUs to the cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs are now enablers for leading edge deep learning, scientific research, design, and “fast data” querying startups that have ambitions of changing the world.

That’s because GPUs are very efficient at computer graphics, image processing, and other computationally intensive, high-performance computing (HPC) applications. Their highly parallel structure makes them more effective than general-purpose CPUs for algorithms that process large blocks of data in parallel. Because they can handle multiple calculations at the same time, GPUs also have a major performance advantage. This is why SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

Source: http://blog.softlayer.com/2016/bringing-power-gpus-cloud

For a Limited Time Only: Free POWER8 Servers

So maybe you’ve heard that POWER8 servers are now available from SoftLayer. But did you know you can try them for free?

Yep. That’s right. For. Free.

Even better: We’re excited to extend this offer to our new and existing customers. For a limited time only, our customers can take up to $2,238 off their entire order using promo code FREEPOWER8.

That’s a nice round number. (Not!)

I bet you’re wondering how we came up with that number. Well, $2,238 gets you the biggest, baddest POWER8-est machine we offer: POWER8 C812L-SSD, loaded with 10 cores, 3.49GHz, 512GB RAM, and 2x960GB SSDs. Of course, if you don’t need that much POWER (pun intended), we offer three other configs that might fit your lifestyle a little bit better.

 

Source: http://blog.softlayer.com/2016/limited-time-only-free-power8-servers