Make the most of Watson Language Translation on Bluemix

How many languages can you speak (sorry, fellow geeks; I mean human languages, not programming)?

Every day, people across the globe depend more on the Internet for their day-to-day activities, which increases the need for software to support multiple languages and accommodate a growing diversity of users. If you develop software, it is only a matter of time before you are asked to translate your applications.

Wouldn’t it be great if you could learn something new with just a few keystrokes, like Neo in The Matrix when he learns kung fu? Well, wish no more! I’ll show you how to teach your applications to speak multiple languages with just a few keystrokes using Watson’s Language Translation service, available through Bluemix. It provides on-the-fly translation between many languages, you pay only for what you use, and it’s consumable through web services, which means pretty much any application can connect to it; it’s platform and technology agnostic!

I’ll show you how easy it is to create a PHP program with language translation capabilities using Watson’s service.

Step 1: The client.

You can write your own code to interact with Watson’s Translation API, but why should you? The work is already done for you: you can pull in the client via Composer, the de facto dependency manager for PHP. Make sure you have Composer installed, then create a composer.json file with the following contents:
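As a minimal sketch of what such a composer.json could look like (the specific client package from the post is not reproduced here, so guzzlehttp/guzzle, a widely used PHP HTTP client, stands in as an illustrative dependency for calling the service’s REST API):

```json
{
    "require": {
        "guzzlehttp/guzzle": "^6.0"
    }
}
```

Running `composer install` in the same directory then downloads the dependency into `vendor/`, and your script can load it with `require 'vendor/autoload.php';`.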

 

Source: http://blog.softlayer.com/watson-bluemix-language-translation

Bringing the power of GPUs to cloud

The GPU was invented by NVIDIA back in 1999 as a way to quickly render computer graphics by offloading the computational burden from the CPU. A great deal has happened since then—GPUs are now enablers for leading edge deep learning, scientific research, design, and “fast data” querying startups that have ambitions of changing the world.

That’s because GPUs excel at manipulating computer graphics, at image processing, and at other computationally intensive high performance computing (HPC) applications. Their highly parallel structure makes them more effective than general-purpose CPUs for algorithms that process large blocks of data in parallel. Because GPUs can handle many calculations at the same time, they also have a major performance advantage. That is why SoftLayer (now part of IBM Cloud) has brought these capabilities to a broader audience.

We support the NVIDIA Tesla Accelerated Computing Platform, which makes HPC capabilities more accessible to, and affordable for, everyone. Companies like Artomatix and MapD are using our NVIDIA GPU offerings to achieve unprecedented speed and performance, traditionally only achievable by building or renting an HPC lab.

By provisioning SoftLayer bare metal servers with cutting-edge NVIDIA GPU accelerators, any business can harness the processing power needed for HPC. This enables businesses to manage the most complex, compute-intensive workloads—from deep learning and big data analytics to video effects—using affordable, on-demand computing infrastructure.

Take a look at some of the groundbreaking results companies like MapD are experiencing using GPU-enabled technology running on IBM Cloud. They’re making big data exploration visually interactive and insightful by using NVIDIA Tesla K80 GPU accelerators running on SoftLayer bare metal servers.

Source: http://blog.softlayer.com/2016/bringing-power-gpus-cloud

For a Limited Time Only: Free POWER8 Servers

So maybe you’ve heard that POWER8 servers are now available from SoftLayer. But did you know you can try them for free?

Yep. That’s right. For. Free.

Even better: We’re excited to extend this offer to our new and existing customers. For a limited time only, our customers can take up to $2,238 off their entire order using promo code FREEPOWER8.

That’s a nice round number. (Not!)

I bet you’re wondering how we came up with that number. Well, $2,238 gets you the biggest, baddest POWER8-est machine we offer: POWER8 C812L-SSD, loaded with 10 cores, 3.49GHz, 512GB RAM, and 2x960GB SSDs. Of course, if you don’t need that much POWER (pun intended), we offer three other configs that might fit your lifestyle a little bit better.

 

Source: http://blog.softlayer.com/2016/limited-time-only-free-power8-servers

Semantics: “Public,” “Private,” and “Hybrid” in Cloud Computing, Part II

Welcome back! In the second post in this two-part series, we’ll look at the third definition of “public” and “private,” and we’ll have that broader discussion about “hybrid”—and we’ll figure out where we go after the dust has cleared on the semantics. If you missed the first part of our series, take a moment to get up to speed here before you dive in.

Definition 3—Control: Bare Metal v. Virtual

A third school of thought in the “public v. private” conversation is actually an extension of Definition 2, but with an important distinction. In order for infrastructure to be “private,” no one else (not even the infrastructure provider) can have access to a given hardware node.

In Definition 2, a hardware node provisioned for single-tenancy would be considered private. That single-tenant environment could provide customers with control of the server at the bare metal level—or it could provide control at the operating system level on top of a provider-managed hypervisor. In Definition 3, the latter example would not be considered “private” because the infrastructure provider has some level of control over the server in the form of the virtualization hypervisor.

Under Definition 3, infrastructure provisioned with full control over bare metal hardware is “private,” while any provider-virtualized or shared environment would be considered “public.” With complete, uninterrupted control down to the bare metal, a user can monitor all access and activity on the infrastructure and secure it from any third-party usage.

Defining “public cloud” and “private cloud” using the bare metal versus virtual delineation is easy. If a user orders infrastructure resources from a provider, and those resources are delivered from a shared, virtualized environment, that infrastructure would be considered public cloud. If the user orders a number of bare metal servers and chooses to install and maintain his or her own virtualization layer across those bare metal servers, that environment would be a private cloud.
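The delineation above boils down to a simple rule, which can be captured in a few lines (a sketch of my own, not from the article; the function and parameter names are illustrative):

```python
# Definition 3 in code: an environment counts as "private cloud" only when
# the user has full bare metal control AND the virtualization layer (if any)
# is managed by the user, not the provider.
def classify(bare_metal: bool, provider_managed_hypervisor: bool) -> str:
    """Classify infrastructure under Definition 3 (control: bare metal v. virtual)."""
    if bare_metal and not provider_managed_hypervisor:
        return "private cloud"
    return "public cloud"

# Shared, provider-virtualized resources -> public cloud.
print(classify(bare_metal=False, provider_managed_hypervisor=True))   # public cloud
# Bare metal servers with a user-installed hypervisor -> private cloud.
print(classify(bare_metal=True, provider_managed_hypervisor=False))   # private cloud
```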

 

Source: http://blog.softlayer.com/2015/semantics-public-private-and-hybrid-cloud-computing-part-ii

Adventures in Bluemix: Migrating to MQ Light

One of my pet projects at SoftLayer is looking at a small collection of fancy scripts that scan through all registered Internet domain names to see how many of them are hosted on SoftLayer’s infrastructure. There are a lot of fun little challenges involved, but one of the biggest challenges is managing the distribution of work so that this scan doesn’t take all year. Queuing services are great for task distribution, and for my initial implementation I decided to give running a RabbitMQ instance a try, since at the time it was the only queuing service I was familiar with. Overall, it took me about a week and one beefy server to go from “I need a queue,” to “I have a queue that is actually doing what I need it to.”

While what I had set up worked, looking back, there is a lot about RabbitMQ that I didn’t really have the time to figure out properly. Around the time I finished the first run of this project, Bluemix announced that its MQ Light service would allow connections from non-Bluemix resources. So when I got some free time, I decided to move the project to a Bluemix-hosted MQ Light queue and take some notes on how the migration went.

Project overview

To better understand how much work was involved, let me quickly explain how the whole “scanning through every registered domain for SoftLayer hosted domains” thing works.

There are three main moving parts in the project:

  1. The Parser, which is responsible for reading through zone files (which are obtained from the various registrars), filtering out duplicates, and putting nicely formatted domains into a queue.
  2. The Resolver, which is responsible for taking the nicely formatted domains from queue #1, looking up each domain’s IP address, and putting the result into queue #2.
  3. The Checker, which takes the domains from queue #2, checks to see if the domains’ IPs belong to SoftLayer or not, and saves the result in a database.
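The three stages above can be sketched end to end with in-memory queues standing in for the MQ Light queues (everything here is illustrative: the IP prefixes, the fake DNS lookup, and the function names are my assumptions, not the project’s actual code):

```python
# Sketch of the Parser -> Resolver -> Checker pipeline. In the real project
# each stage would publish to / consume from an MQ Light queue; here plain
# queue.Queue objects play that role so the flow is easy to follow.
import queue

# Illustrative address prefixes; NOT an authoritative list of SoftLayer ranges.
SOFTLAYER_PREFIXES = ("198.23.", "50.97.")

def parser(raw_domains, out_q):
    """Stage 1: normalize domains, drop duplicates, enqueue the survivors."""
    seen = dict.fromkeys(d.strip().lower().rstrip(".") for d in raw_domains)
    for domain in seen:
        out_q.put(domain)

def resolver(in_q, out_q, lookup):
    """Stage 2: resolve each domain to an IP (lookup is injected for testing)."""
    while not in_q.empty():
        domain = in_q.get()
        out_q.put((domain, lookup(domain)))

def checker(in_q, results):
    """Stage 3: record whether each IP falls in one of the tracked ranges."""
    while not in_q.empty():
        domain, ip = in_q.get()
        results[domain] = any(ip.startswith(p) for p in SOFTLAYER_PREFIXES)

q1, q2, results = queue.Queue(), queue.Queue(), {}
fake_dns = {"example.com": "93.184.216.34", "hosted.example": "198.23.0.5"}
parser(["Example.com.", "example.com", "hosted.example"], q1)  # dupes collapse
resolver(q1, q2, fake_dns.get)
checker(q2, results)
print(results)  # {'example.com': False, 'hosted.example': True}
```

The payoff of splitting the work this way is that each stage can be scaled independently: slow DNS lookups, for instance, can be handled by running many Resolver workers against the same queue.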

Source: http://blog.softlayer.com/bluemix-migrating-mqlight

Tips from the Abuse Department: DMCA Takedown Notices

If you are in the web hosting business or you provide users with access to store content on your servers, chances are that you’re familiar with the Digital Millennium Copyright Act (DMCA). If you aren’t familiar with it, you certainly should be. All it takes is one client plagiarizing an article or using a filesharing program unscrupulously, and you could find yourself the recipient of a scary DMCA notice from a copyright holder. We’ve talked before about how to file a DMCA complaint with SoftLayer, but we haven’t talked in detail about SoftLayer’s role in processing DMCA complaints or what you should do if you find yourself on the receiving end of a copyright infringement notification.

The most important thing to understand when it comes to the way the abuse team handles DMCA complaints is that our procedures aren’t just SoftLayer policy — they are the law. Our role in processing copyright complaints is essentially that of a middleman. In order to protect our Safe Harbor status under the Online Copyright Infringement Liability Limitation Act (OCILLA), we must enforce any complaint that meets the legal requirements of a takedown notice. That DMCA complaint must contain specific elements and be properly formatted in order to be considered valid.

Responding to a DMCA Complaint

When we receive a complaint that meets the legal requirements of a DMCA takedown notice, we must relay the complaint to our direct customer and enforce a deadline for removal of the violating material. We are obligated to remove access to infringing content when we are notified about it, and we aren’t able to make a determination about the validity of a claim beyond confirming that all DMCA requirements are met.

Source: http://blog.softlayer.com/2013/tips-from-the-abuse-department-dmca-takedown-notices

Disaster Recovery in the Cloud: Are You Prepared?

While the importance of choosing the right disaster recovery solution and cloud provider cannot be overstated, having a disaster recovery runbook is equally important (if not more so). I have been involved in multiple conversations where the customer’s primary focus was implementing the best-suited disaster recovery technology, but the conversation about the DR runbook was either missing completely or lacked key pieces of information. Today, my focus will be to lay out a framework for what your DR runbook should look like.

“Eighty percent of businesses affected by a major incident either never re-open or close within 18 months.” (Source: Axa Report)

What is a disaster recovery runbook?

A disaster recovery runbook is a working document that outlines a recovery plan with all the necessary information required for execution of this plan. This document is unique to every organization and can include processes, technical details, personnel information, and other key pieces of information that may not be readily available during a disaster situation.

What should I include in this document?

As previously stated, a runbook is unique to every organization depending on the industry and internal processes, but there is standard information that applies to all organizations and should be included in every runbook. Below is a list of the most important information:

  • Version control and change history of the document.
  • Contacts with titles, phone numbers, email addresses, and job responsibilities.
  • Service provider and vendor list with point of contact, phone numbers, and email addresses.
  • Access Control List: application/system access and physical access to offices/data centers.
  • Updated organization chart.
  • Use case scenarios based on DR testing, i.e., what to do in the event of X, and the chain of events that must take place for recovery.
  • Alert and custom notifications/emails that need to be sent for a failure or DR event.
  • Escalation procedures.
  • Technical details and explanation of the disaster recovery solution (network layouts, traffic flows, systems and application inventory, backup configurations, etc.).
  • Application-based personnel roles and responsibilities.
  • Failover and failback procedures, including how to revert to the primary environment.
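One practical way to keep all of the above current is to maintain the runbook as a structured, version-controlled document. A hypothetical YAML skeleton along those lines (the field names and values are illustrative, not a standard):

```yaml
# Illustrative DR runbook skeleton -- field names are assumptions, not a standard.
runbook:
  version: "1.3"
  change_history:
    - { version: "1.3", date: 2016-05-01, author: J. Smith, summary: Updated escalation path }
  contacts:
    - { name: Jane Doe, title: DR Coordinator, phone: "+1-555-0100", email: jane@example.com }
  vendors:
    - { name: Cloud Provider, poc: Support Desk, phone: "+1-555-0199", email: support@example.com }
  access_control:
    systems: [vpn-gateway, backup-console]
    physical: [datacenter-badge-list]
  scenarios:
    - trigger: Primary data center outage
      steps: [Notify stakeholders, Fail over DNS, Promote replica database, Validate application]
  escalation: [on-call engineer, team lead, CTO]
  failback:
    steps: [Re-sync data to primary, Schedule maintenance window, Revert DNS]
```

Keeping the document in version control also satisfies the first checklist item (change history) for free.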

Source: http://blog.softlayer.com/2016/disaster-recovery-cloud-are-you-prepared