Solita Cloud tests – how do migration tools perform?

Public cloud providers (AWS, Azure and GCP) have been acquiring migration tools to provide fast ways to migrate workloads to the public cloud. As we have not found any study of these tools or hands-on experience reports, we decided to produce one with our skilled cloud team.

Background

Public cloud providers (AWS, Microsoft Azure and GCP) have been acquiring migration tools to provide fast ways to migrate existing workloads to the public cloud. As we have not found any comprehensive study of the tools or hands-on experience reports, we decided to produce one with our skilled cloud engineers and consultants.

We selected the tools covered in the next section for the following reasons:

  1. they are the tools preferred by the cloud vendors
    1. CloudEndure – AWS
    2. Azure Migrate: Server Migration – Microsoft Azure
    3. Migrate for Compute Engine (formerly Velostrata) – GCP
  2. customer demand for these tools has been high during 2019
  3. we need to understand the fundamental differences and restrictions of the tools

Our teams covered all public cloud providers and brought plenty of hands-on experience with the various migration strategies (https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/). The trick was that we cross-assigned the teams: AWS experts used the GCP tool, GCP experts the Azure tool, and so on.

Selected tools and experiences

CloudEndure (Acquired by Amazon Web Services) 

https://www.cloudendure.com/

Product description: 

CloudEndure offers highly automated disaster recovery and migration solutions into AWS.

With support for any source infrastructure and all applications running on supported operating systems, CloudEndure ensures that your entire IT landscape will remain robust and reliable as it continues to grow.

CloudEndure requires installing an agent on the operating system. Installation is easy and fast on both Linux and Windows; on a small Linux server it took less than one minute.

After the agent is installed, the machine appears on the CloudEndure website under the migration project. The initial data transfer takes about one minute per 1 GB of compressed and encrypted data in a moderate environment. The data is transferred to a CloudEndure replication instance via an AWS Elastic (public) IP on TCP port 1500. The replication instance is created automatically by CloudEndure in the customer’s target AWS account when the first agent connects to CloudEndure. The replication instance writes the data to EBS volumes (one volume per disk of each source machine) and takes a snapshot once the initial sync phase is completed.
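
Before installing agents at scale, it is worth verifying that the source servers can actually reach both the CloudEndure console (TCP 443) and the replication instance (TCP 1500). A minimal pre-flight sketch in Python; the replication instance address is a placeholder for the Elastic IP created in your target account:

```python
import socket

# (host, port) pairs the agent needs to reach. The Elastic IP below is a
# placeholder -- substitute the replication instance address from your project.
TARGETS = [
    ("console.cloudendure.com", 443),  # agent -> CloudEndure console
    ("203.0.113.10", 1500),            # agent -> replication instance (placeholder)
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable")
    except OSError as exc:
        print(f"{host}:{port} blocked: {exc}")
```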

When the initial phase is completed, the agent transfers only changes to the replication instance, and the user can launch the actual migrated instances in the AWS account. In CloudEndure you define a Blueprint (a template describing the new migrated instance): which IP address it should use, which subnet, which security groups, and so on. The new instance is created in a few minutes.
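
Blueprints can also be managed programmatically through CloudEndure’s REST API, which is handy when migrating tens of machines. The sketch below reflects our best understanding of the API: the login flow matches the public documentation, but the blueprint field names (instanceType, subnetIDs, securityGroupIDs) and all IDs are assumptions/placeholders that should be checked against the current API reference:

```python
import requests

API = "https://console.cloudendure.com/api/latest"

session = requests.Session()
# Log in with an API token generated in the CloudEndure console.
resp = session.post(f"{API}/login", json={"userApiToken": "<YOUR-API-TOKEN>"})
resp.raise_for_status()
# Subsequent calls need the XSRF token that the login returns as a cookie.
session.headers["X-XSRF-TOKEN"] = session.cookies.get("XSRF-TOKEN", "")

project_id = "<PROJECT-ID>"      # placeholder
blueprint_id = "<BLUEPRINT-ID>"  # placeholder

# Field names are assumptions based on the API documentation at the time of
# writing -- verify them against the current CloudEndure API reference.
patch = {
    "instanceType": "m5.large",             # override the auto-chosen EC2 type
    "subnetIDs": ["subnet-0123456789"],     # target subnet (placeholder)
    "securityGroupIDs": ["sg-0123456789"],  # target security groups (placeholder)
}
resp = session.patch(
    f"{API}/projects/{project_id}/blueprints/{blueprint_id}", json=patch)
resp.raise_for_status()
print("Blueprint updated:", resp.json())
```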

Experiences/Summary:

  • Migration tool without any big issues
  • Straightforward approach
  • We managed to migrate all source systems within given time frame
  • The algorithm that automatically chooses the target EC2 instance type isn’t spot on, but you can select the correct target instance type when defining the Blueprint.
  • CloudEndure migrates licenses to the Bring-Your-Own-License model in AWS. E.g. if the source machine has Windows Server with SQL Server, the migrated instance will be licensed as plain Windows Server, not as Windows Server with SQL Server.
  • We expect the CloudEndure console to be integrated into the AWS Console (look and feel, user logins, etc.).

Technical approach:

  • Agent-based approach with block-level copy of the disks (download and run the agent on the source server; no separate installation step is needed)
  • The tool is very scalable
  • All source machines are supported, as long as you can install the agent

Unique features:

  • Free of charge (when the migration target is AWS)
  • Continuous data replication

KPIs:

  • Setup of the tool (1 hour) – this was a really straightforward task
  • Time to implement first migration (30 minutes)
  • Migration of the source machines (1 hour)

Google Cloud with Migrate for Compute Engine (formerly Velostrata) 

https://cloud.google.com/migrate/compute-engine/

Product description: 

Cloud migration creates a lot of questions. Migrate for Compute Engine (formerly Velostrata) by Google Cloud has the answers. Whether you’re looking to migrate one application from on-premises or one thousand applications across multiple data centers and clouds, Migrate for Compute Engine gives IT teams the power to migrate these systems to Google Cloud.

Experiences/Summary:

  • There are major issues in the documentation of Migrate for Compute Engine
  • Some challenging phases in the migration (limits, naming issues, etc.)
  • Integration into Google Cloud Platform has only been partly implemented
  • The learning curve is quite steep (major usability challenges in the UI); you just have to know how to use the product
  • The REST API seems to be the best way to use the tool, which is, of course, the right way in any large-scale operation (a generic sketch follows below)
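
Since we do not want to misquote the actual endpoints here, the snippet below only illustrates the general pattern we used: authenticate against the Velostrata manager and poll a migration job until it finishes. Every URL path and field name is a hypothetical placeholder, not the real Migrate for Compute Engine API; consult the product’s API reference for the actual calls:

```python
import time

import requests

# All paths and field names below are HYPOTHETICAL placeholders -- look up
# the real endpoints in the Migrate for Compute Engine API reference.
MANAGER = "https://velostrata-manager.example.com"  # placeholder manager URL
AUTH = ("apiuser", "<password>")                    # placeholder credentials

def wait_for_job(job_id: str, poll_seconds: int = 30) -> dict:
    """Poll a (hypothetical) migration job until it leaves the RUNNING state."""
    while True:
        resp = requests.get(f"{MANAGER}/api/jobs/{job_id}", auth=AUTH, timeout=30)
        resp.raise_for_status()
        job = resp.json()
        if job.get("state") != "RUNNING":
            return job
        time.sleep(poll_seconds)

print(wait_for_job("<job-id>"))
```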

Technical approach:

  • Migration sources supported: VMware, AWS, Azure
  • Needs a worker in the source environment (for example, an AWS AMI is available from the Marketplace)

Unique features:

  • Run in cloud mode for transparent/seamless migration 
  • Agentless (block-mode-based migration approach)

KPIs:

  • Setup of the tool (4 hours) – this time will drop dramatically as we gain more experience
  • Time to implement first migration (1 hour)
  • Migration of the source machines (90 minutes)

Azure Migrate

https://azure.microsoft.com/en-us/services/azure-migrate/

Product description: 

Streamline your migration journey

Discover, assess, and migrate all your on-premises applications, infrastructure and data. Centrally plan and track the migration across multiple Microsoft and independent software vendor (ISV) tools.

Azure Database Migration Service (DMS) – a cloud- and agent-based migration solution for migrating data from sources such as SQL Server (on-premises or on virtual machines) to Azure database services.
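
Before pointing DMS at a target, it pays to check that the target Azure SQL database accepts connections with the migration credentials. A minimal sketch using pyodbc; the server, database and login are placeholders:

```python
import pyodbc  # pip install pyodbc; requires the Microsoft ODBC driver

# Placeholders -- substitute your target server and migration login.
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=tcp:<your-server>.database.windows.net,1433;"
    "DATABASE=<target-db>;UID=<migration-login>;PWD=<password>;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT @@VERSION;").fetchone()
    print("Target reachable:", row[0])
```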

Experiences/Summary:

  • Heavy capacity requirements (amount of memory, number of vCPUs)
  • Customizable assessment (infrastructure can be customized and not transferred as is)
  • Automation possibilities with push installation and automated deployment
  • Limited OS support (the AWS Linux kernel is not supported, but this is usually not a problem in VMware environments)
  • Relatively well documented, but UI/documentation/terminology are sometimes inconsistent
  • Community support lacking. Does anybody really use these tools?

Technical approach:

  • Discovery functionality limited
  • Agent-based approach (the agent needs to be installed on all AWS and physical servers)
  • Agentless features available for VMware vSphere
  • The configuration/process server pushes data to the Azure Migrate: Server Migration tool
  • Doesn’t include features for assessment / discovery of cloud workloads

Unique features:

  • No need to allow inbound traffic to the source environment from external sources (see the connectivity sketch below)
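
This outbound-only model means the main connectivity prerequisite on the configuration/process server is outbound TCP 443 towards Azure. A quick check; the two endpoints below are common illustrative examples, and Microsoft’s documentation lists the full set of URLs the appliance must reach:

```python
import socket

# Illustrative Azure endpoints -- check Microsoft's documentation for the
# complete list of URLs the migration appliance must be able to reach.
ENDPOINTS = ["management.azure.com", "login.microsoftonline.com"]

for host in ENDPOINTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}:443 reachable (outbound only, no inbound rules needed)")
    except OSError as exc:
        print(f"{host}:443 blocked: {exc}")
```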

KPIs:

  • Setup of the tool (3 hours)
  • Time to implement first migration (1 hour)
  • Migration of the source machines (1 hour)

Result / Verdict

The migration tools market is booming at the moment. There are plenty of tools available to assist migration, but all the tested tools still have major challenges. Migration tools can assist you in your migration challenge, but there is definitely a need for skilled experts who can analyze your current environment and propose a suitable migration approach. In most cases the preferred approach is re-factor/re-platform (especially if you already have a CI/CD pipeline built), but the tooling is there to help you. Azure Database Migration Service (DMS) is a really powerful tool for migrating MS SQL Servers to PaaS SQL.

Can we migrate your workloads and data, too?

Solita offers comprehensive end-to-end migration services for all major public cloud platforms. We are able to assist you with all migration needs, from refactoring to mass migration.

Read more from our reference case:

https://www.solita.fi/en/customers/changing-the-business-through-cloud-migration/

Interested in learning more? Please contact:

Petja Venäläinen
Head of Cloud Consulting
+358-40-5815666
petja.venalainen@solita.fi

#wemigratebigtime

No public cloud? Then kiss AI goodbye

What’s the crucial enabling factor that’s often missing from the debate about the myriad uses of AI? The fact that there is no AI without a proper backend for data (cloud data warehouses/data lakes) or without pre-built components, such as Cloud Machine Learning (ML) in Google Cloud Platform (GCP) and SageMaker in Amazon Web Services (AWS). In this cloud blog I will explain why public cloud offers the optimal solution for machine learning (ML) and AI environments.

Why is public cloud essential to AI/ML projects?

  • AWS, Microsoft Azure and GCP offer plenty of pre-built machine learning components. This helps projects build AI/ML solutions without requiring a deep understanding of ML theory, specialist AI knowledge or PhD-level data scientists (see the example after this list).
  • Public cloud is built for workloads that need peak CPU/IO performance. You pay for an effectively unlimited amount of computing power on a per-minute basis instead of investing millions in your own data centres.
  • Rapid innovation/prototyping is possible using public cloud – you can test and deploy early and scale up in the production if needed.
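
To make the pre-built-components point concrete: once a model has been deployed behind a SageMaker endpoint, consuming it is a few lines of boto3 and requires no ML theory at all. The endpoint name and payload below are hypothetical:

```python
import json

import boto3

# Placeholder endpoint name -- substitute a model endpoint deployed in your
# account (e.g. via one of SageMaker's built-in algorithms).
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="my-churn-model",           # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps({"features": [42, 0.7, 3]}),  # hypothetical payload
)
prediction = json.loads(response["Body"].read())
print(prediction)
```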

Public cloud: the superpower of AI

Across many types of projects, AI capabilities are being democratised. Public cloud vendors deliver products like SageMaker or Cloud ML that allow you to build AI capabilities into your products without a deep theoretical understanding. This means that a shortage of AI/ML scientists soon won’t be your biggest challenge. Projects can use existing AI tools to build world-class solutions such as customer support, fraud detection, and business intelligence.

My recommendation is to head towards data enablement: first invest in data pipelines, data quality, integrations, and cloud-based data warehouses/data lakes. Rather than relying on over-skilled AI/ML scientists, build up the essential twin pillars – cloud ops and a skilled team of data engineers.

Enablement – not enforcement

In my experience, many organisations have been struggling to transition to public cloud due to data confidentiality and classification issues. Business units have been driving the adoption of modern AI-based technology, while IT organisations have been pushing back due to security concerns. After plenty of heated debate, we have been able to find a way forward: the benefits of using public cloud components in advanced data processing have been so huge that IT has to find ways to enable the use of public cloud.

The solution to this challenge has proven to be proper data classification and the use of private on-premises facilities to support operations in the public cloud. Data location should be defined based on the data classification. Solita has been building secure but flexible automated cloud governance controls that enable business requests while keeping control in your hands and meeting the requirements usually defined by a company’s chief information security officer (CISO). Modern cloud governance is built on automation and enablement rather than enforcement of policies.
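
As a small example of enablement rather than enforcement, a scheduled audit that flags unclassified storage for review is often a better control than blocking resource creation outright. A sketch with boto3; the data-classification tag key is our own illustrative convention, not an AWS standard:

```python
import boto3
from botocore.exceptions import ClientError

REQUIRED_TAG = "data-classification"  # our own tagging convention (illustrative)

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        tags = s3.get_bucket_tagging(Bucket=name)["TagSet"]
        keys = {t["Key"] for t in tags}
    except ClientError:  # NoSuchTagSet -> the bucket has no tags at all
        keys = set()
    if REQUIRED_TAG not in keys:
        print(f"flag for review: s3://{name} has no {REQUIRED_TAG} tag")
```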

Conclusion

  • The pathway to effective AI adoption usually begins by kickstarting or boosting the public cloud journey and competence within the company.
  • Our recommendation – the public cloud journey should start with proper analysis and planning.
  • Solita is able to help with data confidentiality issues: classification, hybrid/private cloud usage and transformation.
  • Build cloud governance based on enablement and automation rather than enforcement.

Download a free Cloud Buyer's Guide