Solita achieves Google Cloud specialization: Infrastructure – Services

Solita has achieved Google Cloud specialization, demonstrating our competency in building infrastructure and migrating workloads to the Google Cloud Platform.

Solita and Google share a longstanding partnership, with multiple significant real-time, data-intensive services running in production. Solita boasts a growing number of certified professionals and possesses unique skills in developing complete services on the Google Cloud Platform, not just in developing and operating the cloud infrastructure.

As a Google Cloud Platform Partner, we aim to accelerate the pace at which our clients can capitalize on digital opportunities.

Our expertise is evident in our CloudBlox offering, but Solita’s know-how extends beyond our cloud infrastructure services to encompass software development, data analytics, and integration services as well. This specialization, therefore, reflects Solita’s comprehensive capabilities as a whole, demonstrating our ability to provide a wide range of solutions for our clients.

We are excited about the prospect of further tightening and deepening our partnership with Google Cloud in the future. Our continued collaboration will undoubtedly bring more innovation, growth, and success for both parties while offering enhanced value to our clients.

Is cloud always the answer?

Now and then it may seem convenient to move an application to the cloud quickly. For those situations this blog post won't offer much help. But when the decision has not yet been made and a bit more analysis is required to justify the transformation, this post proposes a tool. We believe it is often wise to think through various aspects of cloud adoption before actually carrying it out.

For every application there comes a moment in its lifecycle when the question arises whether it should be modernised or merely updated slightly. The question is straightforward; the answer might not be, as both business and technological aspects have to be considered. Arriving at a rational answer is not an easy task. A cloud transformation should always be driven by a business need and be technologically feasible. Because it is difficult to form a holistic view of an application, there is often a temptation to make the decision hastily and just move forward. But neglecting rational analysis because it is difficult is not always the right path to follow. Success on the cloud journey requires guidance from business needs as well as technical knowledge.

To address this, companies can formalise a cloud strategy. For some, this is an excellent way to move forward: during the strategy work a holistic understanding is gathered and guidance for the next steps is identified. A cloud strategy also articulates why the cloud transition supports value generation and how it connects to the organisation's strategy. Sometimes, however, cloud strategy work is seen as too large and premature an activity, in particular when the cloud journey has not really started, the knowledge gap feels too vast to overcome, and structured utilisation of the cloud is hard to envision. Organisations may struggle to manoeuvre through the mist and find the right path on their cloud journey. There are expectations and there are risks. There are low-hanging fruits, but there may also be something scary ahead that does not even have a name yet.

Canvas to help the cloud journey

Benefits and risks should be weighed analytically before transferring an application to the cloud. Inspired by the Business Model Canvas, we came up with a canvas to structure the various aspects of the cloud transformation discussion. The Application Evaluation Canvas (AEC), presented in figure 1, guides the evaluation to take a wide range of aspects into account, from the current situation to the expectations placed on the cloud.

 


Figure 1. Application Evaluation Canvas

The main expected benefit is the starting point for any further considerations. There should be a clear business need and a concrete target that justifies the cloud journey for that application. That target also makes it possible to define the specific risks that might hinder reaching the benefits. Migration to the cloud and modernisation should always have a positive impact on the value proposition.

The left-hand side of the canvas

The left-hand side of the Application Evaluation Canvas addresses the current state of the application. The current state is evaluated from four perspectives: key partners, key resources, key activities and costs. The Key Partners section seeks answers to questions such as who currently works with the application; migration and modernisation activities will inevitably affect those stakeholders. In addition to the key partners, some resources might be crucial for the current application, for example in-house competences related to rare technical expertise. These crucial resources should be identified. Competences are not the only thing that matters: many activities are carried out every day to keep the application up and running, and understanding them makes the evaluation more meaningful and precise. Once key partners, resources and activities have been identified, a good understanding of the current state is established, but that is not enough. The cost structure must also be well known. Without knowledge of the costs related to the current state of the application, the whole evaluation is not on solid ground. Costs should be identified holistically, ideally covering not only direct costs but also indirect ones.

…and the right-hand side

On the right-hand side the focus is on the cloud and the expected outcome. The main questions concern the selection of the hyperscaler, the expected usage, awareness of how holistic a change the cloud transformation is, and naturally the exit plan.

The selection of the hyperscaler might be trivial when the organisation's cloud governance steers the decision towards a pre-selected cloud provider. But a lack of central guidance, autonomous teams or application-specific requirements may bring the hyperscaler selection onto the table. In any case, a clear decision should be made when evaluating the paths towards the main benefit.

The cloud transformation will affect the cost structure by shifting it from CAPEX to OPEX. A realistic forecast of usage is therefore highly important. Even though costs will follow usage, the overall cost level will not necessarily drop dramatically right away, at least not at the beginning of the migration. There will be a period when the current cost structure and the cloud cost structure overlap: CAPEX costs do not disappear immediately, while OPEX-based costs start accruing. Furthermore, the elasticity of OPEX might not be as smooth as predicted due to contractual issues; for example, annual pricing plans for SaaS can be difficult to change during the contract period.
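To make the overlap concrete, here is a minimal back-of-the-envelope sketch in Python. All figures and the linear migration pace are purely illustrative assumptions, not benchmarks from any real project:

```python
# Illustrative figures only: a 12-month migration window where the old
# data-centre costs (depreciation, maintenance) are still being paid while
# the cloud bill grows with the share of workloads already migrated.
capex_per_month = 10_000          # assumed remaining on-prem cost, flat
cloud_cost_full_load = 7_000      # assumed steady-state monthly cloud bill

for month in range(1, 13):
    migrated_share = month / 12                  # linear migration pace (assumption)
    opex = cloud_cost_full_load * migrated_share
    total = capex_per_month + opex               # overlap: both structures are paid
    print(f"month {month:2d}: on-prem {capex_per_month}, cloud {opex:,.0f}, total {total:,.0f}")
```

Even in this simplified model the total bill is higher than either structure alone until the old capacity can actually be decommissioned.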

The cost structure is not the only thing that changes after cloud adoption. The expected benefit will depend on several impact factors, such as success in organisational change management, finding the required new competences, or the application needing more than a lift-and-shift type of migration before the main expected benefit can be reached.

Don’t forget exit costs

The final section of the canvas addresses exit costs. Before any migration, exit costs should be discussed to avoid surprises if the change has to be rolled back. Exit costs often relate to vendor lock-in. Vendor lock-in itself is a vague topic, but it is crucial to understand that there is always some form of it. You cannot get rid of vendor lock-in with a multi-cloud approach; instead of a single-vendor lock-in you get a multi-cloud vendor lock-in. Likewise, the orchestration of microservices is vendor specific even if an individual microservice is transferable, and using some kind of cloud-agnostic abstraction layer creates a lock-in to that abstraction layer's provider. Cloud vendor lock-in is not the only kind of lock-in that has a cost. Using a rare technology inevitably ties the solution to that third party, and changing the technology may be very expensive or even impossible. Lock-in can also have an in-house flavour, especially when a competence is mastered by only a couple of employees. So the point is not to avoid all lock-ins, which is impossible, but to identify them and decide which type of lock-in is acceptable.
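For teams that want to keep the evaluation in a structured, versionable form, the canvas can be captured as a simple checklist object. The sketch below is our own Python illustration; the field names mirror the canvas sections described above, but the class itself is not part of any published tooling:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ApplicationEvaluationCanvas:
    """A checklist-style representation of the canvas (illustrative only)."""
    main_expected_benefit: str
    # Left-hand side: current state
    key_partners: List[str] = field(default_factory=list)
    key_resources: List[str] = field(default_factory=list)
    key_activities: List[str] = field(default_factory=list)
    current_costs: List[str] = field(default_factory=list)   # direct and indirect
    # Right-hand side: target state in the cloud
    hyperscaler: str = "undecided"
    expected_usage: str = ""
    impact_factors: List[str] = field(default_factory=list)
    exit_costs: List[str] = field(default_factory=list)       # incl. identified lock-ins
    risks: List[str] = field(default_factory=list)

# Hypothetical example entry for one application
canvas = ApplicationEvaluationCanvas(
    main_expected_benefit="Shorter release cycle for the web shop",
    key_partners=["hosting vendor", "in-house DBA team"],
    exit_costs=["managed database lock-in", "data egress fees"],
)
```

Filling in every field before the migration decision is essentially what the canvas asks you to do on paper.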

Conclusion

As a whole, the Application Evaluation Canvas helps to build a holistic understanding of the current state. Turning expectations into a more concrete form supports the decision-making process and shows how cloud adoption can be justified with business reasons.


Hybrid Cloud Trends and Use Cases

Let's look at the different types of cloud services and learn more about the hybrid cloud and how this cloud model can work in an organisation. We'll also try to predict the future a bit and talk about what hybrid cloud trends we are expecting.

As an IT person, I dare say that today we use the cloud without even thinking about it. All kinds of data repositories, social networks, streaming services, media portals – they work thanks to cloud solutions. The cloud now plays an important role in how people interact with technology.

Cloud service providers are inventing more and more features and functionalities, bringing them to the IT market. Such innovative technologies offer even more opportunities for organisations to run a business. For example, AWS, one of the largest providers of public cloud services, announces over 100 product or service updates each year.

Cloud services

Cloud technologies appeal to customers because of their cost efficiency, flexibility, performance and reliability.

For IT people, one of the most exciting aspects of using cloud services is the speed at which the cloud provides access to a resource or service. A few clicks at a cloud provider’s portal – and you have a server with a multi-core processor and large storage capacity at your disposal. Or a few commands on the service provider’s command line tool – and you have a powerful database ready to use.
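As an illustration of that "few commands" experience, the following Python sketch provisions a managed PostgreSQL database on AWS with boto3. The identifier, instance class, credentials and region are hypothetical placeholders; similar one-call provisioning exists on Azure and Google Cloud as well:

```python
import boto3

# Hypothetical example values; replace with your own identifiers and region.
rds = boto3.client("rds", region_name="eu-north-1")

rds.create_db_instance(
    DBInstanceIdentifier="demo-postgres",   # placeholder name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                   # GiB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",  # use a secrets manager in real life
)

# Block until the database is reachable (typically a few minutes).
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="demo-postgres")
print("Database is ready to use")
```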

Cloud deployment models

In terms of the cloud deployment model, we can identify three main models:

• A public cloud – The service provider has publicly available cloud applications, machines, databases, storage, and other resources. All this wealth runs on the IT infrastructure of the public cloud service provider, who manages it. The best-known players in the public cloud business are AWS, Microsoft Azure and Google Cloud.

In my opinion, one of the most pleasant features of a public cloud is its flexibility. We often refer to it as elasticity. An organisation can embark on its public cloud journey with low resources and low start costs, according to current requirements. 

Major public cloud players offer services globally. We can easily launch cloud resources in a geographical manner which best fits our customer market reach. 

For example, in a globally deployed public cloud environment, an organization can serve its South American customers from a South American data centre. A data centre located in one of the European countries would serve European customers. This greatly improves the latency and customer satisfaction.

There is no need to invest heavily in hardware, licensing, etc. – the organisation spends money over time and only on the resources it actually uses.

• A private cloud – This is an infrastructure for a single organisation, managed by the organisation itself or by a service provider. The infrastructure can be located in the company’s data centre or elsewhere.

The definition of a private cloud usually includes the IT infrastructure of the organisation’s own data centre. Most of these state-of-the-art on-premise solutions are built using virtualisation software. They offer the flexibility and management capabilities of a cloud.

Here, however, we should keep in mind that the capacity of a data centre is not unlimited. At the same time, a private cloud allows an organisation to implement its own standards for data security and to follow regulations where applicable. It also makes it possible to store data in a suitable geographical area in its own data centre, for example to achieve ultra-low latency.

As usual, everything good comes with trade-offs. Think how complex an activity it might be to expand a private cloud into a new region, or even a new continent. Hardware, connectivity, staffing, etc. – the organisation needs to take care of all of this in a new operating area.

• A hybrid cloud – an organisation uses both its data centre IT infrastructure (or its own private cloud) and a public cloud service. Private cloud and public cloud infrastructures are separate but interconnected.

Using this combination, an organisation can store sensitive customer data in an on-premises application in a private cloud, as required by regulation. At the same time, it can integrate this data with corporate business analytics software running in a public cloud. The hybrid cloud allows us to use the strengths of both cloud deployment models.

Hybrid cloud model

When is a hybrid cloud useful?

Before we dive into the hybrid cloud, I'd like to stress that we at Solita are devoted advocates of a cloud-first strategy, referring to the public cloud. At the same time, cloud-first does not mean cloud-only, and we recognise that there are use cases where running a hybrid model is justified, be it for regulatory reasons or very low latency requirements.

Let’s look at some examples of when and how a hybrid cloud model can benefit an organisation. 

Extra power from the cloud

Suppose that a company has not yet migrated to the public cloud, perhaps due to a lack of resources or cloud competence. It is running its private cloud in a colocation data centre. The private cloud operates at a satisfactory level as long as the load and resource demand remain stable.

However, the company's private cloud lacks the extra computing resources to handle future demand growth, and an increased load on the IT systems is expected due to an upcoming temporary marketing campaign. As a result of the campaign, the number of visitors to the organisation's public systems will increase significantly. How should this concern be addressed?

The traditional on-premises way used to be acquiring extra resources in the form of additional hardware: new servers, larger storage arrays, more powerful network devices, and so on. This requires additional capital investment, and just as importantly, adding the resources may not be fast.

The organisation must have the equipment delivered, installed and configured – and these jobs cannot always be automated to save time. After the load on the IT systems decreases at the end of the marketing campaign, the acquired additional computing power may no longer be needed.

Given the capabilities of the cloud, a better solution is to get additional resources from the public cloud. The public cloud allows this to be done flexibly and on demand, as much as the situation requires. The company spends and pays for resources only as it needs them, without large monetary commitments. Let the cloud adoption begin 😊

The organisation can access additional resources from the public cloud in hours or even minutes. We can order them programmatically, in an automated fashion and in advance, to match the timing of the marketing campaign.

When the time comes and there are many more visitors, the company will still keep the availability of its IT systems. They will continue to operate at the required level with the help of additional resources. This method of use is known as cloud bursting, i.e. resources “flow over” to another cloud environment.
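As a rough sketch of what "ordering resources programmatically in advance" can look like, the snippet below schedules extra EC2 capacity around a campaign window using an AWS Auto Scaling group. The group name, dates and sizes are hypothetical; the same idea can be expressed with Azure VM Scale Sets or GCP managed instance groups:

```python
import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Scale out before the campaign starts (all values are illustrative).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-frontend",            # hypothetical ASG name
    ScheduledActionName="marketing-campaign-start",
    StartTime=datetime(2024, 6, 1, 6, 0, tzinfo=timezone.utc),
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=8,
)

# Scale back in once the campaign ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-frontend",
    ScheduledActionName="marketing-campaign-end",
    StartTime=datetime(2024, 6, 15, 6, 0, tzinfo=timezone.utc),
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
)
```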

This is the moment when a cloud journey begins for the organization. It is an extremely important point of time when the organization must carefully evaluate its cloud competence. It needs to consider possible pitfalls on the road to cloud adoption. 

For an organisation, it is often effective to find a good partner to assist with cloud migration. The partner with verified cloud competence will help to get onto cloud adoption rails and go further with cloud migration. My colleagues at Solita have written a great blog post about cloud migration and how to do it right.

High availability and recovery

Implementing high availability in your data centre and/or private cloud can be expensive. As a rule, high availability means that everything must be duplicated – machines, disk arrays, network equipment, power supply, etc. This can also mean double costs.

An additional requirement can be to ensure geo-redundancy of the data and have a copy in another data centre. In such case, the cost of using another data centre will be added.

A good data recovery plan still requires a geographically duplicated recovery site to minimise risk. From the recovery site, a company can quickly get its IT systems back up and running in the event of a major disaster in the main data centre. Is there a good solution to this challenge? Yes, there is.

A hybrid cloud simplifies the implementation of high availability and a recovery plan at a lower cost. As in the scenario described above, this is often a good starting point for an organisation's cloud adoption. A good rule of thumb is to start small and expand your public cloud presence in controlled steps.

A warm disaster recovery site in the public cloud allows us to use cloud resources sparingly and without capital investment. Data is replicated from the main data centre and stored in the public cloud, but bulky computing resources (servers, databases, etc.) are turned off and do not incur costs.

In an emergency, when the main data centre is down, the resources at the warm disaster recovery site are turned on quickly – either automatically or at the administrator's command. Because the data already exists at the replacement site, the switchover is relatively quick and the IT systems have minimal downtime.
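A minimal failover sketch, assuming the warm standby consists of pre-provisioned but stopped EC2 instances tagged for disaster recovery (the tag key, tag value and region below are our own illustrative choices):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # hypothetical DR region

# Find the pre-provisioned (stopped) recovery instances by tag.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Role", "Values": ["dr-warm-standby"]},
        {"Name": "instance-state-name", "Values": ["stopped"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    # Start the standby fleet and wait until it is running.
    ec2.start_instances(InstanceIds=instance_ids)
    ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
    print(f"Started {len(instance_ids)} recovery instances")
```

In a real setup this would be triggered by monitoring or a runbook, and DNS or load-balancer changes would follow.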

Once there is enough cloud competence on board, the organisation can move towards a cloud-first strategy. Eventually it can switch its public cloud recovery site to be the primary site, while the recovery site moves to the on-premises environment.

Hybrid cloud past and present

For several years, the public cloud was advertised as a one-way ticket. Many assumed that organisations would either move all their IT systems to the cloud or continue in their own data centres as they were. It was like there was no other choice, as we could read a few years ago.

As we have seen since then, this paradigm has now changed. It’s remarkable that even the big cloud players AWS and Microsoft Azure don’t rule out the need for a customer to design their IT infrastructure as a hybrid cloud.

Hybrid cloud adoption

Organisations have good reasons why they cannot always move everything to a public cloud. Reasons might include an investment in the existing IT infrastructure, some legal reasons, technical challenges, or something else.

Service providers are now rather favouring the use of a hybrid cloud deployment model. They are trying to make it as convenient as possible for the customer to adopt it. According to the “RightScale 2020 State of the Cloud” report published in 2020, hybrid cloud is actually the dominant cloud strategy for large enterprises:

Hybrid cloud is the dominant strategy

Back in 2019, only 58% of respondents preferred the hybrid cloud as their main strategy. There is a clear signal that the hybrid cloud offers the strengths of several deployment models to organisations. And companies are well aware of the benefits.

Cloud vendors vs Hybrid

How do major service providers operate on the hybrid cloud front? Microsoft Azure came out with Azure Stack – a service that is figuratively speaking a public cloud experience in the organisation’s own data centre.

Developers can write the same cloud-native code. It runs in the same way both in the public Azure cloud and in a "small copy" of Azure in the enterprise's data centre. It gives a real cloud feeling, like a modern extension to a good old house that has become too small for the family.

Speaking of multi-cloud strategy as mentioned in the above image, Azure Arc product by Microsoft is worth mentioning, as it is designed especially for managing multi-cloud environments and gaining consistency across multiple cloud services.

AWS advertises its hybrid cloud offering portfolio with the message that they understand that not all applications can run in the cloud – some must reside on customers' premises, on their own machines, or in a specific physical location.

A recent example of hybrid cloud thinking is AWS’s announcement of launching its new service ECS Anywhere. It’s a service that allows customers to run and manage their containers right on their own hardware, in any environment, while taking advantage of all the ECS capabilities that AWS offers in the “real” cloud to operate and monitor the containers. Among other things, it supports “bare” physical hardware and Raspberry Pi. 😊

As we’ve also seen just recently, the next step for Amazon to win hybrid cloud users was the launch of EKS Anywhere – this allows customers using Kubernetes to enjoy the convenience of a managed AWS EKS service while keeping their containers and data in their own environment, on their own data centre’s machines.

As we see, public cloud vendors are trying hard with their hybrid service offerings. It’s possible that after a critical threshold of hybrid services users is reached, it will create the next big wave of cloud adoption in the next few years. 

Hybrid cloud trends

The use of the hybrid cloud services mentioned above assumes that there is cloud competence in the organisation. These services integrate tightly with the public cloud, and it is important to have the skills to manage them correctly in a cloud-native way.

I think the general trend in the near future is that the hybrid cloud is here to stay, and the multi-cloud strategy as a whole will grow even bigger. Service providers will assist customers in deploying a hybrid cloud while maintaining a "cloud native" ecosystem, so that the customer has the same approach to developing and operating their IT systems, regardless of whether an IT system runs in a "real" cloud or on a hybrid model.

The convergence of public, private and hybrid models will continue, with the public cloud leading the cloud-first festival. Cloud competence and the skills around it will become more and more important. Modern infrastructure will no longer be achievable without leveraging the public cloud.

Solita Cloud tests – how do migration tools perform?

Public cloud providers (AWS, Azure and GCP) have been acquiring migration tools to provide fast ways to migrate workloads to the public cloud. As we have not found any study of the tools or hands-on experience reports, we decided to do one with our skilled cloud team.

Background

Public cloud providers (AWS, Microsoft Azure and GCP) have been acquiring migration tools to provide fast ways to migrate existing workloads to the public cloud. As we have not found any comprehensive study of the tools or hands-on experience reports, we decided to do one with our skilled cloud engineers and consultants.

We selected the tools listed in the next section for the following reasons:

  1. they are preferred tools by cloud vendors
    1. CloudEndure – AWS
    2. Azure Migrate: Server Migration – Microsoft Azure
    3. Migrate for Compute Engine (formerly Velostrata)  – GCP
  2. customer demand for tools has been high during 2019
  3. we need to understand fundamental differences/restrictions of the tools

Our teams included skills from all public cloud providers and plenty of migration experience with various methods (https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/). The trick was that we formed the teams so that AWS experts used the GCP tools, GCP experts used the Azure tools, and so on.

Selected tools and experiences

CloudEndure (Acquired by Amazon Web Services) 

https://www.cloudendure.com/

Product description: 

CloudEndure offers highly automated disaster recovery and migration solutions into AWS.

With support for any source infrastructure and all applications running on supported operating systems, CloudEndure ensures that your entire IT landscape will remain robust and reliable as it continues to grow.

CloudEndure requires the installation of an agent in the operating system. The installation is easy and fast for Linux and Windows operating systems; for a small Linux installation it took less than one minute.

After the agent is installed, the machine appears on the CloudEndure website under the migration project. The first initial data transfer takes about 1 minute per 1 GB of compressed and encrypted data in a moderate environment. The data is transferred to the CloudEndure replication instance via an AWS Elastic (public) IP on TCP port 1500. The replication instance is automatically created by CloudEndure in the customer's target AWS account when the first agent connects to CloudEndure. The replication instance writes the data to EBS volumes (one volume per disk of each machine) and takes a snapshot after the initial sync phase is completed.

When the initial phase is completed, the agent transfers only the changes to the replication instance, and the user can launch the actual migrated instances in the AWS account. In CloudEndure you define a Blueprint (a template describing the new migrated instance): which IP address it should use, which subnet, which security groups, and so on. The new instance is created in a few minutes.

Experiences/Summary:

  • Migration tool without any big issues
  • Straightforward approach
  • We managed to migrate all source systems within given time frame
  • The algorithm that automatically chooses the migration target EC2 instance type isn't spot on. You can choose the correct target instance type when defining the Blueprint.
  • CloudEndure migrates licenses to the Bring-Your-Own-License model in AWS. E.g. if the source machine runs Windows Server with SQL Server, the migrated instance will be plain Windows Server, not a Windows Server with SQL Server instance.
  • We are sure that the CloudEndure console will be integrated into the AWS Console (look and feel, user logins, etc.).

Technical approach:

  • Agentful approach with block level copy of disk (Download and run agent on source server. No installation needed.)
  • Tool is very scalable
  • All source machines supported if you can install the agent

Unique features:

  • Free of charge (migration target to AWS)
  • Continuous data replication

KPIs:

  • Setup the tool (1 hour) – this was really straightforward task
  • Time to implement first migration (30 minutes)
  • Migration of the source machines (1 hour)

Google Cloud with Migrate for Compute Engine (formerly Velostrata) 

https://cloud.google.com/migrate/compute-engine/

Product description: 

Cloud migration creates a lot of questions. Migrate for Compute Engine (formerly Velostrata) by Google Cloud has the answers. Whether you’re looking to migrate one application from on-premises or one thousand applications across multiple data centers and clouds, Migrate for Compute Engine gives IT teams the power to migrate these systems to Google Cloud.

Experiences/Summary:

  • There are major issues in the documentation with Migrate for Compute Engine
  • Some challenging phases in the migration (limits, naming and other issues)
  • Integration with Google Cloud Platform has only been partly implemented
  • The learning curve is quite steep (major usability challenges with the UI); you just have to know how to use the product
  • The REST API seems to be the best way to use this, which is, of course, the right way in any large-scale operation.

Technical approach:

  • Migration sources supported: VMware, AWS, Azure
  • Needs a worker in the source environment (for example, an AWS AMI is available from the Marketplace)

Unique features:

  • Run in cloud mode for transparent/seamless migration 
  • Agentless (Block mode based migration approach)

KPIs:

  • Setup the tool (4 hours) (This time will reduce dramatically when we have more experience)
  • Time to implement first migration (1 hour)
  • Migration of the source machines (90 minutes)

Azure Migrate

https://azure.microsoft.com/en-us/services/azure-migrate/

Product description: 

Streamline your migration journey

Discover, assess, and migrate all your on-premises applications, infrastructure and data. Centrally plan and track the migration across multiple Microsoft and independent software vendor (ISV) tools.

Azure Database Migration Service (DMS) – Cloud and agent-based migration solution. Migrate data from other Azure PaaS SQL Databases or SQL Server on Virtual Machines to Azure database services.

Experiences/Summary:

  • Heavy capacity requirements (amount of memory, number of vCPUs)
  • Customizable assessment (infrastructure can be customized and not transferred as is)
  • Automation possibility with push installation, automated deployment
  • Limited OS support (AWS Linux kernel not supported but this is usually not a problem in VMware environments)
  • Relatively well documented but UI / documentation / terminology sometimes inconsistent
  • Community support lacking. Does anybody really use these tools?

Technical approach:

  • Discovery functionality limited
  • Agentful approach (the agent needs to be installed on all servers – AWS / physical servers)
  • Agentless features available for VMware vSphere
  • The Configuration / Process server pushes data to the Azure Migrate: Server Migration tool.
  • Doesn’t include features for assessment / discovery of cloud workloads

Unique features:

  • No need to allow inbound traffic to source environment from external sources

KPIs:

  • Setup the tool (3 hours) 
  • Time to implement first migration (1 hour)
  • Migration of the source machines (1 hour)

Result / Verdict

The migration tools market is booming at the moment. There are plenty of tools available to assist migration, but there are still major challenges in all the tested tools. Migration tools can assist you in your migration challenge, but there is definitely a need for skilled experts who can analyze your current environment and propose a suitable approach for migration. In most cases, the preferred approach is re-factor / re-platform (especially if you already have a CI/CD pipeline built), but the tooling is there to help you. Azure Database Migration Service (DMS) is a really powerful tool for migrating MS SQL servers to PaaS SQL.

Can we migrate your workloads and data also?

Solita offers comprehensive end-to-end migration services for all major public cloud platforms. We are able to assist you with all migration needs from refactoring to mass-migration.

Read more from our reference case:

https://www.solita.fi/en/customers/changing-the-business-through-cloud-migration/

Interested to learn more – please contact:

Petja Venäläinen
Head of Cloud Consulting
+358-40-5815666
petja.venalainen@solita.fi

#wemigratebigtime


No public cloud? Then kiss AI goodbye

What’s the crucial enabling factor that’s often missing from the debate about the myriad uses of AI? The fact that there is no AI without a proper backend for data (cloud data warehouses/data lakes) or without pre-built components. Examples of this are Cloud Machine Learning (ML) in Google Cloud Platform (GCP) and Sagemaker in Amazon Web Services (AWS). In this cloud blog I will explain why public cloud offers the optimum solution for machine learning (ML) and AI environments.

Why is public cloud essential to AI/ML projects?

  • AWS, Microsoft Azure and GCP offer plenty of pre-built machine learning components. This helps projects to build AI/ML solutions without requiring a deep understanding of ML theory, knowledge of AI or PhD level data scientists.
  • Public cloud is built for workloads which need peak CPU/IO performance. This lets you pay for a practically unlimited amount of computing power on a per-minute basis instead of investing millions into your own data centres.
  • Rapid innovation/prototyping is possible using public cloud – you can test and deploy early and scale up in the production if needed.

Public cloud: the superpower of AI

Across many types of projects, AI capabilities are being democratised. Public cloud vendors deliver products, like Sagemaker or CloudML, that allow you to build AI capabilities for your products without a deep theoretical understanding. This means that soon a shortage of AI/ML scientists won’t be your biggest challenge.  Projects can use existing AI tools to build world-class solutions such as customer support, fraud detection, and business intelligence.
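As a small example of such pre-built capabilities, the sketch below calls Amazon Comprehend for sentiment analysis with boto3; no model training or ML theory is needed. The sample text is, of course, made up:

```python
import boto3

comprehend = boto3.client("comprehend", region_name="eu-west-1")

# One API call gives you a trained sentiment model as a service.
response = comprehend.detect_sentiment(
    Text="The support team solved my issue in minutes, great service!",
    LanguageCode="en",
)
print(response["Sentiment"])       # e.g. POSITIVE
print(response["SentimentScore"])  # per-class confidence scores
```

Comparable managed services exist on GCP (Cloud Natural Language) and Azure (Cognitive Services).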

My recommendation is to head towards data enablement. First invest in data pipelines, data quality, integrations, and cloud-based data warehouses/data lakes. Rather than hiring over-skilled AI/ML scientists, build up the essential twin pillars: cloud ops and a skilled team of data engineers.

Enablement – not enforcement

In my experience, many organisations have been struggling to transition to public cloud due to data confidentiality and classification issues. Business units have been driving the adoption of modern AI-based technology. IT organisations have been pushing back due to security concerns.  After plenty of heated debate we have been able to find a way forward. The benefits of using public cloud components in advanced data processing have been so huge that IT has to find ways to enable the use of public cloud.

The solution to this challenge has proven to be proper data classification and the use of private on-premises facilities to support operations in the public cloud. The data location should be defined based on the data classification. Solita has been building secure but flexible automated cloud governance controls. These enable business requests but keep the control in your hands, while meeting the requirements usually defined by a company's chief information security officer (CISO). Modern cloud governance is built on automation and enablement rather than on enforcing policies.
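As a minimal sketch of how a classification-driven placement rule could look in code: the three-level scheme and the allowed locations below are illustrative assumptions, and real policies are defined together with the CISO and enforced through automated governance tooling:

```python
# Illustrative mapping from data classification to allowed runtime locations.
ALLOWED_LOCATIONS = {
    "public":       {"public-cloud", "private-cloud"},
    "internal":     {"public-cloud", "private-cloud"},
    "confidential": {"private-cloud"},   # stays on-premises in this example
}

def placement_allowed(classification: str, target: str) -> bool:
    """Return True if data with this classification may run in the target location."""
    return target in ALLOWED_LOCATIONS.get(classification, set())

assert placement_allowed("internal", "public-cloud")
assert not placement_allowed("confidential", "public-cloud")
```

In practice such rules are encoded into policy-as-code and tagging checks rather than application code, but the decision logic is the same.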

Conclusion

  • The pathway to effective AI adoption usually begins by kickstarting or boosting the public cloud journey and competence within the company.
  • Our recommendation – the public cloud journey should start with proper analyses and planning.
  • Solita is able to help with data confidentiality issues: classification, hybrid/private cloud usage and transformation.
  • Build cloud governance based on enablement and automation rather than enforcement.
