Is cloud always the answer?

Now and then it may seem convenient to move an application to the cloud quickly. For those situations this blog won’t offer much help. But when the decision has not yet been made and more analysis is needed to justify the transformation, this blog post proposes a tool. We believe it is often wise to think through the various aspects of cloud adoption before actually performing it.

At some point in every application’s lifecycle the question arises whether the application should be modernised or only updated slightly. The question is straightforward; the answer often is not, because both business and technological aspects have to be considered. Reaching a rational answer is not an easy task. A cloud transformation should always be driven by a business need and it should be technologically feasible. Because it is difficult to gather a holistic view of an application, there is often a temptation to make the decision hastily and simply move forward. But neglecting rational analysis just because it is difficult is rarely the right path. Success on the cloud journey requires guidance from business needs as well as technical knowledge.

To address this, companies can formalise a cloud strategy. Many find it an excellent way to move forward, as the strategy work builds a holistic understanding and identifies guidance for the next steps. A cloud strategy also states why the cloud transition supports value generation and how it connects to the organisation’s strategy. Sometimes, however, cloud strategy work is considered too large and premature an activity, particularly when the cloud journey has barely started, the knowledge gap feels too vast to overcome, and structured utilisation of the cloud is hard to even imagine. Organisations may struggle to manoeuvre through the mist and find the right path on their cloud journey. There are expectations and there are risks. There are low-hanging fruits, but there may also be something scary ahead that does not yet have a name.

Canvas to help the cloud journey

Benefits and risks should be considered analytically before transferring an application to the cloud. Inspired by the Business Model Canvas, we came up with a canvas to address the various aspects of the cloud transformation discussion. The Application Evaluation Canvas (AEC), presented in figure 1, guides the evaluation to take into account aspects ranging from the current situation to the expectations of the cloud.

 


Figure 1. Application Evaluation Canvas

The main expected benefit is the starting point for any further considerations. There should be a clear business need and a concrete target that justifies the cloud journey for that application. That target also enables defining the specific risks that might hinder reaching the benefits. Migration to the cloud and modernisation should always have a positive impact on the value proposition.

The left-hand side of the canvas

The left-hand side of the Application Evaluation Canvas addresses the current state of the application. The current state is evaluated through four perspectives: key partners, key resources, key activities and costs. The Key Partners section seeks answers to questions such as who currently works with the application, since the migration and modernisation activities will inevitably affect those stakeholders. In addition to the key partners, some resources might be crucial for the current application, for example in-house competences related to rare technical expertise; these crucial resources should be identified. Competences are not the only crucial element: plenty of activities are carried out every day to keep the application up and running, and understanding them makes the evaluation more meaningful and precise. Once key partners, resources and activities have been identified, a good understanding of the current state is established, but that is not enough. The cost structure must also be well known. Without knowledge of the costs related to the current state of the application, the whole evaluation lacks solid ground. Costs should be identified holistically, ideally covering not only direct costs but also indirect ones.

…and the right-hand side

On the right-hand side the focus is on the cloud and the expected outcome. The main questions to consider relate to the selection of the hyperscaler, the expected usage, awareness of how holistic a change the cloud transformation is, and naturally the exit plan.

The selection of the hyperscaler might be trivial when the organisation’s cloud governance steers the decision towards a pre-selected cloud provider. But a lack of central guidance, autonomous teams or application-specific requirements may bring the hyperscaler selection onto the table. In any case, a clear decision should be made when evaluating the paths towards the main benefit.

The cloud transformation will shift the cost structure from CAPEX to OPEX, which makes a realistic usage forecast highly important. Even though costs follow usage, the overall cost level will not necessarily drop dramatically right away, at least not at the beginning of the migration. There will be a period where the current cost structure and the cloud cost structure overlap, as CAPEX costs do not disappear immediately while OPEX-based costs start accruing. Furthermore, the elasticity of OPEX might not be as smooth as predicted due to contractual issues; for example, annual pricing plans for SaaS can be difficult to change during the contract period.

The cost structure is not the only thing that changes after cloud adoption. The expected benefit depends on several impact factors, such as success in organisational change management, finding the required new competences, or the application needing more than a lift-and-shift type of migration before the main expected benefit can be reached.

Don’t forget exit costs

The final section of the canvas addresses the exit costs. Before any migration the exit costs should be discussed to avoid surprises if the change has to be rolled back. Exit costs often relate to vendor lock-in. Vendor lock-in itself is a vague topic, but it is crucial to understand that there is always some lock-in. You cannot get rid of vendor lock-in with a multicloud approach; instead of a single-vendor lock-in you get a multicloud-vendor lock-in. Likewise, the orchestration of microservices is vendor specific even if a microservice itself is transferable, and using some kind of cloud-agnostic abstraction layer creates a lock-in to that abstraction layer’s provider. Cloud vendor lock-in is not the only kind of lock-in with a cost. Using a rare technology inevitably ties the solution to that third party, and changing the technology might be very expensive or even impossible. Lock-in can also have an in-house flavour, especially when a competence is mastered by only a couple of employees. So the goal is not to avoid all lock-ins, which is impossible, but to identify the lock-ins and decide which type of lock-in is acceptable.

Conclusion

As a whole, the Application Evaluation Canvas helps to gain a holistic understanding of the current state. Turning expectations into a more concrete form supports the decision-making process and shows how cloud adoption can be justified with business reasons.

Avoid the pitfalls: what to keep in mind for a smooth start with cloud services

Many companies are looking for ways to migrate their data centre to the cloud platform. How to avoid potential pitfalls in migrating data centres to the public cloud? How to plan your migration so that you are satisfied with the end result and achieve the set goals?

Why the public cloud?

The public cloud provides the ability to scale as needed, to use a variety of convenient SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) solutions, and to pay for services only as much as you use them.

The public cloud gives a company the opportunity for a great leap in development: the provider’s various services can be used during development, accelerating it and helping create new functionality.

All of this can be used conveniently without having to run your own data centre.

Goal setting

The first and most important step is to set a goal for the enterprise. The goal cannot be general; it must be specific and, if possible, measurable, so that it is possible to assess at the end of the migration whether the goal has been achieved.

Goal setting must take the form of internal collaboration between the business side and the technical side of the company. If even one party is excluded, it is very difficult to reach a satisfactory outcome.

The goals can be, for example, the following:

  • Cost savings. Do you find that running your own data centre is too expensive and operating costs are very high? Calculate how much it costs and how much resource the company spends on it, and set a goal for the percentage of savings you want to achieve. However, cost savings are not recommended as the main goal, since cloud providers also aim to make a profit. Rather, look for goals in the following areas that help you work more efficiently.
  • Agility, i.e. faster development of new functionalities and the opportunity to enter new markets.
  • Introduction of new technologies (machine learning (ML), the Internet of Things (IoT), artificial intelligence (AI)). The cloud offers a number of ready-made services that are very easy to integrate.
  • End of life for hardware or software. Many companies are considering migrating to the cloud at the moment when their hardware or software is about to reach its end of life.
  • Security. Data security is a very important issue and it is becoming increasingly important. Cloud providers invest heavily in security; it is a top priority for them because an insecure service would compromise customer data and make customers reluctant to buy the service.

The main reason for migration failure is the lack of a clear goal (the goal is not measurable or not completely thought through).

Mapping the architecture

The second step should be to map the services and application architecture in use. This mapping is essential to choose the right migration strategy.

In broad strokes, applications fall into two categories: applications that are easy to migrate and applications that require a more sophisticated solution. Let’s take, for example, a situation where a large monolithic application is used, the high availability of which is ensured by a Unix cluster. An application with this type of architecture is difficult to migrate to the cloud and it may not provide the desired solution.

The situation is similar with security. Although security is very important in general, it is especially important in situations where sensitive personal data of users, credit card data, etc. must be stored and processed. Cloud platforms offer great security solutions and tips on how to run your application securely in the cloud.

Security is critical to AWS, Azure and GCP, and they invest far more in it than individual customers ever could.

Secure data handling requires prior experience. Therefore, I recommend migrating applications with sensitive personal data at a later stage of the migration, once experience has been gained. It is also recommended to use the help of different partners. Solita has previous experience in managing sensitive data in the cloud and is able to ensure the future security of data as well. Partners are able to give advice and draw attention to small details that may not be evident due to lack of previous experience.

This is why it is necessary to map the architecture and to understand what types of applications are used in the company. An accurate understanding of the application architecture will help you choose the right migration method.

Migration strategies

‘Lift and Shift’ is the easiest way, transferring an application from one environment to another without major changes to code and architecture.

Advantages of the ‘Lift and Shift’ way:

  • In terms of labour, this type of migration is the cheapest and fastest.
  • It is possible to quickly release the resource used.
  • You can quickly fulfil your business goal – to migrate to the cloud.

 Disadvantages of the ‘Lift and Shift’ way:

  • There is no opportunity to use the capabilities of the cloud, such as scalability.
  • It is difficult to achieve financial gain on infrastructure.
  • Adding new functionalities is a bit tricky.
  • Almost 75% of ‘Lift and Shift’ migrations are redone within two years: either the workloads move back to the original data centre or another migration method is used. At first glance it seems like a simple and fast migration strategy, but in the long run it does not open up the cloud’s opportunities and no efficiency gains are achieved.

‘Re-Platform’ is a way to migrate where a number of changes are made to the application that enable the use of services provided by the cloud service provider, such as using the AWS Aurora database.

Benefits:

  • It is possible to achieve long-term financial gain.
  • It can be scaled as needed.
  • You can use a service, the reliability of which is the service provider’s responsibility.

 Possible shortcomings:

  • Migration takes longer than, for example, with the ‘Lift and Shift’ method.
  • The volume of migration can increase rapidly due to the relatively large changes made to the code.

‘Re-Architect’ is the most labour- and cost-intensive way to migrate, but the most cost-effective in the long run. During re-architecting, the application code is changed enough that it runs smoothly in the cloud. This means the application architecture takes advantage of the opportunities and benefits offered by the cloud.

Advantages:

  • Long-term cost savings.
  • It is possible to create a highly manageable and scalable application.
  • An application built on cloud and microservices architecture makes it easy to add new functionality and modify existing functionality.

Disadvantages:

  • It takes more time and therefore more money for the development and migration.

Start with the goal!

Successful migration starts with setting and defining a clear goal to be achieved. Once the goals have been defined and the architecture has been thoroughly mapped, it is easy to offer a suitable option from those listed above: either ‘Lift and Shift’, ‘Re-Platform’ or ‘Re-Architect’.

Each strategy has its advantages and disadvantages. To establish a clear and objective plan, it is recommended to use the help of a reliable partner with previous experience and knowledge of migrating applications to the cloud.

Using Azure policies to audit and automate RBAC role assignments

Usually RBAC role assignments in Azure are inherited from the subscription or management group level, but there may come a time when that is way too broad a scope for granting permissions to an AD user group.

While it’s tempting to assign permissions at a larger scope, sometimes you would rather grant an RBAC role with minimal permissions to only some of the subscription’s resource groups. In those scenarios you’ll usually end up with one of the following options to handle the role assignments:

  1. Include the role assignments in your ARM templates / Terraform codes / Bicep templates
  2. Manually add the role to proper resource groups

If neither of these appeals to you, there’s a third option: define an Azure policy that identifies the correct resource groups and then deploys the RBAC role assignments automatically if the conditions are met. This blog gives step-by-step instructions on how to:

  • Create a custom Azure policy definition for assigning Contributor RBAC role for an Azure AD group
  • Create a custom RBAC role for policy deployments and add it to your policy definition
  • Create an assignment for the custom policy

The example scenario is very specific and the policy definition is created to match this particular scenario. You can use the solution provided in this post as a basis to create something that fits exactly to your needs.

Azure policies in brief

Azure policies are a handy way to add automation and audit functionality to your cloud subscriptions. Policies can be applied to make sure resources are created following the company’s cloud governance guidelines, for example for resource tagging or for picking the right VM SKUs. Microsoft provides many different types of built-in policies that are pretty much ready for assignment. However, for specific needs you’ll usually end up creating a custom policy that fits them better.

Using Azure policies is divided into two main steps:

  1. You need to define a policy, which means creating a ruleset (policy rule) and an action (effect) to apply if a resource matches the defined rules.
  2. Then you must assign the policy to the desired scope (management group / subscription / resource group / resource level). The assignment scope defines the highest level at which resources are scanned against the policy criteria. Usually the preferable levels are management group and subscription.

Depending on how you prefer governing your environment, you can use individual policies or group multiple policies into initiatives. Initiatives simplify assignments by letting you work with groups instead of individual assignments. They also help with handling service principal permissions: if you create a separate policy for enforcing each of five different tags, you’ll end up with five service principals with the same permissions unless you use an initiative that groups the policies into one.
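
As an illustration, a custom initiative is simply a policy set definition that references existing policy definitions by their IDs. Below is a minimal sketch with placeholder names and definition IDs, assuming the referenced tag policies already exist in the subscription:

{
	"properties": {
		"displayName": "Enforce mandatory tags",
		"policyType": "Custom",
		"description": "Groups the individual tag enforcement policies into a single assignable initiative.",
		"metadata": {
			"category": "Tags"
		},
		"parameters": {},
		"policyDefinitions": [{
				"policyDefinitionId": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/policyDefinitions/enforce-tag-costcenter" // Placeholder ID of an existing tag policy
			}, {
				"policyDefinitionId": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/policyDefinitions/enforce-tag-owner" // Placeholder ID of an existing tag policy
			}
		]
	}
}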

Creating the policy definition for assignment of Contributor RBAC role

The RBAC role assignment can be done with a policy that targets the desired scope of resources through policy rules. So first we’ll start by defining some basic properties for our policy, which tell other users what this policy is meant for. A few notes:

  • Policy type = custom. Everything that’s not built-in is custom.
  • Mode = All, since we won’t be creating a policy that enforces tags or locations.
  • Category can be anything you like. We’ll use “Role assignment” as an example.
{
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyType": "Custom",
		"mode": "All",
		"description": "Assigns Contributor RBAC role for AD group resource groups with Tag 'RbacAssignment = true' and name prefix 'my-rg-prefix'. Existing resource groups can be remediated by triggering a remediation task.",
		"metadata": {
			"category": "Role assignment"
		},
		"parameters": {},
		"policyRule": {}
	}
}

Now we have our policy’s base information set. It’s time to form a policy rule. The policy rule consists of two blocks: if and then. The first one is the actual rule definition and the latter defines what should be done when the conditions are met. We want to target only a few specific resource groups, so the scope can be narrowed down with tag evaluations and resource group naming conventions. To do this, let’s slap an allOf operator (which works roughly like the logical ‘and’ operator) into the policy rule and set up the rules:

{
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyType": "Custom",
		"mode": "All",
		"description": "Assigns Contributor RBAC role for AD group resource groups with Tag 'RbacAssignment = true' and name prefix 'my-rg-prefix'. Existing resource groups can be remediated by triggering a remediation task.",
		"metadata": {
			"category": "Role assignment"
		},
		"parameters": {},
		"policyRule": {
			"if": {
				"allOf": [{
						"field": "type",
						"equals": "Microsoft.Resources/subscriptions/resourceGroups"
					}, 	{
						"field": "name",
						"like": "my-rg-prefix*"
					},	{
						"field": "tags['RbacAssignment']",
						"equals": "true"
					}
				]
			},
			"then": {}
		}
	}
}

As can be seen from the JSON, the policy is applied to a resource (or actually a resource group) if

  • Its type is Microsoft.Resources/subscriptions/resourceGroups, i.e. the target resource is a resource group
  • It has a tag named RbacAssignment set to true
  • The resource group name starts with my-rg-prefix

In order for the policy to actually do something, an effect must be defined. Because we want the role assignment to be automated, the deployIfNotExists effect is perfect. A few notes on how to set up the effect:

  • The most important stuff is in the details block
  • The type of the deployment and the scope of an existence check is Microsoft.Authorization/roleAssignments for RBAC role assignments
  • An existence condition is kind of another if block: the policy rule checks whether a resource matches the conditions that make it applicable for the policy, and the existence check then confirms whether the requirements in the details block are met. If not, an ARM template is deployed to the scoped resource

The existence condition of the then block in the code example below checks the role assignment for a principal ID through the combination of Microsoft.Authorization/roleAssignments/roleDefinitionId and Microsoft.Authorization/roleAssignments/principalId. Since we want to assign the policy to a subscription, the roleDefinitionId path must include /subscriptions/<your_subscription_id>/.. for the policy to work properly.

{
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyType": "Custom",
		"mode": "All",
		"description": "Assigns Contributor RBAC role for AD group resource groups with Tag 'RbacAssignment = true' and name prefix 'my-rg-prefix'. Existing resource groups can be remediated by triggering a remediation task.",
		"metadata": {
			"category": "Role assignment"
		},
		"parameters": {},
		"policyRule": {
			"if": {
				"allOf": [{
						"field": "type",
						"equals": "Microsoft.Resources/subscriptions/resourceGroups"
					}, 	{
						"field": "name",
						"like": "my-rg-prefix*"
					}, {
						"field": "tags['RbacAssignment']",
						"equals": "true"
					}
				]
			},
			"then": {
				"effect": "deployIfNotExists",
				"details": {
					"type": "Microsoft.Authorization/roleAssignments",
					"roleDefinitionIds": [
						"/providers/microsoft.authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9" // Use user access administrator role update RBAC role assignments
					],
					"existenceCondition": {
						"allOf": [{
								"field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
								"equals": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" // RBAC role definition ID for Contributor role
							}, {
								"field": "Microsoft.Authorization/roleAssignments/principalId",
								"equals": "OBJECT_ID_OF_YOUR_AD_GROUP" // Object ID of desired AD group
							}
						]
					}
				}
			}
		}
	}
}

The last thing to add is the actual ARM template that will be deployed if the existence conditions are not met. The template itself is fairly simple since it only contains the definition of an RBAC role assignment.

{
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyType": "Custom",
		"mode": "All",
		"description": "Assigns Contributor RBAC role for AD group resource groups with Tag 'RbacAssignment = true' and name prefix 'my-rg-prefix'. Existing resource groups can be remediated by triggering a remediation task.",
		"metadata": {
			"category": "Tags",
		},
		"parameters": {},
		"policyRule": {
			"if": {
				"allOf": [{
						"field": "type",
						"equals": "Microsoft.Resources/subscriptions/resourceGroups"
					}, 	{
						"field": "name",
						"like": "my-rg-prefix*"
					}, {
						"field": "tags['RbacAssignment']",
						"equals": "true"
					}
				]
			},
			"then": {
				"effect": "deployIfNotExists",
				"details": {
					"type": "Microsoft.Authorization/roleAssignments",
					"roleDefinitionIds": [
						"/providers/microsoft.authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9" // Use user access administrator role update RBAC role assignments
					],
					"existenceCondition": {
						"allOf": [{
								"field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
								"equals": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" // RBAC role definition ID for Contributor role
							}, {
								"field": "Microsoft.Authorization/roleAssignments/principalId",
								"equals": "OBJECT_ID_OF_YOUR_AD_GROUP" // Object ID of desired AD group
							}
						]
					},
					"deployment": {
						"properties": {
							"mode": "incremental",
							"template": {
								"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
								"contentVersion": "1.0.0.0",
								"parameters": {
									"adGroupId": {
										"type": "string",
										"defaultValue": "OBJECT_ID_OF_YOUR_AD_GROUP",
										"metadata": {
											"description": "ObjectId of an AD group"
										}
									},
									"contributorRbacRole": {
										"type": "string",
										"defaultValue": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c')]",
										"metadata": {
											"description": "Contributor RBAC role definition ID"
										}
									}
								},
								"resources": [{
										"type": "Microsoft.Authorization/roleAssignments",
										"apiVersion": "2018-09-01-preview",
										"name": "[guid(resourceGroup().id, deployment().name)]",
										"properties": {
											"roleDefinitionId": "[parameters('contributorRbacRole')]",
											"principalId": "[parameters('adGroupId')]"
										}
									}
								]
							}
						}
					}
				}
			}
		}
	}
}

And that’s it! Now we have the policy definition set up for checking and remediating the default RBAC role assignment for our subscription. If the automated deployment feels too daunting, the effect can be swapped to the auditIfNotExists version. That way you won’t be deploying anything automatically, but you can simply audit all the resource groups in the scope for the default RBAC role assignment.

{
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyType": "Custom",
		"mode": "All",
		"description": "Assigns Contributor RBAC role for AD group resource groups with Tag 'RbacAssignment = true' and name prefix 'my-rg-prefix'. Existing resource groups can be remediated by triggering a remediation task.",
		"metadata": {
			"category": "Tags",
		},
		"parameters": {},
		"policyRule": {
			"if": {
				"allOf": [{
						"field": "type",
						"equals": "Microsoft.Resources/subscriptions/resourceGroups"
					}, 	{
						"field": "name",
						"like": "my-rg-prefix*"
					}, {
						"field": "tags['RbacAssignment']",
						"equals": "true"
					}
				]
			},
			"then": {
				"effect": "auditIfNotExist",
				"details": {
					"type": "Microsoft.Authorization/roleAssignments",
					"existenceCondition": {
						"allOf": [{
								"field": "Microsoft.Authorization/roleAssignments/roleDefinitionId",
								"equals": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c" // RBAC role definition ID for Contributor role
							}, {
								"field": "Microsoft.Authorization/roleAssignments/principalId",
								"equals": "OBJECT_ID_OF_YOUR_AD_GROUP" // Object ID of desired AD group
							}
						]
					}
				}
			}
		}
	}
}

That should be enough, right? Well, it isn’t. Since our policy uses ARM template deployment, we must add a role with privileges to create remediation tasks, which essentially means a role that can create and validate resource deployments. Azure doesn’t provide such a role with minimal privileges out of the box; the built-in role that has all the permissions we need is Owner. We naturally don’t want to hand out Owner permissions if we really don’t have to. The solution: create a custom RBAC role for Azure Policy remediation tasks.

Create custom RBAC role for policy remediation

Luckily, creating a new RBAC role for our needs is a fairly straightforward task. You can create new roles in the Azure portal or with PowerShell or the Azure CLI. Depending on your preferences and your permissions in Azure, you’ll want to create the new role at a management group or subscription scope to contain it to the level where it is needed. There’s no harm in spreading the role to a wider area of your Azure environment, but for the sake of keeping everything tidy, we’ll create the new role in one subscription since it’s not needed elsewhere for the moment.

Note that the custom role only allows validating and creating deployments. That’s not enough to actually do anything on its own: you’ll need to combine the deployment role with a role that has permissions to do what the deployment defines. For RBAC role assignments you’d need to add the “User Access Administrator” role to the deployer as well.

Here’s how to do it in the Azure portal (a JSON sketch for scripted creation follows after the steps):

  1. Go to your subscription listing in Azure, pick the subscription you want to add the role to and head on to Access control (IAM) tab.
  2. From the top toolbar, click on the “Add” menu and select “Add custom role”.
  3. Give your role a clear, descriptive name, such as Least privilege deployer.
  4. Add a description.
  5. Add permissions Microsoft.Resources/deployments/validate/action and Microsoft.Resources/deployments/write to the role.
  6. Set the assignable scope to your subscription.
  7. Review everything and save.
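
If you’d rather script the role creation than click through the portal, the same role can be expressed as a JSON role definition. The following is a minimal sketch using a placeholder subscription ID and the example name from step 3; a file like this can be passed to az role definition create --role-definition (or to New-AzRoleDefinition with PowerShell):

{
	"Name": "Least privilege deployer",
	"IsCustom": true,
	"Description": "Can validate and create ARM deployments for Azure Policy remediation tasks.",
	"Actions": [
		"Microsoft.Resources/deployments/validate/action",
		"Microsoft.Resources/deployments/write"
	],
	"NotActions": [],
	"AssignableScopes": [
		"/subscriptions/your_subscription_id" // Placeholder subscription ID
	]
}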

After the role is created, check its properties and take note of the role ID. Next we’ll update the policy definition made earlier so that the new RBAC role gets assigned to the service principal during the policy initiative assignment.

So, in the template’s effect block, change this:

"roleDefinitionIds": [
	"/providers/microsoft.authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9" // Use user access administrator role update RBAC role assignments
]

to this:

"roleDefinitionIds": [
	"/providers/microsoft.authorization/roleDefinitions/18d7d88d-d35e-4fb5-a5c3-7773c20a72d9", // Use user access administrator role update RBAC role assignments
	"/subscriptions/your_subscription_id/providers/Microsoft.Authorization/roleDefinitions/THE_NEW_ROLE_ID" // The newly created role with permissions to create and validate deployments
]

Assigning the created policy

Creating the policy definition is not enough for the policy to take effect. As mentioned before, the definition is merely a ruleset created for assigning the policy and does nothing without a policy assignment. Like definitions, assignments are set at a desired scope. Depending on your policy, you can assign it at the management group level or create individual assignments at the subscription level with property values that fit each subscription as needed.

Open Azure Policy and select “Assignments” from the left-side menu. You’ll find “Assign policy” in the top toolbar. There are a few considerations you should go over when assigning a policy:

Basics

  • The scope: always think about your assignment scope before blindly assigning policies that modify your environment.
  • Exclusions are a possibility, not a necessity. If you find yourself adding a lot of exclusions, consider re-evaluating the policy definition.
  • Policy enforcement: if you have ANY doubts about the policy you have created, don’t enforce it. That way you won’t accidentally overwrite anything. It may be a good idea to assign the policy without enforcement the first time, review the compliance results and, if you’re happy with them, then enforce the policy.
    • You can fix all the non-compliant resources with a remediation task after the initial compliance scan.

Remediation

  • If you have a policy that changes something with either the modify or deployIfNotExists effect, a service principal will be created to implement the changes when you assign the policy. Be sure to check that the location (region) of the service principal matches your desired location.
  • If you choose to create a remediation task upon assignment, it will apply the policy’s changes to existing resources. So if you have doubts about whether the policy works as you intend, do not create a remediation task during assignment; review the compliance results first, then create the remediation task once everything is ok.

Non-compliance message

  • It’s usually a good idea to create a custom non-compliance message for your own custom definitions.

After you’ve set up everything relevant for the assignment and created it, it’s time to wait for the compliance checks to run. The first compliance check cycle is usually done within 30 minutes of creating the assignment. After the first cycle, compliance is evaluated once every 24 hours or whenever the assigned policy definitions are changed. If that’s not fast enough for you, you can always trigger an on-demand evaluation scan.
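
To recap how the assignment settings discussed above fit together, here is a minimal sketch of a policy assignment expressed as a resource definition. The name, location, definition ID and message are placeholders; the enforcement mode, managed identity and non-compliance message correspond to the options you set in the portal:

{
	"name": "assign-contributor-rbac",
	"type": "Microsoft.Authorization/policyAssignments",
	"apiVersion": "2022-06-01",
	"location": "westeurope", // Location of the managed identity used for remediation
	"identity": {
		"type": "SystemAssigned"
	},
	"properties": {
		"displayName": "Assign Contributor RBAC role for an AD group",
		"policyDefinitionId": "/subscriptions/your_subscription_id/providers/Microsoft.Authorization/policyDefinitions/assign-contributor-rbac-for-ad-group", // Placeholder ID of the definition created earlier
		"enforcementMode": "DoNotEnforce", // Evaluate and report only; switch to "Default" to enforce
		"nonComplianceMessages": [{
				"message": "Resource group is missing the Contributor role assignment for the AD group."
			}
		]
	}
}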

CxO, you need to understand this about the cloud

In order to embrace the full potential of modern techniques like AI/ML, IoT, Agile and DevOps, you need CLOUD to enable your IT to support the business at full scale.

It’s probably useless to make yet another list of the benefits of the cloud; there have been enough of those during the past five years. At the same time, digital transformation has been on the table of every CEO, CIO, CDO, CMO and so on. We can all agree that we need modern data warehouses, mobile solutions, agility, new digital services and artificial intelligence. In the cloud, these capabilities can be taken for granted. However, I have noticed that companies and organisations don’t necessarily understand what utilising the cloud really means and requires from them.


Why does the platform matter?

Usually technology is technology and should not be mixed into strategy discussions, but I think the cloud is an exception, at least for the next x years (I’m not going to predict how many). When advanced digital transformation companies tell the story of data being the new oil and of all the new business opportunities, they seem to forget to mention the platform that makes it all possible. Without the cloud, flexible and scalable modern data platforms with AI and other innovative solutions fall short. You just cannot get that speed, ease of management and variety of services from on-prem or “private cloud” solutions.

The problem is that senior management may lack an understanding of how big a change the cloud is for their organisation, especially for IT, how tightly it links to digital transformation, and what kind of resistance it may create.


Change resistance and skills gap leads to shadow IT

Still today, there are a number of large companies whose own IT department does not know how to properly utilise the cloud and, at worst, deliberately slows down cloud-related projects. This is natural: you tend to resist things that are unknown and that push you out of your comfort zone. For these reasons, things aren’t moving at the speed the business wants.

This can create shadow IT inside the company, where business units start to acquire cloud solutions directly so that they do not have to struggle with the mismatched solution offering and lacking capabilities of their own IT.

The key challenge here is that when you start to develop services in an environment where the foundation has not been built properly, the continuity and manageability of the services you build are at risk. The lack of centralised management entails unnecessary risks, e.g. unmanaged users become a huge risk.

In this way, cloud environments are created by different business units and different partners with no one thinking about the whole. IT has traditionally thought about these things, but now it is being bypassed. IT’s concerns and resistance towards these types of projects are genuine and valid: if IT doesn’t have the skills to build cloud services, how would business units? They may have the courage and enthusiasm to go forward, but they are not used to handling subjects related to continuity and manageability.

As I have written previously, this leads to a cloud service mess and an uncontrolled cloud.

Going to the cloud is a big change that needs to be properly addressed and led.

How could this be taken into account and handled better?

Communication

Effective internal communication is needed. When the strategy outlines that we are developing new digital services and ecosystems, it must be made clear that this means deploying new cloud services. This must be repeated! A mention in the management’s monthly letter is just not enough. Many times I’ve been in situations where the customer’s personnel have been told that the environment is going to the cloud, but no one has really understood what that means.

Leading the change

When communication is in order, you need a leader with the power to act and an understanding of the cloud. If going to the cloud is not managed properly and with a sufficient mandate, it is difficult to move things forward. Care must be taken to ensure that data centre processes and architectures are not carried over into the cloud. In the cloud, new ways of working are needed, and addressing this may sometimes require using that mandate.

Engagement and competence development

A good leader understands that know-how does not appear from nowhere and that everything cannot be outsourced. So while it’s important to bring knowledgeable partners into your projects, it is equally important to involve your own IT from the beginning. Existing competences should be mapped and, based on that, learning paths formed towards the new cloud roles. A properly motivated old data center fox can be very eager to update their skills. Old knowledge and skills do not become obsolete overnight; they are just used differently.

Motivation

Going to the cloud is a big change for your staff, not just IT. Procurement and budgeting also change, contracts are different, etc. This can give rise to fears that one’s job is in danger and one’s skills are not enough. By encouraging competence development and rewarding achievements, a positive atmosphere is created where everyone has the same goal. This is not rocket science.

With these things in mind, we can start building a modern digital services platform that supports business and agile development.

Our adaptive cloud transformation framework helps you to do this without the need for a massive transformation project; it just ensures that all the things are taken into account.

The time span, needed resources, etc. are adapted to your needs and your situation.

Contact me if you are interested in learning more.

Anton Floor
Cloud Advisor
anton.floor@solita.fi

No public cloud? Then kiss AI goodbye

What’s the crucial enabling factor that’s often missing from the debate about the myriad uses of AI? The fact that there is no AI without a proper backend for data (cloud data warehouses/data lakes) or without pre-built components. Examples of these are Cloud Machine Learning (ML) in Google Cloud Platform (GCP) and SageMaker in Amazon Web Services (AWS). In this cloud blog I will explain why public cloud offers the optimal solution for machine learning (ML) and AI environments.

Why is public cloud essential to AI/ML projects?

  • AWS, Microsoft Azure and GCP offer plenty of pre-built machine learning components. This helps projects to build AI/ML solutions without requiring a deep understanding of ML theory, knowledge of AI or PhD level data scientists.
  • Public cloud is built for workloads which need peaking CPU/IO performance. This lets you pay for an unlimited amount of computing power on a per-minute basis instead of investing millions into your own data centres.
  • Rapid innovation/prototyping is possible using public cloud – you can test and deploy early and scale up in production if needed.

Public cloud: the superpower of AI

Across many types of projects, AI capabilities are being democratised. Public cloud vendors deliver products, like SageMaker or Cloud ML, that allow you to build AI capabilities for your products without a deep theoretical understanding. This means that soon a shortage of AI/ML scientists won’t be your biggest challenge. Projects can use existing AI tools to build world-class solutions such as customer support, fraud detection, and business intelligence.

My recommendation is that you should head towards data enablement. First invest in data pipelines, data quality, integrations, and cloud-based data warehouses/data lakes. Rather than hunting for over-skilled AI/ML scientists, build up the essential twin pillars: cloud ops and a skilled team of data engineers.

Enablement – not enforcement

In my experience, many organisations have been struggling to transition to public cloud due to data confidentiality and classification issues. Business units have been driving the adoption of modern AI-based technology. IT organisations have been pushing back due to security concerns.  After plenty of heated debate we have been able to find a way forward. The benefits of using public cloud components in advanced data processing have been so huge that IT has to find ways to enable the use of public cloud.

The solution for this challenge has proven to be proper data classification and the use of private on-premises facilities to support operations in public cloud. Data location should be defined based on the data classification. Solita has been building secure but flexible automated cloud governance controls. These enable business requests but keep the control in your hands, as well as meeting the requirements usually defined by a company’s chief information security officer (CISO). Modern cloud governance is built on automation and enablement – rather than enforcing policies.

Conclusion

  • The pathway to effective AI adoption usually begins by kickstarting or boosting the public cloud journey and competence within the company.
  • Our recommendation – the public cloud journey should start with proper analyses and planning.
  • Solita is able to help with data confidentiality issues: classification, hybrid/private cloud usage and transformation.
  • Build cloud governance based on enablement and automation rather than enforcement.


Modern cloud operation: successful cloud transformation, part 2

How to ensure a successful cloud transformation? In the first part of this two-part blog series, I explained why and how cloud transformation often fails despite high expectations. In this second part, I will explain how to succeed in cloud transformation, i.e. how to move services to the cloud in the right way.

Below, there are three important tips that will help you reach a good outcome.

1. Start by defining a cloud strategy and a cloud governance model

We often discuss with our customers how to manage, monitor and operate the cloud and what should be considered when working with third-party developers. Many customers are also interested in knowing what kinds of guidelines and operating models should be defined in order to keep everything under control.

You don’t need a big team to brainstorm and create loads of new processes to define a cloud strategy and update governance models.

To succeed in updating your cloud strategy and governance model, you have to take a very close look at things and realise that you are moving things to a new environment that functions differently from traditional data centers.

It’s important to understand that, for example, software projects can be developed in a completely new way in the cloud with multiple suppliers. However, keep in mind that this way of operating requires a governance model and instructions covering the minimum requirements for new services that are linked to the company’s systems, and how their maintenance and continuity are taken care of. For instance, you have to decide how to ensure that cloud accounts, data security and access management are handled.

2. Insist on having modern cloud operation – choose a suitable partner or get the needed knowhow yourself

Successful cloud transformation requires the right kind of expertise. However, traditional service providers rarely have the required skills. New kinds of cloud operators have emerged to solve this issue, and their mission is to help customers manage the cloud transformation. How can you identify such operators and what should you demand from them?

The following list is formed on the basis of views presented by Gartner, Forrester and AWS on modern operators. When you are looking for a partner…

  • demand a strong DevOps culture. It forms a good foundation for automation and development of services.
  • ensure cloud-native expertise on platforms and applications. It creates certainty that an expert who knows the whole package and understands how applications and platforms work together is in charge of the project.
  • check that your partner has skills in multiple platforms. AWS, Azure and Google are all good alternatives.
  • ask if your partner masters automatic operation and predictive analytics. These skills reduce variable costs and contribute to quick recovery from incidents.
  • demand agile operating methods, as well as transparency and continuous development of services. With clear and efficient service processes, cost management and reporting are easier and the customer understands the benefits of development.

Solita’s answer to this is a modern cloud operation partnership. In other words, we help our customers create operating models and cloud strategies. A modern cloud operator has an understanding of the whole package that has to be managed and helps to formulate proper operating models and guidelines for cloud development. It’s not our purpose to limit development speed or opportunities, but we want to pay attention to things that ensure continuity and easy maintenance. After all, the development phase is only a fraction of the whole application life cycle.

The developer’s needs are taken into account, and at the same time, for instance the following operating models are determined: How are cloud accounts created and who creates them? How are costs monitored? What kind of user rights are given and to whom? What sort of development tools are used or what targets should be achieved with them? We are responsible for deciding what things are monitored and how.

In addition, the right kind of partner knows what things should be moved to the cloud in the first place.

When moving to the cloud, the word ‘move’ doesn’t actually fit very well, because it is rarely advisable just to move workloads as they are. That is why it’s better to talk about transformation, which means transforming an existing workload, at least with some modifications, towards cloud native.

In my opinion, application development is one important skill a modern cloud operator should master. Today, the cloud can be seen as a platform where different kinds of systems and applications are coded. It takes more than just the ability to manage servers to succeed in this game. Therefore, DevOps culture determines how application development and operation work together. You have to understand how environments are automated and monitored.

In addition to monitoring whether applications are running, experts are able to control other things too. They can analyse how an application is working and whether it is performing effectively. A strong symbiosis between developers and operators helps to continuously develop and improve the skills needed to improve service quality. At best, this kind of operator can promise their customers that services are available and running all the time, and if they are not, they will be fixed at a fixed monthly charge. The model aims to minimise manual operation and work that is separately invoiced per hour. For instance, the model has allowed us to reduce our customers’ billable hours by up to 75%.

With the addition of knowledge on the benefits and best features of different cloud services, as well as capacity use and invoicing, you get a package that serves customers’ needs optimally.

3. Don’t try to save money on the migration itself! Make the implementation project gradual

 

Lift & shift type transfers, i.e. moving old environments as they are, don’t generate savings very often. I’m not saying that it couldn’t happen, but the best benefits are achieved by looking at operating models and the environment as a whole. This requires a comprehensive study of the things that should work in the cloud and how the application is integrated in other systems.

The whole environment and its dependencies should be analysed, and all services should be checked one by one. After that you plan the migration and think about what can be automated. This requires time and money.

A migration that leads to an environment automated as far as possible is a good target. It should also lower recurring operational costs and improve the quality of the service.

Solita offers all services that are needed in cloud transformation. If you are interested in the subject, read more about our services on our website. If you have any questions, please feel free to contact us!


Modern cloud operation: successful cloud transformation, part 1

Today, many people are wondering how they could implement cloud transformation successfully. In the first part of this two-part blog series, I explain why and how cloud transformation often fails despite high expectations. In the second part, I will describe how cloud transformation is done and what the right way of migrating services to the cloud is.

Some time ago at a Solita HUB event, I talked about modern cloud operation and successful cloud transformation. Experiences our customers had shared with us served as the starting point for my presentation, and I want to share some of them with you as well.

People have often started to use the cloud with high expectations, but those expectations have not really been met. Or they have ended up in a situation where nobody has a good picture of what has been moved to the cloud or what has been built there. So they’ve ended up in a cloud service mess.


In recent years, people have talked a lot about the cloud and how to start using it. Should they move their systems there by lifting and shifting existing resources as they are, or should they build new cloud-native applications and systems? Or should they do both?

They might have decided to make the cloud transformation with the help of their own IT department, using an existing service provider or – a bit secretly – with a software development partner. Whatever the choice, it feels like people are out to make quick profits without stopping to think about the big picture and how to govern all of this.

The cloud is not a data centre

Quite often I hear people say that “the cloud is only somebody else’s data center”. That is exactly what it is if you don’t know how to use it properly. When you consider how the systems of a traditional service provider or an in-house IT department have been built, it’s no wonder that you hear statements like this.


Previously, the aim was to offer servers from a data center with maintenance and monitoring for the operating systems. The idea was that you first specified what capacity you wanted and how the environments should be monitored, and then agreed on how to react to possible alerts.

Architectures were designed to be as cost-efficient as possible. In this model, efficiency relied on virtualisation and, for instance, on the decision of whether to build HA systems at all. Solutions spanning two data centers in particular have traditionally been expensive.

When people have started to move this old operating model to the cloud, it hasn’t functioned as they had planned and hoped for. Therefore, it can be said that the true benefits of the cloud will not be gained in the traditional way.

Cloud transformation is not only about moving away from own or co-location data centers. It’s about a comprehensive change towards new operating methods.

It is very wise to build the above-mentioned HA systems in the cloud, because they won’t necessarily cost much and may even be built-in features. The cloud is not a data centre, and it shouldn’t be treated as one.

Of course, it’s possible to achieve savings with traditional workloads, but still, it is more important to understand that operating methods have to change. Old methods are not enough, and traditional service partners don’t often have adequate skills to develop environments using modern means.

Lack of management causes trouble in cloud services

In some cases, services are built in the cloud together with a software development partner who has promised to create a well-functioning system quickly. At its best, the cloud makes this possible. But without management or a proper governance model, problems often occur: the number of different cloud service accounts grows, and nobody in the organisation seems to know how to manage them or where the costs come from.

In addition, surprisingly often people believe that cloud services do not require maintenance and that any developer is able to build a sustainable, secure and cost-effective environment. They are surprised to notice that it’s not that simple.

‘No-Ops’, and perhaps the word ‘serverless’ belongs in the same category, are terms that have unfortunately been somewhat misunderstood. Only a few software development partners have corrected this misunderstanding; others haven’t realised themselves that cloud services do require maintenance in reality.

It’s true that services that function relatively well without special maintenance can be built in the cloud, but in reality, No-Ops doesn’t exist without seamless cooperation between developers and operations experts, in other words a DevOps culture. No-Ops means extreme automation, which doesn’t happen on its own; it isn’t possible every time, and it is not always worth pursuing.

At Solita, operation has been taken to an entirely new level. Our objective is to make ourselves “useless” as far as daily routines are concerned. We call this modern cloud operation. With this approach we have, for instance, managed to reduce our customers’ hourly billing considerably. We have also managed to extend our operating methods from customers’ data centers all the way to the cloud.

In my next blog, I will focus on things that should be considered in cloud transformation and explain what modern cloud operation means in practice.

Anton works as a cloud business manager at Solita. Producing IT cost-efficiently from desktops to data centers is close to his heart. When he is not working on clouds, he enjoys skiing, running, cycling and playing football. He is excited about all kinds of sports gadgets and likes to measure and track everything.
