Azure Boards – does it make sprint planning enjoyable?

There are many different kinds of services that aim to manage a development team's activities. Azure Boards is one of them, but it takes integration one step further (into Azure, of course – why would anyone want to use anything else?).

Azure Boards aims to make sprint planning easier and faster than ever before. Combining ease of use with plenty of customization options, it sounds pretty good – but is it worth a try?

I would say that if you are already using Azure heavily, this is something you cannot miss. Boards takes everything one step further than its competitors. Sure, you can get the same features in Jira too, but it takes a lot more work to set up.

Boards uses a Kanban-style board to manage tasks and combines that with deep integration into source control and Azure DevOps.

I am not going to go into every detail here; instead, I will give an overview of how we have used the board to manage sprint activities,
and also how to get relevant information to different stakeholders.

Let's Scrum

With the setup we are building, developers get one view, and people who want oversight of how things are progressing get another.

We use the Scrum process as a starting point, where views are divided into backlog items and features.




If you are starting a new board, choose “Scrum” as the process under the advanced options when creating the board.




If you already have a board in use and you are not using “Scrum”, you need to change the current process to Scrum. To change it, go to organization settings and choose Process under Boards.

There we see all the available processes.
Choose “Basic” by clicking it – this is the default process that Boards assigns to a board when it is created.

Then select Projects; under it you will see your board, in this case “Test board”. Click the three dots and select “Change process”.

Select “Scrum” from the dropdown menu and off you go.


Depending on how many items you already have on your board, you might need to resolve some conflicts, such as items of the wrong type.

Now go to your board and notice that there are separate views for backlog items and features. Structurally, backlog items are child items of features: a feature defines what is being implemented, and its backlog items are the steps to get it done.
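As a rough sketch (plain Python, no Azure APIs involved – the class and field names are illustrative), the feature/backlog-item hierarchy behaves like this:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    state: str = "New"  # typical Scrum states: New, Approved, Committed, Done

@dataclass
class Feature:
    title: str
    children: list = field(default_factory=list)  # child backlog items

    def progress(self) -> str:
        """Summarize how many child items are done, as the feature view does."""
        done = sum(1 for item in self.children if item.state == "Done")
        return f"{done}/{len(self.children)} backlog items done"

login = Feature("User login")
login.children += [BacklogItem("Login form", "Done"),
                   BacklogItem("Password reset", "Committed")]
print(login.progress())  # 1/2 backlog items done
```

This is exactly the summary the feature view surfaces for stakeholders: per-feature progress rolled up from the child items.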

Let’s start by creating a feature.
Make sure you are in the Features view on the board. You can change this from the dropdown menu in the top right corner (choose “Features”).


Click “New item” and name the feature; in this case I named it “Testi feature nro 3”, as I had already created a couple of items. Then we proceed to create a backlog item belonging to this feature.


Click the three dots in the right-hand corner of the item and choose “Add product backlog item” to add a child backlog item. Name the item as you like.


Once you have made the first backlog item under a feature, you no longer need to go through the three-dots menu to create more: an “Add product backlog item” button now appears directly on the card.

You can add more items just by clicking it.



Now we have some content on the board and can see how it is structured. We have two views: a backlog item view and a feature view.

The backlog item view is used mainly by developers, who see the individual tasks and their statuses there.
Choose this board by clicking the dropdown menu in the upper right-hand corner and selecting “Backlog items”.

This presents a view of backlog items and their statuses. It is a pretty standard view: there are items, and they are assigned to developers.

By clicking an item open you can see more information, such as which feature is its parent.





The other view we use is the feature view. It is meant for people supervising the developers, or for a client who wants to see how things are progressing.

Switch to it from the dropdown menu in the top right corner by choosing “Features”.

The point here is that this view gives an immediate visualization of each feature's status and how many of its backlog items are done. We don't want to use the same view for everyone, as that would be a compromise.


Now let's look a bit at customizing the Scrum process, as it is often the case that we need something that is not part of the standard process.


Click the Azure DevOps logo in the top left corner and click “Organization settings” at the bottom.




Select “Process” under Boards


To customize a process, we need to make a copy of it: the built-in processes cannot be customized directly.

Click the three dots next to “Scrum” and click “Create inherited process”

Name the process “Scrum customize”.
Once done, select it and choose “Set as default process” to take it into use.

Click “Scrum customize” process to open the following screen.

Click “Product Backlog Item” to customize that item type and how the board handles it.

Choose “Rules” and then “New rule”.
The customization we are making here is that we want a product backlog item to be assigned automatically to a person once the item is moved to “Committed”.

See the following screenshot on how this is done.

First we make sure the rule doesn't overwrite an existing assignee by checking that the “Assigned To” field is empty. Then we check that the state is changed from “Approved” to “Committed”. In Scrum this is the normal flow: once an item is approved, a developer can start working on it, and its state becomes “Committed”.

Once all the previous conditions are met, we set “Assigned To” to the person who moved the item into the “Committed” state.
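The rule's logic can be sketched in plain Python (the field names here are illustrative, not the actual Azure Boards rule syntax):

```python
def apply_committed_rule(item: dict, moved_by: str) -> dict:
    """Sketch of the auto-assign rule: when an unassigned item moves from
    "Approved" to "Committed", assign it to the person who moved it."""
    if (item.get("assigned_to") is None          # don't overwrite an assignee
            and item.get("previous_state") == "Approved"
            and item.get("state") == "Committed"):
        item["assigned_to"] = moved_by
    return item

item = {"assigned_to": None, "previous_state": "Approved", "state": "Committed"}
apply_committed_rule(item, "maija")  # item["assigned_to"] is now "maija"
```

Note how the empty-assignee check comes first: an item that was deliberately assigned to someone else stays untouched even when its state changes.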

Simple changes like this make it faster to handle items on the board and remove the annoying steps you would otherwise have to do manually every time.

There is a lot more you can do with Boards, but this was just a small look at what is possible and how we have found it best used.

If you have any questions about Azure Boards or any cloud-related matters, please do not hesitate to contact me:
+358 40 1897586


No public cloud? Then kiss AI goodbye

What’s the crucial enabling factor that’s often missing from the debate about the myriad uses of AI? The fact that there is no AI without a proper backend for data (cloud data warehouses/data lakes) or without pre-built components. Examples of this are Cloud Machine Learning (ML) in Google Cloud Platform (GCP) and Sagemaker in Amazon Web Services (AWS). In this cloud blog I will explain why public cloud offers the optimum solution for machine learning (ML) and AI environments.

Why is public cloud essential to AI/ML projects?

  • AWS, Microsoft Azure and GCP offer plenty of pre-built machine learning components. This helps projects to build AI/ML solutions without requiring a deep understanding of ML theory, knowledge of AI or PhD level data scientists.
  • Public cloud is built for workloads which need peak CPU/IO performance. This lets you pay for an unlimited amount of computing power on a per-minute basis instead of investing millions into your own data centres.
  • Rapid innovation/prototyping is possible using public cloud – you can test and deploy early and scale up in the production if needed.

Public cloud: the superpower of AI

Across many types of projects, AI capabilities are being democratised. Public cloud vendors deliver products, like Sagemaker or CloudML, that allow you to build AI capabilities for your products without a deep theoretical understanding. This means that soon a shortage of AI/ML scientists won’t be your biggest challenge.  Projects can use existing AI tools to build world-class solutions such as customer support, fraud detection, and business intelligence.

My recommendation is that you should head towards data enablement. First invest in data pipelines, data quality, integrations, and cloud-based data warehouses/data lakes. So rather than hiring over-skilled AI/ML scientists, build up the essential twin pillars: cloud ops and a skilled team of data engineers.

Enablement – not enforcement

In my experience, many organisations have been struggling to transition to public cloud due to data confidentiality and classification issues. Business units have been driving the adoption of modern AI-based technology. IT organisations have been pushing back due to security concerns.  After plenty of heated debate we have been able to find a way forward. The benefits of using public cloud components in advanced data processing have been so huge that IT has to find ways to enable the use of public cloud.

The solution for this challenge has proven to be proper data classification and the use of private on-premises facilities to support operations in public cloud. Data location should be defined based on the data classification. Solita has been building secure but flexible automated cloud governance controls. These enable business requests but keep the control in your hands, as well as meeting the requirements usually defined by a company’s chief information security officer (CISO). Modern cloud governance is built on automation and enablement – rather than enforcing policies.
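As an illustration of classification-driven data placement (the levels and targets below are hypothetical – real policy comes from your CISO):

```python
# Hypothetical classification levels mapped to allowed data locations.
PLACEMENT = {
    "public": "public-cloud",
    "internal": "public-cloud",
    "confidential": "public-cloud-encrypted",
    "secret": "on-premises",
}

def placement_for(classification: str) -> str:
    """Resolve where data may live; unclassified data fails safe to on-premises."""
    return PLACEMENT.get(classification, "on-premises")
```

The key design choice is the fail-safe default: data that has not yet been classified stays in the private facility, so enablement never turns into accidental exposure.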


  • The pathway to effective AI adoption usually begins by kickstarting or boosting the public cloud journey and competence within the company.
  • Our recommendation – the public cloud journey should start with proper analyses and planning.
  • Solita is able to help with data confidentiality issues: classification, hybrid/private cloud usage and transformation.
  • Build cloud governance based on enablement and automation rather than enforcement.

Download a free Cloud Buyer's Guide

Modern cloud operation: successful cloud transformation, part 2

How to ensure a successful cloud transformation? In the first part of this two-part blog series, I explained why and how cloud transformation often fails despite high expectations. In this second part, I will explain how to succeed in cloud transformation, i.e. how to move services to the cloud in the right way.

Below, there are three important tips that will help you reach a good outcome.

1. Start by defining a cloud strategy and a cloud governance model

We often discuss with our customers how to manage, monitor and operate the cloud, and what should be considered when working with third-party developers. Many customers are also interested in knowing what kinds of guidelines and operating models should be defined in order to keep everything under control.

You don’t need a big team to brainstorm and create loads of new processes to define a cloud strategy and update governance models.

To succeed in updating your cloud strategy and governance model, you have to take a very close look at things and realise that you are moving things to a new environment that functions differently from traditional data centers.

So it's important to understand that, for example, software projects can be developed in a completely new way in the cloud, with multiple suppliers. However, keep in mind that this way of working requires a governance model and instructions covering the minimum requirements for new services that are to be linked to the company's systems, and how their maintenance and continuity will be taken care of. For instance, you have to decide how to ensure that cloud accounts, data security and access management are handled properly.

2. Insist on having modern cloud operation – choose a suitable partner or get the needed knowhow yourself

Successful cloud transformation requires the right kind of expertise. However, traditional service providers rarely have the required skills. New kinds of cloud operators have emerged to solve this issue. Their mission is to help customers manage cloud transformation. How can you identify such operators, and what should you demand from them?

The following list is formed on the basis of views presented by Gartner, Forrester and AWS on modern operators. When you are looking for a partner…

  • demand a strong DevOps culture. It forms a good foundation for automation and development of services.
  • ensure cloud-native expertise on platforms and applications. It gives you confidence that the project is led by an expert who knows the whole package and understands how applications and platforms work together.
  • check that your partner has skills in multiple platforms. AWS, Azure and Google are all good alternatives.
  • ask if your partner masters automatic operation and predictive analytics. These skills reduce variable costs and contribute to quick recovery from incidents.
  • demand agile operating methods, as well as transparency and continuous development of services. With clear and efficient service processes, cost management and reporting are easier and the customer understands the benefits of development.

Solita’s answer to this is a modern cloud operation partnership. In other words, we help our customers create operating models and cloud strategies. A modern cloud operator has an understanding of the whole package that has to be managed and helps to formulate proper operating models and guidelines for cloud development. It’s not our purpose to limit development speed or opportunities, but we want to pay attention to things that ensure continuity and easy maintenance. After all, the development phase is only a fraction of the whole application life cycle.

The developer’s needs are taken into account, and at the same time, for instance the following operating models are determined: How are cloud accounts created and who creates them? How are costs monitored? What kind of user rights are given and to whom? What sort of development tools are used or what targets should be achieved with them? We are responsible for deciding what things are monitored and how.

In addition, the right kind of partner knows what things should be moved to the cloud in the first place.

When moving to the cloud, the word “move” doesn't fit very well, because it is rarely advisable simply to move workloads as they are. That is why it's better to talk about transformation: transforming an existing workload, with at least some modifications, towards cloud native.

In my opinion, application development is one important skill a modern cloud operator should master. Today, the cloud can be seen as a platform where different kinds of systems and applications are coded. It takes more than just the ability to manage servers to succeed in this game. Therefore, DevOps culture determines how application development and operation work together. You have to understand how environments are automated and monitored.

In addition to monitoring whether applications are running, experts are able to control other things too. They can analyse how an application is working and whether it is performing effectively. A strong symbiosis between developers and operators helps to continuously develop and improve the skills that are needed to improve service quality. At best, this kind of operator can promise their customers that services are available and running all the time, and if they are not, they will be fixed at a fixed monthly charge. The model aims to minimise manual operation and work that is separately invoiced per hour. For instance, the model has allowed us to reduce our customers' billable hours by up to 75%.

With the addition of knowledge on the benefits and best features of different cloud services, as well as capacity use and invoicing, you get a package that serves customers’ needs optimally.

3. Don't try to save money on the migration! Make the implementation project gradual


Lift & shift type transfers, i.e. moving old environments as they are, don’t generate savings very often. I’m not saying that it couldn’t happen, but the best benefits are achieved by looking at operating models and the environment as a whole. This requires a comprehensive study of the things that should work in the cloud and how the application is integrated in other systems.

The whole environment and its dependencies should be analysed, and all services should be checked one by one. After that you plan the migration, and it is time to think about what can be automated. This requires time and money.

A migration that leads to an environment that has been automated as much as possible is a good target. It should also lower recurrent costs related to operation and improve the quality of the service.

Solita offers all services that are needed in cloud transformation. If you are interested in the subject, read more about our services on our website. If you have any questions, please feel free to contact us!

Download a free Cloud Buyer's Guide

Deploying an application on a global scale

Running your application at global scale is now much easier than ever before. Here I go through one scenario for achieving this.

Building and deploying an application on a global scale is now easier than ever. Using the cloud you can easily have your application running close to the customers no matter where they are located.

There are some things to take into consideration when planning and building a deployment. In this post I am using Microsoft Azure service offering as an example but at least Amazon Web Services and Google Cloud Platform have similar services available.

As with real estate, most important thing is location, location, location.

Your end users' location defines which cloud to use and where to push the application. China is a totally different game compared to running everything in the EU/US area.

Make sure that your application is built to scale from the start; for example, the database should be geo-replicated – on Azure, SQL Database or Cosmos DB.

Once you have mapped the regions where an application will be mostly used you can start planning the deployment process.

Traffic Manager as the geo-balancer

Use Azure Traffic Manager to route incoming requests to the nearest region, giving the lowest latency between the application and its end users. With this design, if one region has an outage, the nearest one will continue to serve customers. Also make sure you put different regions into separate resource groups, as this lets you manage each region as a single collection.

Failover can be done with a Traffic Manager health probe, which probes the application and checks the health of app services, storage and the database. Make sure you follow design patterns for the health probe so that lower-priority outages don't mark the whole region as unavailable.
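The idea behind that design pattern can be sketched like this (plain Python; the dependency names are illustrative):

```python
def region_available(probe_results: dict, critical: set) -> bool:
    """A region is marked unavailable only when a critical dependency
    fails; lower-priority outages are reported but don't trigger failover."""
    return all(ok for name, ok in probe_results.items() if name in critical)

checks = {"app-service": True, "database": True, "metrics-export": False}
region_available(checks, critical={"app-service", "database"})  # True
```

Here a broken metrics exporter is logged but does not fail the region, while a database outage would; deciding which dependencies count as critical is the core of the pattern.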

Traffic Manager also supports several routing methods; in this case we use Geographic, as we want location to be the deciding factor in where traffic is routed.
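To illustrate geographic routing combined with failover to a nearby healthy region (Traffic Manager does this at the DNS level; the region tables below are made up for the example):

```python
# Illustrative mapping of user geography to home region, plus a
# hand-picked fallback region for each.
REGION_OF = {"EU": "westeurope", "US": "eastus", "APAC": "southeastasia"}
NEAREST_FALLBACK = {"westeurope": "eastus",
                    "eastus": "westeurope",
                    "southeastasia": "eastus"}

def route(user_geo: str, healthy: set) -> str:
    """Route to the user's home region; fail over if it is unhealthy."""
    region = REGION_OF.get(user_geo, "westeurope")  # default region
    return region if region in healthy else NEAREST_FALLBACK[region]
```

So an EU user normally lands in West Europe, but is silently served from East US while West Europe is down, which is exactly the outage behaviour described above.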

Multiregion deployment needs some extra attention

For storage, the best option is Read-access geo-redundant storage (RA-GRS), as this gives the best replication options for this use case. But there are some caveats to consider. For example, if there is a region-wide outage, there is a short period when the data is in read-only mode until the failover from one region to the other completes.
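The read-only window behaves roughly like this (a toy model for illustration, not the Azure Storage SDK):

```python
class RaGrsStore:
    """Toy model of RA-GRS: reads fall back to the secondary endpoint,
    writes fail while the primary is down (the read-only window)."""

    def __init__(self):
        self.primary = {}
        self.secondary = {}   # geo-replicated copy (simplified as synchronous)
        self.primary_up = True

    def write(self, key, value):
        if not self.primary_up:
            raise RuntimeError("primary unavailable: storage is read-only "
                               "until regional failover completes")
        self.primary[key] = value
        self.secondary[key] = value

    def read(self, key):
        source = self.primary if self.primary_up else self.secondary
        return source[key]
```

During an outage, reads keep working from the secondary while writes raise errors until failover completes – so the application should be prepared to degrade gracefully to read-only behaviour.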

Deploying an application into a single region is pretty straightforward. But as we are planning to do a multi-region deployment, we should deploy the application into multiple regions in an automated fashion. If you are using Azure DevOps, all you have to do is make several deployment slots to push the application into different regions.

This article covered just one scenario of what to consider when deploying an application to the cloud. The more cloud-capable you build your application from day one, the more benefits the cloud can offer. Don't let the old ways hold you back. Explore and test different workloads, try containers, and see how easy it is to achieve true scaling and build deployment pipelines in the cloud.

Modern cloud operation: successful cloud transformation, part 1

Today, many people are wondering how they could implement cloud transformation successfully. In the first part of this two-part blog series, I explain why and how cloud transformation often fails despite high expectations. In the second part, I will describe how cloud transformation is made and what the correct way of migrating services to the cloud is.

Some time ago, at a Solita HUB event, I talked about modern cloud operation and successful cloud transformation. Experiences that our customers had shared with us served as the starting point for my presentation, and I want to share some of them with you too.

People have often started to use the cloud with high expectations, but those expectations have not really been met. Or they have ended up in a situation where nobody has a good picture of what has been moved to the cloud or what has been built there. In other words, they've ended up in a cloud service mess.

People have often started to use the cloud with high expectations, but those expectations have not really been met.

In recent years, people have talked a lot about the cloud and how to start using it. Should they move their systems there by lifting and shifting their existing resources as they are, or should they build new cloud-native applications and systems? Or should they do both?

They might have decided to make the cloud transformation with the help of their own IT department, using an existing service provider or – a bit secretly – with a software development partner. No matter what the choice is, it feels like people are after quick wins without stopping to think about the big picture and how to govern it all.

The cloud is not a data centre

Quite often I hear people say “the cloud is only somebody else's data center”. That is exactly what it is if you don't know how to use it properly. When we think about how the systems of a traditional service provider, or of our own IT department, have been built, it's no wonder you hear statements like this.

Download a free Cloud Buyer's Guide

Before, the aim was to offer servers from a data center, with maintenance and monitoring for the operating systems. The idea was that you first specified what kind of capacity you wanted and how environments should be monitored. Then it was agreed how to react to possible alerts.

The architecture has been designed to be as cost-efficient as possible. In this model, efficiency has relied on virtualisation and, for instance, on the decision whether to build HA systems or not. Especially solutions with two data centers have traditionally been expensive.

When people have started to move this old operating model to the cloud, it hasn’t functioned as they had planned and hoped for. Therefore, it can be said that the true benefits of the cloud will not be gained in the traditional way.

Cloud transformation is not only about moving away from own or co-location data centers. It’s about a comprehensive change towards new operating methods.

It is very wise to build the above-mentioned HA systems in the cloud, because they won't necessarily cost much, or they are built-in features. The cloud is not a data centre, and it shouldn't be treated as one.

Of course, it’s possible to achieve savings with traditional workloads, but still, it is more important to understand that operating methods have to change. Old methods are not enough, and traditional service partners don’t often have adequate skills to develop environments using modern means.

Lack of management causes trouble in cloud services

In some cases, services are built in the cloud together with a software development partner who has promised to create a well-functioning system quickly. And at its best, that is what the cloud allows. But without management or a proper governance model, problems often occur. The number of different cloud service accounts may grow, and nobody in the organisation seems to know how to manage the accounts or where the costs come from.

In addition, surprisingly often people believe that cloud services do not require maintenance and that any developer is able to build a sustainable, secure and cost-effective environment. They are surprised to notice that it’s not that simple.

‘No-Ops’ – and perhaps ‘serverless’ belongs in the same category – is a term that has unfortunately been somewhat misunderstood. Only a few software development partners have corrected this misunderstanding, or they haven't realised themselves that cloud services do require maintenance in reality.

It's true that services that function relatively well without special maintenance can be built in the cloud, but in reality No-Ops doesn't exist without seamless cooperation between developers and operations experts – in other words, a DevOps culture. No-Ops means extreme automation, which doesn't happen on its own. It isn't always possible, and it is not always worth pursuing.

At Solita, operations have been taken to an entirely new level. Our objective is to make ourselves “useless” as far as daily routines are concerned. We call this modern cloud operation. With this approach we have, for instance, managed to reduce our customers' hourly billing considerably. We have also managed to extend our operating methods from customers' data centers all the way to the cloud.

In my next blog, I will focus on things that should be considered in cloud transformation and explain what modern cloud operation means in practice.

Anton works as a cloud business manager at Solita. Producing IT cost-efficiently, from desktops to data centers, is close to his heart. When he is not working on clouds, he enjoys skiing, running, cycling and playing football. He is excited about all types of sports gadgets and likes to measure and track everything.

Download a free Cloud Buyer's Guide