The learning curve of a Cloud Service Specialist at Solita

Tommi Ritvanen is part of the Cloud Continuous Services team at Solita. The team consists of dozens of specialists who ensure that customers' cloud solutions run smoothly. Here, Tommi shares his experiences of his learning path, the culture and team collaboration.

I’ve been working at Solita for six months as a Cloud Service Specialist. I’m part of the cloud continuous services team, where we take care of our ongoing customers and ensure that their cloud solutions are running smoothly. After our colleagues have delivered a project, we basically take over and continue supporting the customer with the next steps.

What I like about my job is that every day is different. I get to work with and learn about different technologies; we work with all the major cloud platforms, such as AWS, Microsoft Azure and Google Cloud Platform. What also brings variety to our days is that we serve and support different types of customers. The requests we get vary a lot, so there are no boring days in this line of work.

What inspires me the most in my role is that I’m able to work with new topics and develop my skills in areas I haven’t worked on before. I wanted to work with public cloud, and now I’m doing it. I like the way we exchange ideas and share knowledge in the team. This way, we can find ways to improve and work smarter.

We have this mentality of positively challenging the status quo. Also, the fact that the industry is changing quickly brings a nice challenge: to be good at your job, you need to be aware of what is going on. Solita also has an attractive client portfolio and a track record of building very impactful solutions, so it’s exciting to be part of all that too.

I got responsibility from day one

Our team has grown a lot which means that we have people with different perspectives and visions. It’s a nice mix of seniors and juniors, which creates a good environment for learning. I think the collaboration in the team works well, even though we are located around Finland in different offices. While we take care of our tasks independently, there is always support available from other members of the cloud team. Sometimes we go through things together to share knowledge and spread the expertise within the team.

The overall culture at Solita supports learning and growth; there is a really low barrier to asking questions, and you can ask for help from anyone, even people outside your team. I joined Solita with very little cloud experience, but I’ve learned so much during the past six months. I was given responsibility from the beginning and learned while doing, which is the best way of learning for me.

From day one, I got the freedom to decide which direction I wanted to take in my learning path, including the technologies. We have study groups and flexible opportunities to get certified in the technologies we find interesting.

As part of the onboarding process, I did a practical training project executed in a sandbox environment. We started from scratch, built the architecture and drew up the process like we would in a real-life situation, navigating the environment and practising the technologies we needed. The process itself and the support we got from more senior colleagues were highly useful.

Being professional doesn’t mean being serious

The culture at Solita is very people-focused. I’ve felt welcome from the beginning, and even though I’m the only member of the cloud continuous services team here in Oulu, people have adopted me as part of the office crew. The atmosphere is casual, and people are allowed to have fun at work. Being professional doesn’t mean being serious.

People here want to improve and go the extra mile in delivering great results to our customers. This means that to be successful in this environment, you need to have the courage to ask questions and look for help if you don’t know something. The culture is inclusive, but you need to show up to be part of the community. There are many opportunities to get to know people, such as coffee breaks and social activities. We also share stories from our personal lives, which makes me feel that I can be my authentic self.

We are constantly looking for new colleagues in our Cloud and Connectivity Community! Check out our open positions here!

My Thursday at AWS re:Invent 2019

Keynote by Dr. Werner Vogels

Dr. Vogels is the CTO of Amazon. The keynote started with very detailed information about virtual machine architecture and its evolution over the last 20-25 years. He said that the AWS Nitro microVM is perhaps the most essential component in providing a secure and performant virtual machine environment. It has enabled the rapid development of new instance types.

Ms. Clare Liguori (Principal Software Engineer, AWS) gave detailed information about container services. In AWS there are two container platforms: ECS on EC2 virtual machines, and the serverless Fargate. If you compare scaling speed, the Fargate service can follow the needed capacity much faster, avoiding both under-provisioning (which hurts performance) and over-provisioning (which hurts cost). With ECS on EC2 you have two scaling phases: first you need to scale up your EC2 instances, and only after that can you launch the tasks.

During the keynote, Mr. Jeff Dowds (Information Technology Executive, Vanguard) described their journey from a corporate data center to AWS. Vanguard is a registered investment advisor located in the USA with over 5 trillion US dollars in assets. Mr. Dowds made the case for the public cloud with hard facts: 30% savings in compute costs, 30% savings in build costs, and finally a 20x deployment frequency thanks to automation. Changing the deployment mindset is, I think, the most important change for Vanguard. As the slides said, they now have the ability to innovate!

Building a pocket platform as a service with Amazon Lightsail – CMP348

Mr. Robert Zhu (Principal Technical Evangelist, AWS) held a chalk talk session about the AWS Lightsail service. He started by saying that this would be the most anti-re:Invent talk of the event in terms of scaling, high availability and so on. The crowd laughed out loud.

In the chalk talk, the example app we deployed was a progressive web app (PWA). PWAs try to look like native apps, e.g. on different phones. They typically run on top of the web browser, with UI code shared between operating systems.

The Lightsail service provides public static IP addresses and a simple DNS service that you can use to connect a static IP address to your user-friendly domain name. It supports wildcard records and a default record, which is nice. The price for outbound traffic is very affordable: the 10 USD plan includes 2 TB of outbound traffic.
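
To make this concrete, here is a minimal sketch with boto3 of allocating a static IP, attaching it to an instance and pointing an A record at it; the instance, IP and domain names are hypothetical:

```python
# A minimal sketch, assuming a Lightsail instance and DNS zone already exist.
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Allocate a static IP and attach it to an existing Lightsail instance.
lightsail.allocate_static_ip(staticIpName="blog-ip")
lightsail.attach_static_ip(staticIpName="blog-ip", instanceName="blog-server")

# Look up the allocated address and point an A record at it.
ip = lightsail.get_static_ip(staticIpName="blog-ip")["staticIp"]["ipAddress"]
lightsail.create_domain_entry(
    domainName="example.com",  # the zone must already exist in Lightsail DNS
    domainEntry={"name": "app.example.com", "type": "A", "target": ip},
)
```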

We spent a lot of time on how to configure a server in the traditional way via an SSH prompt: installing Docker, acquiring a certificate from Let’s Encrypt, etc.

The Lightsail service has no connection to a VPC, no IAM roles, and so on. It is basically just a virtual server, so it is unsuitable for building a modern public cloud enterprise setup.

Selecting the right instance for your HPC workloads – CMP409

Mr. Nathan Stornetta (Senior Product Manager, AWS) held this builder session. He is the product manager for AWS ParallelCluster. With on-premises solutions you almost always need to make choices about what to run and when to run it. With the public cloud’s elastic capacity, you don’t have to queue for resources, and you don’t pay for what you are not using.

The term HPC stands for high-performance computing, which basically means that your workload does not fit into one server and you need a cluster of servers with high-speed networking. Within the cluster, proximity between servers is essential.

AWS offers more than 270 different instance types. Selecting the right one requires experience with both the workload and the offering. A nice cheat sheet for decoding instance-type names was shown in the session.

If your workload needs high disk performance in and out of the server, the default AWS-recommended choice would be the Amazon FSx for Lustre cluster storage solution.

If you decide to use the Elastic File System (EFS) service, you should first think about how much performance you need rather than what size you need. The design of EFS promises 200 MBps of throughput per 1 TB of stored data, so you should size the file system by the needed performance to ensure your application has enough I/O bandwidth. For example, at that ratio an application needing 1 GBps of throughput would provision around 5 TB, even if it stores less data.

The newest choice is the Elastic Fabric Adapter (EFA), which was announced a couple of months ago. More information about EFA can be found here: https://aws.amazon.com/hpc/efa/
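
As a rough illustration of how an EFA is requested, here is a hedged boto3 sketch for launching an HPC node with an EFA network interface; the AMI, subnet and security group IDs are placeholders:

```python
# A minimal sketch; EFA requires a supported instance type such as c5n.18xlarge.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c5n.18xlarge",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "Groups": ["sg-0123456789abcdef0"],
        "InterfaceType": "efa",           # attach an Elastic Fabric Adapter
    }],
)
```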

If you don’t have experience of which storage would work best for your workload, it is strongly recommended to test each one and make the decision after that.

Intelligently automating cloud operations – ENT305

This session was a workshop. In workshop sessions there are multiple tables working on the same topic, whereas in builder sessions there is one small table per topic. So there were more than a hundred people doing the same exercise.

First, Mr. Francesco Penta (Principal Cloud Support Engineer, AWS) and Mr. Tipu Qureshi (Principal Engineer, AWS) gave a short overview of the services we would use in the session. I want to mention a few of them. AWS Health keeps track of the health of the different services in your account. For example, it can alert you if your ACM certificate is not able to renew automatically (e.g. due to missing DNS records) or if a VPN tunnel is down.

The other service was AWS Auto Scaling’s predictive scaling. It is important if you want to avoid serious under-provisioning. If you just react to e.g. a CPU metric from the last 5 minutes, you are already late. And if your horizontal scaling takes a while to bring new nodes into service, predictive scaling helps you get more stable performance.
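
For a taste of what this looks like in practice, here is a hedged sketch of a scaling plan with predictive scaling enabled, using the boto3 autoscaling-plans API; the plan, tag and Auto Scaling group names are hypothetical:

```python
# A minimal sketch: target tracking on CPU plus predictive scaling forecasts.
import boto3

plans = boto3.client("autoscaling-plans")

plans.create_scaling_plan(
    ScalingPlanName="web-fleet-plan",
    ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["web"]}]},
    ScalingInstructions=[{
        "ServiceNamespace": "autoscaling",
        "ResourceId": "autoScalingGroup/web-asg",
        "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
        "MinCapacity": 2,
        "MaxCapacity": 20,
        # Reactive part: keep average CPU around 50%.
        "TargetTrackingConfigurations": [{
            "PredefinedScalingMetricSpecification": {
                "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        }],
        # Predictive part: forecast load and scale ahead of it.
        "PredictiveScalingMode": "ForecastAndScale",
        "PredefinedLoadMetricSpecification": {
            "PredefinedLoadMetricType": "ASGTotalCPUUtilization"
        },
    }],
)
```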

The workshop can be found here: https://intelligent-cloud-operations.workshop.aws/

I’m familiar with the tooling, so I could have yelled “Bingo!” as one of the first to finish. I was happy to finish early and go to the hotel for a short break before Solita’s customer event and re:Play. re:Play starts at 8 pm at the Las Vegas Festival Grounds, with music, food, drinks and more than 30,000 pairs of eyes.

Would you like to hear more about what happens at re:Invent 2019? Sign up to our team’s WhatsApp group to chat with them, and register for the What happens in Vegas won’t stay in Vegas webinar to hear a full summary after the event.

My Wednesday at AWS re:Invent 2019

It was an early morning today, because the alarm clock woke me up around 6 am. The day started with the Worldwide Public Sector keynote talk at 7 am in the Venetian Palazzo O hall.

Worldwide Public Sector Breakfast Keynote – WPS01

This was my first time taking part in the Public Sector keynote. I’m not sure how worldwide it was; at least Singapore and Australia were mentioned, but I cannot remember anything specific being said about Europe.

Everyone who follows the international security industry even a little cannot have missed how many cities, communities and other organisations have faced ransomware attacks. Some victims paid the ransom in Bitcoin, some did not pay, and many victims just stay quiet. The public cloud is a great way to protect your infrastructure and important data; a summary of how to protect yourself was shown on a slide.

RMIT University from Australia has multiple education programs for AWS competencies, and it was announced that they are now an official AWS Cloud Innovation Centre (CIC). Typical students have some educational background (e.g. a bachelor’s degree in IT) and want to make a move in the job market through re-education. It sounds like a great approach!

The speaker, Mr. Martin Bean from RMIT, showed a picture from The Jetsons (1963, by Hanna-Barbera Productions), which already depicted multiple things that were invented for mass markets much later. Mr. Bean also mentioned two things that caught my attention: more people own a cellphone than a toothbrush, and 50 percent of jobs are going to transform into something else within the next 20 years.

Visit to expo area

After the keynote I visited the expo in the Venetian Sands Expo area before heading to the Aria for the rest of Wednesday. The expo was huge, noisy and crowded; the more detailed tour I did last year was enough for me. At the AWS Cloud Cafe I took a panorama picture, and that was it: I was ready to leave.

I took the shuttle bus towards the Aria. I was very happy that the bus driver dropped us off next to the main door of the Aria hotel, which saves on average 20-30 minutes of queueing in Aria’s parking garage. An important change! On the way, I passed the Manhattan skyline of New York-New York.

Get started with Amazon ElastiCache in 60 minutes – DAT407

Mr. Kevin McGehee (Principal Software Engineer, AWS) was the instructor for the ElastiCache for Redis builder session. In the session we logged in to the AWS Console, opened a Cloud9 development environment and then just followed the clearly written instructions.

The guide for the builder session can be found here: https://reinvent2019-elasticache-workshop.s3.amazonaws.com/guide.pdf

This session was about how to import data into Redis via Python, indexing and refining the data during the import phase. Through refinement, the data becomes information with aggregated scoring, geolocation and so on, which is easier for the requester to use. It was interesting and looked easy.
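
As a flavour of the idea (not the session’s actual code), here is a hedged Python sketch that indexes records for scoring and location at import time; the endpoint, key names and dataset are made up:

```python
# A minimal sketch with redis-py (4.x signatures); the endpoint is a placeholder.
import redis

r = redis.Redis(host="my-cluster.xxxxxx.use1.cache.amazonaws.com", port=6379)

restaurants = [
    {"id": "r1", "name": "Lotus of Siam", "rating": 4.8, "lon": -115.137, "lat": 36.144},
    {"id": "r2", "name": "Raku", "rating": 4.6, "lon": -115.195, "lat": 36.126},
]

for item in restaurants:
    # Store the raw record as a hash ...
    r.hset(f"restaurant:{item['id']}",
           mapping={"name": item["name"], "rating": item["rating"]})
    # ... index it in a sorted set for score-based queries ...
    r.zadd("restaurants:by_rating", {item["id"]: item["rating"]})
    # ... and in a geo index for location-based queries.
    r.geoadd("restaurants:geo", [item["lon"], item["lat"], item["id"]])

# Top-rated restaurants, best first.
print(r.zrevrange("restaurants:by_rating", 0, 9, withscores=True))
```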

Build an effective resource compliance program – MGT405

Mr. Faraz Kazmi (Software Development Engineer, AWS) held this builder session.

Conformance packs, under the AWS Config service, were published last week. They can be deployed at the AWS Organizations level in your account structure. With conformance packs you can bundle a group of Config rules (~governance rules for common settings) into a simple YAML template and get a consolidated view over those rules. A few AWS-managed packs are currently available; “Operational Best Practices For PCI-DSS” is one example. It’s clear that AWS will provide more and more of these rule sets in the upcoming months, and so will the community via GitHub.
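
To show the shape of such a template, here is a hedged sketch that defines a one-rule pack and deploys it with boto3; the pack and bucket names are hypothetical:

```python
# A minimal sketch: one AWS-managed rule wrapped in a conformance pack.
import boto3

config = boto3.client("config")

template = """
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
"""

config.put_conformance_pack(
    ConformancePackName="my-baseline-pack",
    TemplateBody=template,
    DeliveryS3Bucket="my-config-delivery-bucket",  # bucket for pack artifacts
)
```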

There is a timeline view and a compliance view of all your resources, which makes this a very effective tool for getting a consolidated view of resource compliance.

You can find the material here: https://reinvent2019.aws-management.tools/mgt405/en/

By the way, if you cannot find conformance packs, you are possibly using the old Config service UI in the AWS Console. Make sure to switch to the new UI; all new features are added only to the new UI.

The clean-up phase in the guide is not perfect. In addition to the steps in the guide, you have to manually delete the SNS topic and the IAM roles that were created by the wizards. It was a disappointment that no sandbox account was provided.

Best practices for detecting and preventing data exposure – MGT408

Ms. Claudia Charro (Enterprise Solutions Architect, AWS) from Brasilia was the teacher in this session. It was very similar to my previous session, which I had not been aware of; in both sessions we used Config rules and blocked public S3 usage.

The material can be found here: https://reinvent2019.aws-management.tools/mgt408/en/cont/testingourenforcement.html

AWS Certification Appreciation Reception

The Wednesday evening started (as usual) with the reception for certified people at the Brooklyn Bowl. It is once again a nice venue for some food, drinks and music, and for mingling with other people. I’m writing this around 8 pm, as I left a bit early to get a good night’s sleep before Thursday, the last full day.

Photos: Brooklyn Bowl from the outside and inside, the bowling lanes and the dance floor.

On the way back to my hotel (Paris Las Vegas) I found a version 3 Tesla Supercharger station, one of the first v3 stations in the world. It was not too crowded, and the station was big compared with the Supercharger stations in Finland. A v3 Supercharger can provide up to 250 kW of charging power for the Model 3 Long Range (LR), which has a 75 kWh battery. I would have liked to see the new (experimental) Cybertruck model.

Would you like to hear more about what happens at re:Invent 2019? Sign up to our team’s WhatsApp group to chat with them, and register for the What happens in Vegas won’t stay in Vegas webinar to hear a full summary after the event.

My Tuesday at AWS re:Invent 2019

Please also check my blog post from Monday.

Starting from Tuesday, each event venue provides a great breakfast with lightning-fast service. It scales, and it works. It’s always amazing how each venue can provide food service for thousands of people in a short period of time. Most people are there for the first time, so the guidance has to be very clear and simple.

Today started with the keynote by Mr. Andy Jassy. I was not able to join the live session at the Venetian because of my next session. Moving from one location to another takes at least 15 minutes, and you have to be at your session at least 10 minutes early to claim your reserved seat. Since last year, the booking system forces a one-hour gap between sessions in different venues.

Keynote by Andy Jassy on Tuesday

You can find the full recap written by my colleagues here: Andy Jassy’s release roller coaster at AWS re:Invent 2019

Machine learning was the theme of the day. The SageMaker service received tons of new features. Those are explained by our ML specialists here: Coming soon!

So I joined the overflow room at the Mirage for Jassy’s keynote session, where everyone had their own headphones. I have more than 15 years of background in software development, so it was love at first sight with CodeGuru. There is good news and bad news. It is a service for static analysis of your code, for automated code reviews and, last but not least, for a real-time profiling concept via an installed agent.

The profiling information is provided in 5-minute periods and covers several factors: CPU, memory and latency. It looks like a promising product, because Mr. Jassy said that Amazon has already used it internally for a couple of years. So it is a mature product already.

So, what was the bad news? It supports only Java. Nothing to add to that.

The other interesting announcement for me was the general availability of Outposts. Finally, also in Europe, you can have AWS fully managed servers inside your corporate data center. The servers integrate fully with the AWS Console and can be used e.g. for running ECS container services. The starting price of 8,300 USD per month is very competitive, because it already includes roughly 200 cores, 800 GB of memory and 2.7 TB of instance storage. You can additionally add EBS storage, starting from 2.7 TB.

You can find more information here: https://aws.amazon.com/blogs/aws/aws-outposts-now-available-order-your-racks-today/

Performing analytics at the edge – IOT405

This session was a workshop at level 400 (the highest). It was held by Mr. Sudhir Jena (Sr. IoT Consultant, AWS) and Mr. Rob Marano (Sr. Practice Manager, AWS).

Industry 4.0, a.k.a. IoT, was a totally new sector for me. It was a very informative and pleasant session. It was all about the AWS IoT Greengrass service, which can provide low-latency responses while still being a managed platform with tons of features for handling data streams from IoT devices locally.

For many people it was their first touch with the AWS Cloud Development Kit, which I fell in love with about three months ago. It has multiple advantages, like refactoring, strong typing and good IDE support. You can find more information about the AWS CDK here: https://docs.aws.amazon.com/cdk/latest/guide/home.html
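
To give a feel for that strong typing, here is a minimal hedged CDK sketch in Python (v1-era API; the stack and bucket names are made up, not the workshop’s code):

```python
# A minimal AWS CDK v1 app; `cdk deploy` would synthesize and deploy it.
from aws_cdk import core
import aws_cdk.aws_s3 as s3


class WorkshopStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Typed constructs: the IDE can autocomplete properties like `versioned`.
        s3.Bucket(self, "TelemetryBucket", versioned=True)


app = core.App()
WorkshopStack(app, "greengrass-workshop")
app.synth()
```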

In our workshop session we demonstrated receiving a time-series data stream (temperature, humidity, etc.) from an IoT device. In our case the IoT device was an EC2 instance simulating a real device. From the AWS IoT Greengrass console you can e.g. deploy new versions of the analytics functions to the IoT devices.

The material for the workshop can be found on GitHub: https://github.com/aws-samples/aws-iot-greengrass-edge-analytics-workshop

AWS Transit Gateway reference architectures for many VPCs – NET406

This was a breakout session, held by Nick Matthews (Principal Solutions Architect, AWS) in the glamorous Ballroom F at the Mirage, which fits more than a thousand people. The room was almost full, so it was a very popular session.

To summarise the topic: there are several good ways to build interconnectivity between multiple VPCs and a corporate data center. At a small scale you can do things more manually, but at a large scale you need automation.

One solution presented for automation was based on the use of tags. An autonomous team (the owner of account A) tags their shared resources in a predefined way. The transit account picks up those changes via CloudTrail logging: each modification creates a CloudTrail audit event, which triggers a Lambda function. The function checks whether a change is required and writes a change request item to a metadata table in DynamoDB to wait for approval. The network operator is notified via SNS (Simple Notification Service) and can then approve (or decline) the modification. Another Lambda then makes the needed route table modifications in the transit account and in account A.
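
Here is a hedged sketch of the “request” half of that flow as a Lambda handler; the table and topic names come from environment variables and are hypothetical:

```python
# A minimal sketch: record a tag-driven change request and notify the operator.
import json
import os
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")


def handler(event, context):
    """Triggered by a CloudTrail-sourced event for a tag change."""
    detail = event.get("detail", {})
    request_id = str(uuid.uuid4())

    # Record the proposed route change; it waits here for operator approval.
    dynamodb.Table(os.environ["CHANGE_TABLE"]).put_item(Item={
        "requestId": request_id,
        "sourceAccount": detail.get("userIdentity", {}).get("accountId", "unknown"),
        "rawEvent": json.dumps(detail),
        "status": "PENDING_APPROVAL",
    })

    # Notify the network operator via SNS.
    sns.publish(
        TopicArn=os.environ["OPERATOR_TOPIC_ARN"],
        Subject="Transit Gateway route change requested",
        Message=f"Change request {request_id} is waiting for approval.",
    )
```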

If you are interested, you can watch a video from August 2019: https://pages.awscloud.com/AWS-Transit-Gateway-Reference-Architectures-for-Many-Amazon-VPCs_2019_0811-NET_OD.html

If you prefer to wait, I’m pretty sure this re:Invent talk was also recorded and will appear on the AWS YouTube channel in a few weeks: https://www.youtube.com/user/AmazonWebServices

Fortifying web apps against bots and scrapers with AWS WAF – SEC357

Mr. Yuri Duchovny (Solutions Architect, AWS) held the session. It was the most intensive session so far, with lots to do and many architectural examples and usage scenarios on the demo screen. The AWS WAF service has received a shiny new UI in the AWS Console. AWS has also published a few new features in the last few weeks, e.g. managed rules that add protection in a non-disruptive way. WAF itself used to have only a few predefined protection rules; only XSS (cross-site scripting) and SQLi (SQL injection) were supported, and all other rules had to be configured manually, e.g. as regular expressions.

WAF is a service that should always be turned on for CloudFront distributions, Application Load Balancers (ALB) and API Gateway.
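
As a small illustration, attaching an existing web ACL to a regional resource such as an ALB is a single call in the new wafv2 API; the ARNs below are placeholders:

```python
# A minimal sketch with the wafv2 API; for CloudFront, the web ACL is
# attached on the distribution itself instead of via this call.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/EXAMPLE",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/EXAMPLE"
    ),
)
```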

The workshop material is again public and can be found here: https://github.com/gtaws/ProtectWithWAF

Encryption options for AWS Direct Connect – NET405

Mr. Sohaib Tahir (Sr. Solutions Architect, AWS) from Seattle was the teacher in this session. It was more listening than doing because of the short period of time. We attendees were a group of seven from the USA, Japan and Finland.

There were five possibilities for encrypting a Direct Connect connection:

1. Private VIF (virtual interface) + application-layer TLS
2. Private VIF + virtual VPN appliances (can be in transit VPC)
3. Private VIF + detached VGW + AWS Site-to-site VPN (CloudHub functionality)
4. Public VIF + AWS Virtual Private Gateway (GP, IPSec tunnel, BGP)
5. Public VIF + AWS Transit Gateway (BGP, IPSec tunnel, BGP) NEW!

It’s good to remember that a single VPN connection has a 1.25 Gbps limit, which can easily be hit with a DX connection and e.g. data-intensive migration jobs. The AWS recommendation is to use architecture number five if possible. The fifth architecture requires having a dedicated Direct Connect connection, so you cannot use a shared-model Direct Connect from a third-party operator.

Yesterday AWS published cross-region VPC connectivity via Transit Gateway. During the session Mr. Tahir started to demonstrate this new feature ad hoc, but we ran out of time.

My Monday at AWS re:Invent 2019

I started the day with breakfast at Denny’s. It was nice to have a typical (I think) American breakfast. Thanks, Mr. Heikki Hämäläinen, for the company. By the way, all attendees from Solita are wearing the bright red hoodies shown in the picture; thanks to our Cloud Ambassador Anton Floor. The hoodie makes it a lot easier to spot a colleague in a crowded place. Okay, let’s start going through my actual sessions.

How NextRoll leverages AWS Batch for daily business operations – CMP311

The advertising company’s Tech Lead, Mr. Roozbeh Zabihollahi, briefly described their journey with the AWS Batch service. If I remember correctly, they use about 5,000 CPU-years, which is a huge amount of computing power. It was nice to hear that NextRoll lets its teams choose quite freely which services they want to use. Nowadays Mr. Zabihollahi sees more and more teams looking into AWS Batch as a promising choice, rather than Hadoop or Spark.

Mr. Zabihollahi believes that AWS Batch is good for several things, which he listed on his slides.

If you are considering starting to use AWS Batch, you should also familiarise yourself with the challenges listed on the slides.

Then Mr. Steve Kendrex (Sr. Technical Product Manager, AWS) presented the roadmap of the AWS Batch service. Support for Fargate (the serverless container service) is coming, but Steve could not share details with a wide audience. My personal guess is that spot support for Fargate is coming soon, which would provide a key cost-efficiency factor for batch operations.

Build self-service registration with facial recognition – ARC320

My first builder session this year was about integrating facial recognition into registering guests for an event. Four other attendees and I were led through this interesting topic by Mr. Alan Newcomer (Solutions Architect, AWS). Mr. Newcomer used to live near Las Vegas, which was interesting to hear about.

Each builder session starts with a short queue for the right table, at which you have hopefully reserved a spot beforehand.

The hall has multiple tables, each with seven chairs (one for the teacher and six for participants) and a screen for guidance.

Typically the teacher provides a website with all the information required to do the exercise, plus a unique password for each participant, e.g. for the AWS Console login. After that, each participant works through the exercise on their own, and the teacher helps whenever needed. You need to keep up a good pace the whole time to be able to finish the entire exercise.

During the session we built an application that had three main functionalities: registering a user, handling the RSVP one day before the event, and finally registering the user at the event via facial recognition. You can actually work through the workshop material yourself here: http://regappworkshop.com/

Managing DNS across hundreds of VPCs – NET411

This was my second chalk talk today. It started very well, because right at the beginning the audience heard real-life problems from different attendees. The chalk talk was guided by Mr. Matt Johnson (Manager, Solutions Architecture, WWPS, AWS) and Mr. Gavin McCullagh (Principal System Development Engineer, AWS). They did extremely well.

We were reminded that support for overlapping private hosted zones was published recently. It enables autonomous, structured DNS management in a multi-account environment. For more information, go to: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-route-53-now-supports-overlapping-namespaces-for-private-hosted-zones/

During the session we looked at four different architectures for sharing DNS information across multiple VPCs (~accounts). Number four, “Share and Associate Zones and Rules”, was the most interesting; it suits a massive number of private DNS zones and VPCs. It has a hub account for outbound DNS traffic to the corporate network, and it uses private hosted zone association between VPCs. The association does not yet have native CloudFormation support, but there are several ways to handle it, e.g. CloudFormation custom resources or a custom Lambda function.
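
For reference, the cross-account association itself is a two-step API dance; here is a hedged boto3 sketch with placeholder IDs and profile names:

```python
# A minimal sketch: associate a private hosted zone in one account
# with a VPC owned by another account.
import boto3

ZONE_ID = "Z0123456789EXAMPLE"  # private hosted zone (zone-owning account)
VPC = {"VPCRegion": "eu-west-1", "VPCId": "vpc-0123456789abcdef0"}

# Step 1: the zone owner authorizes the foreign VPC.
zone_owner = boto3.Session(profile_name="zone-account").client("route53")
zone_owner.create_vpc_association_authorization(HostedZoneId=ZONE_ID, VPC=VPC)

# Step 2: the VPC owner completes the association.
vpc_owner = boto3.Session(profile_name="vpc-account").client("route53")
vpc_owner.associate_vpc_with_hosted_zone(HostedZoneId=ZONE_ID, VPC=VPC)
```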

One major feature request was that AWS should support DNS query logging (queries and responses) inside the VPC. The audience wanted to receive the logs in CloudWatch log groups; the logging is needed for security, audit and debugging purposes.

Processing AWS Ground Station data in AWS – NET409

This, my second builder session, sounded very fancy: handling data from satellites. The attendees’ experience with AWS and with satellites varied from one to five on both topics. After the session I needed to update my CV…

In the sky there are a few open satellites. They can be listened to with the AWS Ground Station service, and the data is received into your AWS account. The data link between the Ground Station and your VPC is made via an elastic network interface (ENI).

In the example case we received a 15 Mbps stream for 15 minutes, which was the period during which the satellite was visible to the Ground Station’s antenna system. The stream from the Ground Station always needs to be received by the Kratos DataDefender software, which parses the UDP traffic. The Ground Station traffic does not arrive in the right order and is sometimes missing pieces, which DataDefender handles.

The data stream was analysed in a few phases via an S3 bucket and EC2 instances. The final product was a precise TIFF-format picture of the satellite’s view as it passed the Ground Station antenna. The resolution was about 1 megapixel per kilometre.

Nordics Customer Reception

The evening ended with the pleasant and well-organised Nordics Customer Reception event at the Barrymore. Solita was one of the sponsors of the event. From the terrace we had a great view towards the Encore hotel.

Would you like to hear more about what happens at re:Invent 2019? Sign up to our team’s WhatsApp group to chat with them, and register for the What happens in Vegas won’t stay in Vegas webinar to hear a full summary after the event.

CxO: You need to understand this about the cloud

In order to embrace the full potential of modern techniques like AI/ML, IoT, Agile and DevOps, you need the CLOUD to enable your IT to support the business at full scale.

It’s probably useless to produce yet another list of the benefits of the cloud; there have been enough of those during the past five years. At the same time, digital transformation has been on the table of every CEO, CIO, CDO, CMO, etc. We can all agree that we need modern data warehouses, mobile solutions, agility, new digital services and artificial intelligence. In the cloud, these services can be taken for granted. However, I have noticed that companies and organisations don’t necessarily understand what utilising the cloud really means and requires from them.

Why does the platform matter?

Usually, technology is technology and should not be mixed into strategy discussions, but I think the cloud is an exception, at least for the next x years (I’m not going to predict how many). When advanced digital-transformation companies tell the story of data being the new oil and of all the new business opportunities, they seem to forget to mention the platform that makes it all possible. Without the cloud, flexible and scalable modern data platforms with AI and other innovative solutions fall short. You just cannot get that speed, ease of management and variety of services from on-prem or “private cloud” solutions.

The problem is that senior management may lack an understanding of how big a change the cloud is for their organisation, especially for IT; how tightly it links to digital transformation; and what kind of resistance it may create.

Change resistance and skills gap leads to shadow IT

Still today, there are a number of large companies whose own IT department does not know how to properly utilise the cloud and, at worst, deliberately slows down cloud-related projects. This is natural: you tend to resist things that are unknown and that push you out of your comfort zone. For these reasons, things aren’t moving at the speed the business wants.

This can create shadow IT inside the company, where business units start to acquire cloud solutions directly so that they do not have to struggle unnecessarily with the mismatched solution offering and lacking capabilities of their own IT.

The key challenge here is that when you start to develop services in an environment where you haven’t built your foundation properly, the continuity and manageability of the services you build are at risk. A lack of centralised management entails unnecessary risks; unmanaged users, for example, become a huge risk.

In this way, cloud environments are created by different business units and different partners with no one thinking about the whole. IT has traditionally thought about these things, but now it is being bypassed. IT’s concerns and resistance towards these types of projects are genuine and valid. If they don’t have the skills to build cloud services, how would the business units have them? Well, the business units have the courage and enthusiasm to go forward, but they are not used to handling subjects related to continuity and manageability.

As I have written previously, this leads to a cloud service mess and an uncontrolled cloud.

Going to the cloud is a big change that needs to be properly addressed and led.

How could this be taken into account and handled better?

Communication

Effective internal communication is needed. When the strategy outlines that we are developing new digital services and ecosystems, it must be made clear that this means deploying new cloud services. This must be repeated! A mention in the management’s monthly letter is just not enough. Many times I’ve been in situations where the customer’s personnel have been told that the environment is moving to the cloud, but no one has really understood what that means.

Leading the change

When communication is in order, you need a leader with the power to act and an understanding of the cloud. If the move to the cloud is not managed properly and with a sufficient mandate, it is difficult to move things forward. Care must be taken to ensure that data-center processes and architectures are not carried over into the cloud. In the cloud, new ways of working are needed, and driving this through may sometimes require the use of those powers.

Engagement and competence development

A good leader understands that know-how does not appear from nowhere and that everything cannot be outsourced. So, while it’s important to bring knowledgeable partners into your projects, it is equally important to involve your own IT from the beginning. Existing competencies should be mapped and, based on that, learning paths formed towards the new cloud roles. A properly motivated old data-center fox can be very eager to update their skills. Old knowledge and skills do not become obsolete overnight; they are just used differently.

Motivation

Going to the cloud is a big change for your staff, not just for IT. Procurement and budgeting also change, contracts are different, and so on. This can give rise to fears that one’s job is in danger or one’s skills are not enough. By encouraging competence development and rewarding achievements, you create a positive atmosphere where everyone has the same goal. This is not rocket science.

With these things in mind, we can start building a modern digital services platform that supports business and agile development.

Our adaptive cloud transformation framework helps you do this without the need for a massive transformation project; it simply ensures that all the necessary things are taken into account.

The time span, needed resources, etc. are adapted to your needs and your situation.

Contact me if you are interested in learning more.

Anton Floor
Cloud Advisor
anton.floor@solita.fi

AWS Summit Berlin 2019

My thoughts on the Berlin AWS Summit 2019

What is an AWS Summit?

AWS Summits are smaller, free events that happen in various cities around the world. They are “satellite” events of re:Invent, which takes place in Las Vegas every year in November. If you cannot attend re:Invent, you should definitely try to attend an AWS Summit.

Berlin AWS Summit

I have had the pleasure of attending the Berlin AWS Summit for 4 years in a row.

Photo: Werner Vogels.

The event was a two-day event held on 26-27 February 2019 in Berlin. The first day was more focused on management and new cloud users, while the second day had more deep-dive technical sessions. The event started with a keynote by Werner Vogels, CTO of Amazon. This year the Berlin AWS Summit seemed to be very focused on topics around machine learning and AI. I also think there were more attendees this year than in 2018 or 2017.

You will always find sessions that are interesting to you, even if ML and AI are currently not on your radar. For example, I attended the session “Observability for Modern Applications”, which showed how to use AWS X-Ray and App Mesh to monitor and control large-scale microservices running in AWS EKS or similar. App Mesh is currently in public preview, and it looks very interesting!

The partners

Every year there are a lot of stands where various partners showcase their products to passers-by. You can also participate in raffles at the cost of your email address (and the obvious marketing emails that will ensue). Most of them also hand out free swag: stickers, pens, etc.

Solita Oy is an AWS Partner; please check our qualifications on the AWS Partners page.

Differences to previous years

This year there was no AWS Certified lounge, which was a surprise to me. It is a restricted area where people with an active AWS Certification can network with other certified people. I hope it will return next year.

 

Thank you for the event!

Thank you and goodbye

Modern cloud operation: successful cloud transformation, part 2

How to ensure a successful cloud transformation? In the first part of this two-part blog series, I explained why and how cloud transformation often fails despite high expectations. In this second part, I will explain how to succeed in cloud transformation, i.e. how to move services to the cloud in the right way.

Below, there are three important tips that will help you reach a good outcome.

1. Start by defining a cloud strategy and a cloud governance model

We often discuss with our customers how to manage, monitor and operate the cloud, and what should be considered when working with third-party developers. Many customers are also interested in what kinds of guidelines and operating models should be defined in order to keep everything under control.

You don’t need a big team to brainstorm and create loads of new processes to define a cloud strategy and update governance models.

To succeed in updating your cloud strategy and governance model, you have to look at things very closely and realise that you are moving services to a new environment that functions differently from traditional data centers.

So it’s important to understand that, for example, software projects can be developed in a completely new way in the cloud with multiple suppliers. However, keep in mind that this sort of operation requires a governance model and instructions covering the minimum requirements for new services that are to be linked to the company’s systems, and how their maintenance and continuity will be taken care of. For instance, you have to decide how to ensure that cloud accounts, data security and access management are taken care of.

2. Insist on having modern cloud operation – choose a suitable partner or get the needed know-how yourself

A successful cloud transformation requires the right kind of expertise. However, traditional service providers rarely have the required skills. New kinds of cloud operators have emerged to solve this issue; their mission is to help customers manage the cloud transformation. How can you identify such operators, and what should you demand from them?

The following list is based on views presented by Gartner, Forrester and AWS on modern operators. When you are looking for a partner…

  • demand a strong DevOps culture. It forms a good foundation for automation and development of services.
  • ensure cloud-native expertise on platforms and applications. It creates certainty that an expert who knows the whole package and understands how applications and platforms work together is in charge of the project.
  • check that your partner has skills in multiple platforms. AWS, Azure and Google are all good alternatives.
  • ask if your partner masters automatic operation and predictive analytics. These skills reduce variable costs and contribute to quick recovery from incidents.
  • demand agile operating methods, as well as transparency and continuous development of services. With clear and efficient service processes, cost management and reporting are easier and the customer understands the benefits of development.

Solita’s answer to this is a modern cloud operation partnership. In other words, we help our customers create operating models and cloud strategies. A modern cloud operator understands the whole package that has to be managed and helps formulate proper operating models and guidelines for cloud development. Our purpose is not to limit development speed or opportunities, but to pay attention to the things that ensure continuity and easy maintenance. After all, the development phase is only a fraction of the whole application life cycle.

The developer’s needs are taken into account, and at the same time, operating models are determined for questions like these: How are cloud accounts created, and who creates them? How are costs monitored? What kind of user rights are given, and to whom? What sort of development tools are used, and what targets should be achieved with them? We are responsible for deciding what things are monitored and how.

In addition, the right kind of partner knows what things should be moved to the cloud in the first place.

When moving to the cloud, the word “move” doesn’t actually fit very well, because it is rarely advisable simply to move workloads as they are. That is why it’s better to talk about transformation, which means transforming an existing workload, at least with some modifications, towards cloud native.

In my opinion, application development is one important skill a modern cloud operator should master. Today, the cloud can be seen as a platform on which different kinds of systems and applications are built. It takes more than the ability to manage servers to succeed in this game. Therefore, a DevOps culture determines how application development and operations work together. You have to understand how environments are automated and monitored.

In addition to monitoring whether applications are running, experts are able to control other things too. They can analyse how an application is working and whether it is performing efficiently. A strong symbiosis between developers and operators helps to continuously develop and improve the skills needed to improve service quality. At best, this kind of operator can promise their customers that services are available and running all the time, and that if they are not, they will be fixed, all at a fixed monthly charge. The model aims to minimise manual operation and work that is separately invoiced per hour. For instance, the model has allowed us to reduce our customers’ billable hours by up to 75%.

Add to that knowledge of the benefits and best features of different cloud services, as well as of capacity use and invoicing, and you get a package that serves customers’ needs optimally.

3. Don’t try to save in migration! Make the implementation project gradual

 

Lift & shift type transfers, i.e. moving old environments as they are, rarely generate savings. I’m not saying it couldn’t happen, but the best benefits are achieved by looking at the operating models and the environment as a whole. This requires a comprehensive study of what should work in the cloud and how the application is integrated with other systems.

The whole environment and its dependencies should be analysed, and all services checked one by one. After that you plan the migration, and it is time to think about what can be automated. This requires time and money.

A migration that leads to an environment that has been automated as much as possible is a good target. It should also lower the recurring costs of operation and improve the quality of the service.

Solita offers all services that are needed in cloud transformation. If you are interested in the subject, read more about our services on our website. If you have any questions, please feel free to contact us!

Download a free Cloud Buyer's Guide