How to win friends and influence people by consulting?

By definition consulting is easy: just advise people on how to do things in a wiser manner. But how to keep yourself motivated and your skills up to date in this fast-paced world is a totally different matter!

I have done consulting over the years in several different companies and have developed a routine that helps achieve the things described in the opening paragraph. Not everyone is cut out for consulting; it requires a special type of person to succeed in it. I am not saying you need to be especially good on particular topics to be able to share your knowledge with your customers.

The first rule of thumb is that you never, never let your skills get old. It does not matter how busy you are. Always, and I mean always, make some time to study and test new things. If you don't, you are soon obsolete.

The second rule of consulting 101 is that you need to keep yourself motivated. Once work becomes a chore you lose your "sparkle", and customers can sense that. If you want to be on top of your game you need to have that thing which keeps customers coming back to you.

The third rule is that you need to keep your customers happy. Always remember who pays your salary. This should be pretty obvious, though.

The fourth and most important rule is "manage yourself". This is extremely important in this line of work. It is easy to work too much, sleep too little and eventually burn out. Avoiding this takes practice but is absolutely necessary in the long run. To avoid working too much you need to know yourself and recognize the symptoms that signal you are not well. I need to sleep, eat and exercise to avoid this kind of situation. Just saying "work less" is not always possible, so good physical and mental health is essential.

The consulting business can be a cutthroat line of work where, put bluntly, the strongest survive; some describe it as a meritocracy. But at Solita it is not so black and white. We have balanced the game here quite well.
Of course we need to work and do those billable hours. But we have a bit more leg room, and we aim to be the top house in the Nordics, leaving the leftovers for the B-players to collect.

If you still think you might be cut out for consulting work, give me a call, message me on WhatsApp, or contact me by some other means.

Twitter @ToniKuokkanen
IRCnet tuwww
+358 401897586 

Integrating AWS Cognito and others with eIDAS services via a SAML interface

AWS Cognito and Azure AD both support SAML SSO integration, but neither supports encryption and signing of SAML messages. Here is a solution to a problem that all European public sector organizations are facing.

At the end of 2018, the Ministry of Finance of Finland aligned how Finnish public sector organizations should treat public cloud environments. To summarize: a "cloud first" strategy. Everybody should use the public cloud, and if they don't, there must be a clear reason why not. The most typical reason is, of course, classified data. The strategy is an extremely big and clear indication of change in how organizations should treat the public cloud nowadays.

To move forward fast, most applications require an authentication solution. In one of my customer projects I was asked to design the AWS cloud architecture for a new solution with a requirement for strong authentication of citizens and public entities. In Finland, public sector organizations can gain trusted identities through the Suomi.fi authentication service (Suomi means Finland). It integrates banks etc. into a common platform. The service strictly follows the Electronic Identification, Authentication and Trust Services (eIDAS) standard. Currently, and at least for the short-term future, the service supports only SAML integration.

eIDAS SAML with AWS Cognito – Not a piece of cake

Okay, that’s fine. The plan was to use AWS Cognito as a strong security boundary for applications, and it supports “the old” SAML integration. But a few hours later, I started to say No, Why and What. The eIDAS standard requires encrypted and signed SAML messaging. Sounds reasonable. However, I soon found out that AWS Cognito (and, for example, Azure AD) does not support it. My world collapsed for a moment. This was not going to be as easy as I thought.

After I contacted AWS partner services and the service organization, it was clear that I needed to get my hands dirty and build something for this. At Solita we are used to having open discussions and passing information by word of mouth between projects. So I already knew that at least a couple of other projects were facing the same problem: they also use AWS Cognito and they also need to integrate with the eIDAS authentication service. This made my journey more fascinating, because I could solve a problem for multiple teams.

Solution architecture

Red Hat JBoss Keycloak is the star of the day

Again, thanks to open discussion, my dear colleague Ari from Solita Health (see how he is doing during this remote work period) pointed out that I should look into a product called Keycloak. After I found out that it is backed by Red Hat JBoss, I knew it had a strong background. Keycloak is a single sign-on solution which supports, e.g., SAML integration for the eIDAS service and OpenID for AWS Cognito.

Here is a simple reference architecture of the solution account setup (click to zoom):

The solution follows DevOps practices. There is one Git repository for the Keycloak Docker image and one for the AWS CDK project. The AWS CDK project provisions the components inside the dashed square to the AWS account (plus, e.g., CI/CD pipelines not shown in the picture). The rest is done by each project's own IaC repository, because it varies too much.

We run Keycloak as a container in the AWS Fargate service, which always has at least two instances running in two availability zones in the region. The Fargate service integrates nicely with the AWS ALB: for example, if one container is not able to answer a health check request, it will not receive any traffic and will soon be replaced by another container automatically.

Multiple Keycloak instances form a cluster. They need to share data with each other via TCP connections; Keycloak uses JGroups to form the cluster. In the solution, the Fargate service registers (and deregisters) each new container with the AWS Cloud Map service automatically, which provides DNS interfaces to find out which instances are up and healthy. Keycloak uses the JGroups "DNS PING" query method to discover the others via Cloud Map DNS records.

The other thing a Keycloak cluster needs is a database. In this solution we used the AWS Aurora PostgreSQL PaaS database service.

The login flow

The browser is the key integrating element, because it is redirected multiple times, carrying a payload from one service to another. If you don't have previous knowledge of how SAML works, check Basics of SAML Auth by Christine Rohacz.

The (simplified) initial login flow is described below. Yep, even though it is hugely simplified, it still has quite a few steps.

  1. The user accesses the URL of the application. The application is protected by an AWS Application Load Balancer whose listener rule requires the user to have a valid AWS Cognito session. Because the session is missing, the user is redirected to the AWS Cognito domain.
  2. AWS Cognito receives the request and, because no session is found and an identity provider is defined, forwards the user to the Keycloak URL.
  3. Keycloak receives the request and, because no session is found and a SAML identity provider is defined, forwards the user to the authentication service with a signed and encrypted SAML AuthnRequest.
  4. After the user has proven his/her identity at the service, the service redirects the user back to the Keycloak service.
  5. Keycloak verifies and extracts the SAML message and its attributes, and forwards the user back to the AWS Cognito service.
  6. AWS Cognito verifies the OpenID message, fetches more user information from Keycloak using the client secret, and finally redirects the user back to the application's ALB.
  7. The application's ALB receives the identity and finally redirects the user back to the original path of the application.
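The seven steps above can be sketched as a toy hop table. The service names are shorthand for this post, and real hops carry signed SAML/OIDC payloads rather than plain redirects:

```python
# Toy model of the login redirect chain. Each entry is one browser hop;
# the comments map back to the numbered steps in the text.
HOPS = [
    ("app_alb",  "cognito"),   # 1. no ALB session -> Cognito domain
    ("cognito",  "keycloak"),  # 2. no Cognito session -> external IdP
    ("keycloak", "eidas"),     # 3. signed + encrypted SAML AuthnRequest
    ("eidas",    "keycloak"),  # 4. user authenticated -> SAML response
    ("keycloak", "cognito"),   # 5. attributes extracted -> OIDC redirect
    ("cognito",  "app_alb"),   # 6.-7. OIDC verified -> back to the app
]

def follow(start="app_alb"):
    """Walk the chain and return the ordered list of services visited."""
    visited = [start]
    for src, dst in HOPS:
        assert src == visited[-1], "redirect chain must be contiguous"
        visited.append(dst)
    return visited
```

Note how the browser begins and ends at the application's ALB; every intermediate service only sees one inbound and one outbound redirect.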

Now the user has a session with the application ALB (not with the Keycloak ALB) for several hours.

The application receives a few extra headers internally

The application ALB adds two JWT tokens, via the x-amzn-oidc-accesstoken and x-amzn-oidc-data headers, to each request it sends to the backend. From those headers, the application can easily access information about who is logged in and the user's profile in AWS Cognito. Those headers are only passed between the ALB and the application.

Here is an example of those headers:

Notice: the data is imaginary and for testing purposes.

x-amzn-oidc-accesstoken: {
    "sub": "765371aa-a8e8-4405-xxxxx-xxxxxxxx",
    "cognito:groups": [],
    "token_use": "access",
    "scope": "openid",
    "auth_time": 1591106167,
    "iss": "",
    "exp": 1591109767,
    "iat": 1591106167,
    "version": 2,
    "jti": "xxxxx-220c-4a70-85b9-xxxxxx",
    "client_id": "xxxxxxx",
    "username": "xxxxxxxxx"
}

x-amzn-oidc-data: {
    "custom:FI_VKPostitoimip": "TURKU",
    "sub": "765371aa-a8e8-4405-xxxxx-xxxxxxxx",
    "custom:FI_VKLahiosoite": "Mansikkatie 11",
    "custom:FI_firstName": "Nordea",
    "custom:FI_vtjVerified": "true",
    "custom:FI_KotikuntaKuntanro": "853",
    "custom:FI_displayName": "Nordea Demo",
    "identities": "[{\"userId\":\"72dae55e-59d8-41cd-a413-xxxxxx\",\"providerName\":\"\",\"providerType\":\"OIDC\",\"issuer\":null,\"primary\":true,\"dateCreated\":1587460107769}]",
    "custom:FI_lastname": "Demo",
    "custom:FI_KotikuntaKuntaS": "Turku",
    "custom:FI_commonName": "Demo Nordea",
    "custom:FI_VKPostinumero": "20006",
    "custom:FI_nationalIN": "210281-9988",
    "username": "",
    "exp": 1591106287,
    "iss": ""
}


Multiple security elements and best practices are in use in this solution as well. For example, each environment of each system has its own AWS account as the first security boundary. So, there is a separate Keycloak installation for each environment.

A few secret strings are generated into AWS Secrets Manager and used by the Keycloak service via secret injection at runtime through the Fargate task definition. For example, the OpenID secret is generated and shared via AWS Secrets Manager and is never published to a code repository etc.
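Inside the container, the injected secret then appears as a plain environment variable. A minimal sketch of the consuming side; the variable name is a made-up example, not the real configuration:

```python
import os

def get_oidc_client_secret() -> str:
    """Read the OpenID client secret injected by the Fargate task
    definition as an environment variable, failing fast if the
    injection did not happen. The variable name is hypothetical."""
    secret = os.environ.get("KEYCLOAK_OIDC_CLIENT_SECRET", "")
    if not secret:
        raise RuntimeError("OIDC client secret was not injected at runtime")
    return secret
```

This keeps the secret out of the image and the repository: it exists only in AWS Secrets Manager and in the running task's environment.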

Only the solution's realm of the Keycloak service is published; e.g. the admin panel of the default realm is not exposed to the internet. A realm is a Keycloak concept for hosting multiple solutions inside a single Keycloak system with boundaries between them.

Keycloak stores user profiles, but they can be cleaned up automatically if the project requires it.

About me

I’m a cloud architect/consultant for public sector customers in Finland at Solita. I have a long history with AWS. I found a newsletter from September 2008 announcing the EBS service. My brother and I were excited, commenting “finally persistent storage for EC2”. EC2 was extended to Europe a month later. I know for sure that I have used AWS services since at least 2008. Of course, the years have not all been the same, but it is nice to have some memories with you.

What have you always wanted to know about the life of Solita’s Cloud expert?

What’s it like to work at Solita and in the Cloud team? During recruitment meetings and discussions, candidates bring up a range of questions and preconceptions about life at Solita. I asked Helinä Nuutinen from our Cloud Services team in Helsinki to answer some of the most common ones. She might be a familiar face to those who’ve had a technical interview with us.


Helinä, could you tell us a little bit about yourself?

I’ve been with Solita for a year as a Cloud Service Specialist in the Cloud Services team. Before that, I worked with more traditional data centre and network services. I was particularly interested in AWS and DevOps, but the emphasis of my previous role was a little different. I participated in Solita’s AWS training, and before I knew it, I started working here.

Currently, I’m part of the operations team of our media-industry customer. Due to coronavirus, we’re working from home, getting used to the new everyday life. I have five smart and friendly team mates, with whom I would normally sit at the customer site from Monday through Thursday. The purpose of our team is to develop and provide tools and operational support services for development teams. We develop and maintain shared operational components such as code-based infrastructure with Terraform and Ansible, manage logs with Elasticsearch Stack as well as DevOps tools and monitoring.

I typically spend my free time outdoors in Western Helsinki, take care of my window sill garden, and work on various crafting and coding projects. Admittedly, lately I haven’t had much energy to sit at the computer after work.

Let’s go through the questions. Are the following statements true or false?

#1 Solita’s Cloud team only works on cloud services, and you won’t succeed if you don’t know AWS, for example.

Practically false. Solita will implement new projects in the public cloud (AWS, Azure, GCP) if there are no regulatory maintenance requirements. We produce regulated environments in the private cloud together with a partner.

To succeed at Solita, you don’t have to be an in-depth expert on AWS environments – interest and background in similar tasks in more traditional IT environments is a great start. If you’re interested in a specific cloud platform, we offer learning paths, smaller projects, or individual tasks.

Many of our co-workers have learned the ins and outs of the public cloud and completed certifications while working at Solita. We are indeed learning at work.

#2 At Solita, you’ll be working at customer sites a lot.

Both true and false. In the Cloud team, it’s rare to sit at the customer site full time. We’re mindful of everyone’s personal preferences. I personally like working on site. Fridays are so-called office days when you have a great reason to visit the Solita office and hang out with colleagues and people you don’t normally meet.

In consulting-focused roles, you’ll naturally spend more time at the customer site, supporting sales as well.

(Ed. Note: Our customers’ wishes regarding time spent on site vary. In certain projects, it’s been on the rise lately. However, we will always discuss this during recruitment so that we’re clear on the candidate’s preferences before they join us.)

#3 Solita doesn’t do product development.

Practically false – we do product development, too. Our portfolio includes at least ADE (Agile Data Engine) and WhiteHat. Our Cloud Services team is developing our own monitoring stack, so we also do “internal development”.

(Ed. Note: The majority of Solita’s sales comes from consulting and customer projects, but we also do in-house product development. In addition to WhiteHat and Agile Data Engine, we develop Oravizio, for example. Together, these amount to about 2 MEUR. Solita’s net sales in 2019 was approximately 108 MEUR.)

#4 If you’re in the Cloud team, you need to know how to code.

Sort of. You don’t have to be a super coder. It also depends on what kind of projects you have in the pipeline. However, in the Cloud Services team, we build all infrastructure as code, do a lot of development work around our monitoring services, and code useful tools. We’re heavy users of Ansible, Python, Terraform and CloudFormation, among others, so scripting or coding skills are definitely an advantage.

#5 The team is scattered in different locations and works remotely a lot.

Sort of true. We have several Cloud team members in Helsinki, Tampere and Turku, and I would argue that you’ll always find a team mate in the office. You can, of course, work remotely as much as your projects allow. Personally, I like to visit the office once a week to meet other Solitans.

To ease the feeling of separation, we go through team news and discuss common issues in bi-weekly meetings. During informal location-specific discussions, we share and listen to each other’s feedback.

#6 I have a lengthy background in the data centre world, but I’m interested in the public cloud. Solita apparently trains people in this area?

True. We offer in-house learning paths if you’re looking to get a new certification, for example, or are otherwise interested in studying a technology. You’ll get peer support and positive pressure to study at the same pace as others.

As mentioned earlier, public cloud plays a major role in our work, and it will only get stronger in the future. The most important thing is that you’re interested in and motivated to learn new things and work with the public cloud.

(Ed. Note: From time to time, we offer free open-to-all training programmes around various technologies and skills.)

#7 The majority of Solita’s public cloud projects are AWS projects.

True. I don’t have the exact figures, but AWS plays the biggest part in our public cloud projects right now. There’s demand for Azure projects in the market, but we don’t have enough people to take them on.

(Ed. Note: The share of Azure is growing fast in our customer base. We’re currently strengthening our Azure expertise, both by recruiting new talents, and by providing our employees with the opportunity to learn and work on Azure projects.)

#8 Apparently Solita has an office and Cloud experts in Turku?

Yes! In Turku, we have six Cloud team members: four in the Cloud Services team (including subcontractors) plus Antti and Toni who deliver consulting around cloud services. I haven’t been to the office but I hear it’s fun.

(Ed. Note: Solita has five offices in Finland: Tampere, Helsinki, Oulu, Turku and Lahti. At the moment, Cloud is represented in all of these except Oulu and Lahti.)

#9 Solita sells the expertise of individuals. Does this mean I’d be sitting at the customer site alone?

Mostly a myth. It depends on the project – some require on-site presence from time to time, but a lot of work can be done flexibly in the office or remotely. No one will be forced to sit at the customer site alone. Projects include both individual and team work. This, too, largely depends on the project and the employee’s own preferences.

#10 Solita doesn’t have a billing-based bonus.

True. If we have one, no one has told me.

(Ed. Note: Solita’s compensation model for experts is based on a monthly salary.)

#11 Solita only works with customers in the public sector.

False. Solita has both public and private sector customers, from many different industries.

(Ed. Note: In 2019, around 55% of our Cloud customers were from the private sector.)

#12 Projects require long-term commitment, so you’ll be working on the same project for a long time.

True, if that’s what you want! When I started at Solita, my team lead asked me in advance what kind of projects I’d like to be part of, and what would be an absolute no-no. I’m happy to note that my wishes have actually been heard. But it might be because I’m not picky. Projects can last from a few days to years, and people might be working on several projects at the same time. Of course, you can also rotate between projects, so a final commitment isn’t necessary.

Helinä was interviewed by Minna Luiro who’s responsible for the Cloud unit’s recruiting and employer image at Solita. Do you have more questions or thoughts around the above topics? You can reach out to Minna: +358 40 843 6245 or

If you’re excited about the idea of joining Solita’s Cloud team, send us an open application. You can also browse our vacancies.

One year of cloud

It has been 12 months since I took a leap into the exciting world of cloud consulting. It is time to look back and throw out some predictions for 2020.

I started my career with cloud at Hewlett Packard aeons ago. At that time the cloud was really immature and we had a huge variety of problems with even basic IaaS deployments. However, even then the benefits were so evident that we thought it was really worth it. Customers were eager to have faster deployments, so somewhat unstable platforms did not seem like a big problem at all.

Along came OpenStack with its promise to simplify and unify things. I still have nightmares about upgrading OpenStack installations (although I have heard that it has gotten better now). Yes, we did some NFV-related things with it, as it was the only option back then.

The world has changed a lot during these years and in the last 12 months I think the pace is speeding up even more.

Joining Solita

I joined Solita’s cloud consulting unit last January with high hopes and a little bit of fear. But after the warm welcome I got from my colleagues (especially Antti Heinonen) and a comprehensive introduction to working at Solita, I immediately felt at ease (thanks again, Antti). This would be the place where I can make a real impact. Working for Solita has proven to be what I imagined it would be: lots of freedom, but also lots of responsibility to take. This is not for everyone, but for me it works.

During these months I have seen the demand for cloud competency in Finland rise to a whole new level. Customers are really looking to get serious with cloud adoption, if they have not done so already. Microservice architecture with containers and FaaS-based offerings is really taking off even more.

My projects have varied a lot during this year, ranging from simple cloud deployments to complete cloud strategy and governance projects.
One of the key skills here is that you need to put yourself into the shoes of the client, and you really need to be aware of all aspects of modern business. Only that way can you really evaluate how to use the cloud in the client’s best interest. I won’t go into details in this post about what has happened on the tech side during these 12 months, as the cloud evolves so fast that it is pointless to list all the new features.

Crystal ball

I estimate that there will be some demand for on-premises cloud during 2020. Azure Stack and AWS Outposts both seem pretty interesting, and I hope we see some cases with them. The price tag is quite big, but not impossible, if you have a real use for them.
Serverless will keep on winning, as its benefits are so obvious, even if there are some hiccups with the management layer from time to time (this needs its own blog entry).

It’s difficult to make predictions, especially about the future. But I will make one clear prediction: FaaS will rule even more during 2020. There are some clear benefits, especially for DevOps-minded developers. It takes the focus away from infrastructure even more, and true IaC deployments are easier to do. It is more cost-effective, as resources are only used when code is executed. Scaling is easier and more immediate than on legacy VMs or containers.

There are some rules of thumb to keep in mind when working with FaaS services. Limit a single function to a single action, limit its scope and keep functions lightweight. Pay attention when using libraries; they have a tendency to slow down functions. Keep functions isolated; don’t call functions from other functions.
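As a sketch of what those rules of thumb look like in practice, here is a deliberately small handler in the Lambda style: one action, tiny scope, no heavy libraries, and no calls to other functions. The event shape is invented for the example:

```python
def handler(event, context=None):
    """Single-purpose function: compute scaled image dimensions.
    One action, small scope, lightweight, no calls to other functions."""
    width = int(event["width"])
    height = int(event["height"])
    scale = float(event.get("scale", 0.5))
    return {"width": round(width * scale), "height": round(height * scale)}
```

Anything beyond this one computation (fetching the image, writing results) would belong in its own function, wired together by events rather than direct calls.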

Cold starts

With FaaS there is the inevitable discussion about cold starts. The problem is that when running functions which are not “warm”, it takes some time to bring the infrastructure up and running. Usually the delay is around 2-3 seconds, which is a deal breaker for some use cases. But there are some workarounds for this, and vendors are constantly improving the situation.

AWS is also a bit ahead here and announced provisioned concurrency at re:Invent 2019, which basically keeps your Lambdas “warm” and cuts the latency. Look for a summary here

Check out Mikhail Shilkov’s analysis of cold starts on AWS, Azure and GCP

To summarize last year: it has been a very busy year for our team, and I think in 2020 we are going to do some great things again.
I hope to see some new colleagues in our team, so if you are interested in working at Solita, don’t hesitate to contact me for a chat at +358 401897586 or tuwww @IRCnet

My Summary of AWS re:Invent 2019

The re:Invent event for 2019 is officially over. However, learning and innovation never stop. It was a full week of learning new things, mingling with other AWS users and basically having a good time in Las Vegas. You can continue learning by following the AWS Events YouTube channel:

Personally, I would like to thank all my colleagues, Solita’s customers and the event organisation’s employees for a simply magnificent conference. Thanks!

As fresh as I could be after a re:Invent.

The view of Oakland in the second picture was very pretty. The gentleman next to me from Datadog said that he has landed at SFO dozens of times, and our plane’s approach direction was a first for him too. Amazing view!

It’s not just about the services, actually it’s more about having the bold mindset to try new things

You don’t have to be an expert in everything you do. If you are, it probably means that you are not following what is out there and are repeating tasks already familiar to you. I am not saying you always have to be Gyro Gearloose. I mean that you should push the limits a bit, take controlled risks for reward and have the will to learn new things.

The three announcements that caught my attention

Fargate spot instances. That is what my project has been waiting for a while. It will bring cost savings and make it possible to stop using ECS EC2 clusters for cost optimization purposes. The rule of thumb is that you can save 70% of your costs with spot instances.

Outposts. I really like the idea that you can get AWS-ecosystem-integrated computing power into corporate data centers. Hybrid environments are the only way for many customers. I would like to see some kind of control panel inside Outposts in the future. Right now, all information indicates that you basically cannot control the servers inside Outposts at any level higher than the OS (e.g. logging in via SSH or Remote Desktop).

Warm Lambdas. I think most Lambda developers have thought about warming up their Lambda resources manually via CloudWatch Events etc. This simplifies the work, as it always should have been. Now you can be sure that if a request comes in, you will have some warm computing capacity to serve it fast. The pricing starts from $1.50/month per 128 MB to have one provisioned concurrency (= warm Lambda).
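Taking the quoted figure at face value ($1.50 per month for one provisioned concurrency per 128 MB; check the current AWS price list before relying on it), a back-of-the-envelope estimate looks like this:

```python
def provisioned_concurrency_monthly_cost(memory_mb: int, concurrency: int,
                                         price_per_128mb: float = 1.50) -> float:
    """Rough monthly cost (USD) of keeping `concurrency` Lambda instances
    warm, scaling the quoted per-128MB price linearly with memory size."""
    return (memory_mb / 128) * concurrency * price_per_128mb

# A 512 MB function with 4 warm instances:
# provisioned_concurrency_monthly_cost(512, 4) -> 24.0 (USD/month)
```

On top of this fixed fee, normal invocation and duration charges still apply, so provisioned concurrency pays off mainly for latency-sensitive endpoints.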

re:Play 2019 photos

We organized a pre-party at The Still in the Mirage beforehand.

Would you like to hear more about what happens at re:Invent 2019? Sign up for our team’s WhatsApp group to chat with them, and register for the What happens in Vegas won’t stay in Vegas webinar to hear a whole sum-up after the event.


My Thursday at AWS re:Invent 2019

Keynote by Dr. Werner Vogels

Dr. Vogels is the CTO of AWS. The keynote started with very detailed information about virtual machine structure and its evolution over the last 20-25 years. He said that the AWS Nitro microVM is maybe the most essential component in providing a secure and performant virtual machine environment. It has enabled rapid development of new instance types.

Ms. Clare Liguori (Principal Software Engineer, AWS) gave detailed information about container services. In AWS there are two container platforms: ECS on EC2 with virtual machines, and the serverless Fargate. If you compare scaling speed, the Fargate service can follow the needed capacity much faster and avoid both under-provisioning (for performance) and over-provisioning (for cost savings). With ECS you have two scaling phases: first you need to scale up your EC2 instances, and after that launch the tasks.

During the keynote Mr. Jeff Dowds (Information Technology Executive, Vanguard) told about their journey from the corporate data center to AWS. Vanguard is a registered investment advisor company located in the USA with over 5 trillion US dollars in assets. Mr. Dowds demonstrated the benefits of the public cloud with hard facts: 30% savings in compute costs, 30% savings in build costs, and finally a 20x deployment frequency via automation. Changing the mindset of their deployment philosophy is, I think, the most important change for Vanguard. As said in the slides, they now have the ability to innovate!

Building a pocket platform as a service with Amazon Lightsail – CMP348

Mr. Robert Zhu (Principal Technical Evangelist, AWS) held a chalk talk session about the AWS Lightsail service. He started by saying that this would be the most anti-re:Invent talk in terms of scaling, high availability and so on. The crowd laughed out loud.

In the chalk talk, the example app deployed was a progressive web app (PWA). PWAs try to look like a native app, e.g. on different phones. They typically use a web browser in the background, with UI code shared between operating systems.

The Lightsail service provides public static IP addresses and a simple DNS service that you can use to connect the static IP address to your user-friendly domain name. It supports wildcard records and a default record, which is nice. The price for outbound traffic is very affordable: the 10 USD plan includes 2 TB of outbound traffic.

We spent a lot of time on how to configure a server in the traditional way via an SSH prompt: installing Docker, acquiring a certificate from Let’s Encrypt, etc.

The Lightsail service has no connection to a VPC, no IAM roles, and so on. It is basically only a virtual server, so it is unsuitable for creating a modern public cloud enterprise experience.

Selecting the right instance for your HPC workloads – CMP409

Mr. Nathan Stornetta (Senior Product Manager, AWS) held this builder session. He is the product manager for AWS ParallelCluster. In on-premises solutions you almost always need to make choices about what to run and when to run it. With the public cloud’s elastic capacity, you don’t have to queue for resources, and you don’t pay for what you are not using.

The term HPC stands for high-performance computing, which basically means that your workload does not fit into one server and you need a cluster of servers with high-speed networking. Within the cluster, proximity between servers is essential.

In AWS there are more than 270 different instance types. Selecting the right instance type requires experience with both the workload and the offering. Here is a nice cheat sheet for instance types:

If your workload needs high disk throughput in and out of the server, the default AWS-recommended choice would be the Amazon FSx for Lustre cluster storage solution.

If you decide to use the Elastic File System (EFS) service, you should first think about how much performance you need rather than what size you need. The design of EFS promises 200 MBps of throughput per 1 TB of data. So, you should decide on the needed performance first, so that your application will have enough I/O bandwidth in use.
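Using the 200 MBps-per-TB design figure quoted above (treat it as this post's assumption rather than current AWS documentation), the sizing logic can be turned around into a small calculation:

```python
def efs_min_data_size_tb(required_mbps: float, mbps_per_tb: float = 200.0) -> float:
    """Minimum amount of data (TB) that must be stored on EFS to reach
    the required throughput, per the per-TB figure quoted in the text."""
    return required_mbps / mbps_per_tb

# An application needing 500 MBps would need about 2.5 TB stored:
# efs_min_data_size_tb(500) -> 2.5
```

In other words, a small but throughput-hungry dataset may force you to store more data (or provision throughput separately) just to get the needed bandwidth.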

The newest choice is the Elastic Fabric Adapter (EFA), which was announced a couple of months ago. More information about EFA can be found here:

If you don’t know which storage would work best for your workload, it is strongly recommended to test each one and make the decision after that.

Intelligently automating cloud operations – ENT305

This session was a workshop. In workshop sessions there are multiple tables with the same topic, whereas in a builder session there is one small table per topic. So, more than a hundred people were doing the same exercise.

First, Mr. Francesco Penta (Principal Cloud Support Engineer, AWS) and Mr. Tipu Qureshi (Principal Engineer, AWS) gave a short overview of the services used in this session. I want to mention a few of them. AWS Health keeps track of the health of different services in your account. For example, it can raise an alarm if your ACM certificate is not able to renew automatically (e.g. due to missing DNS records) or a VPN tunnel is down.

The other service was AWS Auto Scaling’s predictive scaling. It is important if you want to avoid major under-provisioning. When using just, e.g., a CPU metric from the last 5 minutes, you are already late. Also, if your horizontal scaling needs a while to bring new nodes into service, predictive scaling helps you achieve more stable performance.

The workshop can be found here:

I'm familiar with the tooling, so I could have yelled "Bingo!" as one of the first to finish. I was happy to finish early and go to the hotel for a short break before Solita's customer event and re:Play. re:Play starts at 8 pm at the Las Vegas Festival Grounds with music, food, drinks and more than 30,000 pairs of eyes.

Would you like to hear more about what happens at re:Invent 2019? Sign up to our team's WhatsApp group to chat with them, and register for the What happens in Vegas won't stay in Vegas webinar to hear a full sum-up after the event.


My Wednesday at AWS re:Invent 2019

It was an early morning today, because the alarm clock woke me up around 6 am. The day started with the Worldwide Public Sector Keynote at 7 am in the Venetian Palazzo O hall.

Worldwide Public Sector Breakfast Keynote – WPS01

This was my first time taking part in the Public Sector keynote. I'm not sure how worldwide it was: at least Singapore and Australia were mentioned, but I cannot remember anything specific being said about Europe.

Anyone following the international security industry even a little cannot have missed how many cities, communities and other organizations have faced ransomware attacks. Some victims paid the ransom in Bitcoin, some did not pay, and many victims just keep quiet. The public cloud is a great way to protect your infrastructure and important data. Here is a summary of how to protect yourself:

RMIT University from Australia has multiple education programs for AWS competencies, and it was announced that they are now an official AWS Cloud Innovation Centre (CIC). Typical students have some educational background (e.g. a bachelor's degree in IT) and want to make a move in the job market through re-education. Sounds like a great way!

The speaker, Mr. Martin Bean from RMIT, showed a picture from The Jetsons (1963, by Hanna-Barbera Productions) that already listed multiple things that were invented for mass markets much later. He also mentioned two things that caught my attention: more people own a cellphone than a toothbrush, and 50 percent of jobs are going to transform into something else within the next 20 years.

Visit to expo area

After the keynote I visited the expo in the Venetian Sands Expo area before heading to the Aria for the rest of Wednesday. The expo was huge, noisy and crowded; the more detailed experience from last year was enough for me. At the AWS Cloud Cafe I took a panorama picture (click to zoom in), and that was it, I was ready to leave.

I took the shuttle bus towards the Aria. I was very happy that the bus driver dropped us off next to the main door of the Aria hotel, which saves on average 20-30 minutes of queueing in Aria's parking garage. An important change! On the way I passed the Manhattan of New York-New York.

Get started with Amazon ElastiCache in 60 minutes – DAT407

Mr. Kevin McGehee (Principal Software Engineer, AWS) was the instructor for the ElastiCache Redis builder session. In the session we logged in to the AWS console, opened a Cloud9 development environment and then just followed the clearly written instructions.

The guide for the builder session can be found here:

This session was about importing data into Redis via Python, and indexing and refining the data during the import phase. In refinement the raw data becomes information with aggregated scoring, geolocation and so on, which is easier for the requestor to use. It was interesting and looked easy.
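The refinement step might look something like the sketch below. This is plain Python with made-up field names and weighting, not the session's actual code; the returned (score, member) pair is the shape a Redis sorted-set insert (ZADD) expects.

```python
def refine(record):
    """Turn a raw imported record into an enriched member plus an
    aggregated score, ready for a Redis sorted-set insert (ZADD).
    The fields and the weighting are illustrative only."""
    score = record["views"] + 2 * record["purchases"]  # hypothetical aggregate scoring
    member = f'{record["id"]}:{record["lat"]},{record["lon"]}'  # embed geolocation
    return score, member
```

With a real Redis client you would then call something like `r.zadd("ranking", {member: score})` so that requestors can query top-ranked items directly.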

Build an effective resource compliance program – MGT405

Mr. Faraz Kazmi (Software Development Engineer, AWS) held this builder session.

Conformance packs under the AWS Config service were published last week. They can be integrated at the AWS Organizations level of your account structure. With conformance packs you can group Config rules (roughly, governance rules for common settings) in a simple YAML template and get a consolidated view over those rules. A few AWS-managed packs are currently available, the "Operational Best Practices For PCI-DSS" pack being one example. It's clear that AWS will provide more of these rule sets in the upcoming months, and so will the community via GitHub.
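To give an idea of the YAML template format, here is a minimal conformance pack sketch with a single AWS-managed Config rule (the resource and rule names here are my own example, not from the session):

```yaml
# Minimal conformance pack template sketch
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
```

A real pack would bundle many such rules into one deployable, auditable unit.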

There are timeline and compliance views of all your resources, which makes this a very effective tool for getting a consolidated view of resource compliance.

You can find the material here:

By the way, if you cannot find conformance packs, you are possibly using the old Config service UI in the AWS console. Make sure to switch to the new UI; all new features are added only to the new UI.

The clean-up phase in the guide is not perfect. In addition to the guide's steps, you have to manually delete the SNS topic and the IAM roles that were created by the wizards. It was a disappointment that no sandbox account was provided.

Best practices for detecting and preventing data exposure – MGT408

Ms. Claudia Charro (Enterprise Solutions Architect, AWS) from Brasília was the teacher in this session. It was very similar to my previous session, which I was not aware of beforehand; in both sessions we used Config rules and blocked public S3 usage.

The material can be found here:

AWS Certification Appreciation Reception

The Wednesday evening started (as usual) with the reception for certified people at the Brooklyn Bowl. It is again a nice venue for some food, drinks, music and mingling with other people. I'm writing this around 8 pm, as I left a bit early to get a good night's sleep before Thursday, the last full day.


On the way back to my hotel (Paris Las Vegas) I found a version 3 Tesla Supercharger station, one of the first v3 stations in the world. It was not too crowded. The station was big compared with the Supercharger stations in Finland. V3 Supercharger stations can provide up to 250 kW of charging power for the Model 3 Long Range (LR), which has a 75 kWh battery. I would have liked to see the new (experimental) Cybertruck model.



Andy Jassy’s release roller coaster at AWS re:Invent 2019

We can forget the slow start of the week: Andy Jassy's keynote was loaded with a plethora of new stuff.

Tuesday morning was full of hope that something new and cool was coming out. Andy Jassy's keynote has always been packed with all the big new releases, and this year was no exception. With a great keynote behind us, we headed towards a day full of sessions and a customer event later in the evening.

Transformation is the key to success

Even though the cloud is present in many IT discussions, the fact is that only 3% of workloads are in the cloud. The vast majority of IT spend is still in traditional data centers, albeit services are moving to the cloud at an accelerating pace.

Andy Jassy's four key principles for transformation were:

  • Senior leadership team conviction and alignment
  • Top-down aggressive goals
  • Train your builders
  • Don’t let the paralysis stop you before you start.

Those all address the most crucial steps in a successful cloud journey. It takes an executive sponsor to effectively get everybody in an organization on board and make the transformation a reality. The goal of the transformation also needs to be ambitious and push the limits in a sensible way to make progress. And as the last point says, don't get paralyzed by the possible difficulties ahead. Start with the easy stuff and learn during the process. Hard things get considerably simpler when there is solid experience from easier things.

The same list applies when doing migrations from on-premises environments to the cloud. There are bound to be simpler and harder environments, but one should not let the hard ones paralyze progress. Start with the simpler environments and work your way up to the harder ones.

Data and ML

Each year, the emphasis on data and everything related to it seems to grow. Andy's keynote was almost evenly split between data-related releases and everything else. AWS' belief is that data lakes will play a central role in modern data platforms, with S3 as the technology underneath. The S3 Access Points feature was introduced to tackle one long-standing issue with S3 data lakes: granular access control to data.

S3 Access Points allow one to create completely separate entry points to an S3 bucket, each with its own URL and policy attached. This way, policies for buckets with large amounts of data and multiple use cases can be made a lot simpler. Combined with the previously released attribute-based access control, this makes an efficient pair for controlling S3 access.
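A sketch of what an access-point-scoped policy could look like, built as a plain policy document. All the identifiers (account, access point and role names, region) are made-up examples:

```python
def access_point_policy(account_id: str, access_point_name: str, role_name: str) -> dict:
    """Build a policy document granting a single IAM role read access
    through one S3 access point; every identifier here is illustrative."""
    ap_arn = f"arn:aws:s3:eu-west-1:{account_id}:accesspoint/{access_point_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/{role_name}"},
            "Action": ["s3:GetObject"],
            # Objects reached via an access point are addressed under /object/*
            "Resource": f"{ap_arn}/object/*",
        }],
    }
```

The point is that each use case gets its own small policy on its own access point, instead of one giant bucket policy covering every consumer.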

Data without any analysis is mostly useless, and AWS is trying to make utilizing data as effortless as possible by adding more capabilities to SageMaker, for example SageMaker Studio, Model Monitor and Debugger. The goal of SageMaker has always been to make developing machine learning models simpler. The features announced today help reach that goal, since they make running experiments and evaluating their results easier and let one focus more on providing business value. AWS also announced its first AutoML service, which enables developing machine learning models with much less effort than traditional ways.

New concepts for zones

AWS locations have previously been split into two concepts: regions, which define larger geographically separated areas, and the availability zones within them (at least three per region), which are made up of actual data centers. This has been, and is, a completely adequate setup for the majority of AWS customers. Then again, there are workloads that require single-digit millisecond latencies to users or other environments. The newest addition to locations, AWS Local Zones, tries to remediate that.

Local Zones are smaller and most likely not as highly available as normal availability zones, but they give the advantage of moving workloads closer to end users or other environments and of thinking about the architecture of those workloads in completely new ways. A Local Zone is connected to a parent region and can be seen as an extension of it. Local Zones are also fully maintained by AWS, offering the level of security that can be expected from normal availability zones.


Even though the keynote was a two-hour constant stream of new releases and information, there were still multiple services that were introduced only in AWS blog posts, which gives an idea of how many new services and features actually got announced during one day. In this blog post we only touched a small fraction of them. More can be read from the What's New with AWS blog or from Jeff Barr's blog post collecting most of them.


My Tuesday at AWS re:Invent 2019

Please also check my blog post from Monday.

Starting from Tuesday, each event venue provides a great breakfast with lightning-fast service. It scales and works. It's always amazing how each venue can provide food services for thousands of people in a short period of time. Most people are there for the first time, so the guidance has to be very clear and simple.

Today started with the keynote by Mr. Andy Jassy. I was not able to join the live session at the Venetian because of my next session. Moving from one location to another takes at least 15 minutes, and you have to be at your session at least 10 minutes early to claim your reserved seat. Starting last year, the booking system forces a one-hour gap between sessions in different venues.

Keynote by Andy Jassy on Tuesday

You can find the full recap written by my colleagues here: Andy Jassy’s release roller coaster at AWS re:Invent 2019

Machine learning was the thing today. The SageMaker service received tons of new features, which are explained by our ML specialists here: Coming soon!

So I joined the overflow room at the Mirage for Jassy's keynote session, where everyone had personal headphones. I have more than 15 years of background in software development, so it was love at first sight with CodeGuru. There is good news and bad news. It is a service for static analysis of your code, for automated code review and, last but definitely not least, for real-time profiling via an installed agent.

The profiling information is provided in 5-minute periods and covers several factors: CPU, memory and latency. It looks promising, because Mr. Jassy said that Amazon has used it internally for a couple of years already, so it is a mature product.

So, what was the bad news? It supports only Java. Nothing to add to that.

The other interesting announcement for me was the general availability of Outposts. Finally, also in Europe, you can have AWS fully managed servers inside your corporate data center. These servers integrate fully with the AWS console and can be used e.g. for running ECS container services. The starting price of 8,300 USD per month is very competitive, because it already includes roughly 200 cores, 800 GB of memory and 2.7 TB of instance storage. You can add EBS storage on top, starting from 2.7 TB.

You can find more information here:

Performing analytics at the edge – IOT405

This session was a level 400 (the highest) workshop. It was held by Mr. Sudhir Jena (Sr. IoT Consultant, AWS) and Mr. Rob Marano (Sr. Practice Manager, AWS).

Industry 4.0, a.k.a. IoT, was a totally new sector for me, and this was a very informative and pleasant session. It was all about the AWS IoT Greengrass service, a managed platform that provides low-latency responses and tons of features for handling data streams from IoT devices locally.

For many people it was their first touch with the AWS Cloud Development Kit, which I fell in love with about three months ago. It has multiple advantages, like refactoring support, strong typing and good IDE support. You can find more information about AWS CDK here:

In our workshop session we demonstrated receiving a time-series data stream of temperature, humidity and other readings from an IoT device; in our case the IoT device was an EC2 instance simulating one. From the AWS IoT Greengrass console you can e.g. deploy new versions of the analytics functions to the IoT devices.
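The simulated device side can be sketched in a few lines of plain Python. This is not the workshop's code; the field names and value ranges are made up for illustration, and a local Greengrass function could then aggregate a window of such readings at the edge:

```python
import random
import time

def telemetry_reading(device_id, rng=random.Random(42)):
    """Simulate one IoT telemetry message, the way the workshop's EC2
    instance stood in for a real sensor (fields are illustrative)."""
    return {
        "device": device_id,
        "ts": int(time.time()),
        "temperature": round(rng.uniform(18.0, 28.0), 2),
        "humidity": round(rng.uniform(30.0, 70.0), 1),
    }

def window_average(readings, field):
    """The kind of edge analytics a local Greengrass function might do."""
    return sum(r[field] for r in readings) / len(readings)
```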

The material for the workshop can be found on GitHub:

AWS Transit Gateway reference architectures for many VPCs – NET406

This was a breakout session held by Nick Matthews (Principal Solutions Architect, AWS) in the glamorous Ballroom F at the Mirage, which fits more than a thousand people. The room was almost full, so it was a very popular session.

To summarize the topic: there are several good ways to build interconnectivity between multiple VPCs and a corporate data center. At small scale you can do things more manually, but at large scale you need automation.

One solution presented for automation is based on the use of tags. An autonomous team (the owner of account A) tags their shared resources in a predefined way. The transit account reads those changes via CloudTrail logging: each modification creates a CloudTrail audit event, which triggers a Lambda function. The function checks whether a change is required and writes a change-request item to a metadata table in DynamoDB to wait for approval. The network operator is notified via SNS (Simple Notification Service) and can then allow (or decline) the modification. Another Lambda then makes the needed route table modifications for the transit account and for account A.
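The first Lambda's decision logic could look roughly like the sketch below. The tag convention, the simplified event shape and the DynamoDB item fields are all my own illustration of the pattern, not the session's actual code:

```python
SHARED_TAG = "share-with-transit"  # hypothetical tag convention

def change_request_from_event(event):
    """Inspect a (simplified) CloudTrail CreateTags event and, if a
    team tagged a resource for sharing, build a pending change-request
    item for the DynamoDB approval table. Returns None when no
    change is required."""
    detail = event["detail"]
    if detail.get("eventName") != "CreateTags":
        return None
    tags = {t["key"]: t["value"]
            for t in detail["requestParameters"]["tagSet"]["items"]}
    if SHARED_TAG not in tags:
        return None
    return {
        "pk": detail["requestParameters"]["resourcesSet"]["items"][0]["resourceId"],
        "requestedCidr": tags[SHARED_TAG],
        "status": "PENDING_APPROVAL",  # operator approves via the SNS notification
        "sourceAccount": event["account"],
    }
```

A second Lambda, fired on approval, would then apply the actual route table changes in both accounts.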

If you are interested, you can watch a video from August 2019:

If you want to wait, I'm pretty sure this re:Invent talk was also recorded and will be found on the AWS YouTube channel in a few weeks:

Fortifying web apps against bots and scrapers with AWS WAF – SEC357

Mr. Yuri Duchovny (Solutions Architect, AWS) held the session. It was the most intensive session, with lots to do and many architectural examples and usage scenarios on the demo screen. The AWS WAF service has got a shiny new UI in the AWS console. AWS has also published a few new features in the last few weeks, e.g. managed rules that give more protection in a nondisruptive way. WAF itself did not previously have many predefined protection rules; only XSS (cross-site scripting) and SQLi (SQL injection) were supported, and all other rules had to be configured manually, e.g. as regular expressions.
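To show what attaching a managed rule group looks like, here is a sketch of a single WAFv2 rule entry referencing the AWS-managed common rule set, built as a plain dict (a fragment of a web ACL definition, not a complete one; the rule and metric names are my own):

```python
def managed_rule_entry(priority=0):
    """One WAFv2 web ACL rule referencing an AWS-managed rule group,
    the nondisruptive way to get baseline protections without writing
    regular expressions yourself. A sketch, not a full WebACL."""
    return {
        "Name": "AWS-CommonRuleSet",
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesCommonRuleSet",
            }
        },
        "OverrideAction": {"None": {}},  # let the group's own actions apply
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "common-rule-set",
        },
    }
```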

WAF is a service that should always be turned on for CloudFront distributions, Application Load Balancers (ALB) and API Gateways.

The workshop material is again public and can be found here:

Encryption options for AWS Direct Connect – NET405

Mr. Sohaib Tahir (Sr. Solutions Architect, AWS) from Seattle was the teacher in this session. It was more listening than doing because of the short time available. We attendees were a group of seven from the USA, Japan and Finland.

There were five possibilities to encrypt a Direct Connect connection:

1. Private VIF (virtual interface) + application-layer TLS
2. Private VIF + virtual VPN appliances (can be in transit VPC)
3. Private VIF + detached VGW + AWS Site-to-site VPN (CloudHub functionality)
4. Public VIF + AWS Virtual Private Gateway (GP, IPSec tunnel, BGP)
5. Public VIF + AWS Transit Gateway (BGP, IPSec tunnel, BGP) NEW!

It's good to remember that a single VPN connection has a 1.25 Gbps limit, which can be hit easily with a DX connection and e.g. data-intensive migration jobs. The AWS recommendation is to use architecture number five if possible. The fifth architecture requires having a dedicated Direct Connect connection of your own, so you cannot use a shared-model Direct Connect from a 3rd-party operator.
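The 1.25 Gbps per-tunnel limit makes for a quick capacity calculation: to keep up with a fast DX link you would spread traffic over several IPsec tunnels (e.g. with ECMP over Transit Gateway, as in option 5 above). A minimal sketch:

```python
import math

def tunnels_needed(dx_gbps: float, per_tunnel_gbps: float = 1.25) -> int:
    """How many IPsec tunnels are needed so their combined throughput
    keeps up with a Direct Connect link of the given bandwidth,
    assuming traffic spreads evenly across tunnels (e.g. via ECMP)."""
    return math.ceil(dx_gbps / per_tunnel_gbps)
```

For example, a 10 Gbps DX connection would need at least 8 tunnels to match the link's full bandwidth.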

AWS published cross-region VPC connectivity via Transit Gateway yesterday. During the session Mr. Tahir started to demonstrate this new feature ad hoc, but we ran out of time.

Calm before the storm at AWS re:Invent 2019

The week started more quietly than I would have anticipated. Hopefully this gets fixed in tomorrow's keynote.

The re:Invent week is officially on; the first sessions are behind us, and so is the first customer event, the Nordic reception. Regarding releases and news the start has been pretty quiet, but we have still gotten some new services.

Nordic Customer Reception

The Nordic customer reception is an event focused on AWS customers from the Nordics only. This year it was held at the Barrymore bar, slightly outside the busiest part of the Strip. The event was again fully packed with people from all around the Nordics.

It was great to talk to everyone and have a nice time with old and possibly new customers. We had our whole team there representing Solita, with Omar, Anniina, Miia and Anahit running the booth in the picture.


Releases and announcements

The first day of the week brought two new services and two new security features for existing services.

Access analyzer for S3 and IAM

A new feature called Access Analyzer was released for S3 and IAM. It is a feature inside these services that tries to automatically analyze created policies and verify that they only have the intended effect.

Access Analyzer for S3 goes through bucket access policies and alerts on and creates reports of misconfigured policies. This makes reaction and remediation much faster than manually evaluating policies. S3 buckets already deny any public access by default, so I am not sure how useful this is for S3, but that remains to be seen when we get to test it out properly.

Access Analyzer for AWS IAM works in a very similar fashion but with resource policies for Amazon S3 buckets, AWS KMS keys, Amazon SQS queues, AWS IAM roles and AWS Lambda functions. This could become a very useful feature in the future.

With both of these features, the good thing is that they can be enabled without any additional cost, making it a no-brainer to turn them on and see how useful they are.

EC2 Image builder

EC2 Image Builder is a service that creates pipelines to automatically build and update golden images for EC2. There has already been a solution for this, the AWS Golden AMI Pipeline, which has been deployable through AWS Marketplace. Still, it has been more of a demonstration of how it can be done, requiring quite a lot of tinkering to configure. This could now be the actual, easy-to-use service that enables fast implementation of a golden AMI factory.

Amazon Braket

Amazon Braket is a new service for exploring and evaluating quantum computing. It aims to be an easy-to-use platform for scientists, researchers and developers to build, test and run quantum computing algorithms. With Amazon Braket, one can create algorithms from scratch or use pre-built ones, and then utilize a fully managed simulation environment to test them. When an algorithm is ready, it can be run on quantum computers with different hardware choices from providers like D-Wave, IonQ and Rigetti.

All in all, Monday was a good day with a lot of things happening. Next up is Andy Jassy's keynote, most likely the biggest event of the week, at least when it comes to new releases.
