AWS re:Inforce 2019 review

Looking at the new releases, the sessions I attended, a comparison to re:Invent, and the overall value of the first-ever re:Inforce.

This year I attended re:Inforce, the first incarnation of the AWS conference specializing in security. It was a two-day event at the Boston Convention & Exhibition Center.

New releases

Amazon VPC Traffic Mirroring was probably the biggest new release of the event, but it doesn't touch my projects much. However, if you have systems for analyzing network traffic, it could be useful.

AWS Security Hub and AWS Control Tower reached general availability. I haven't tested either of them much yet, and both were already announced at re:Invent.

Amazon EC2 Instance Connect was actually released right after re:Inforce, but it would have fit well into the conference itself. It is a new way to connect to instances in case you don't want to use Session Manager.

Attended sessions

Keynote by Stephen E. Schmidt, VP and CISO of AWS


The keynote speakers were great, and overall the whole keynote was good. I would have liked to see more new releases; as it was, the main content focused on the importance of security and on existing solutions.

GRC346 – DNS governance in multi-account and hybrid environments

Builder sessions were already available at last year's re:Invent, but I didn't get into any of them. DNS is not really my main focus area, but it is an interesting topic nonetheless. Still, I will probably leave this kind of setup to network specialists.

The setup was a bit of a letdown: the room was quite noisy with multiple builder sessions going on at the same time, and the participants didn't do much themselves. On the other hand, it was very easy to ask questions, and there was good discussion between the AWS architect and the participants.

SEJ001 – Security Jam

The hackathon/jam was again a highlight of the event. Sadly, I arrived just in time and hence didn't have much time to talk with my team beforehand. The duration was only 3.5 hours, which felt a little short.

We had a three-person team, but we didn't really achieve much synergy. At first, we decided everyone would begin as a solo player and ask for help when needed. During the whole jam I worked with one teammate for only about an hour, and the third worked solo the whole time. We did call out questions and answers to each other from time to time, but there was very little actual teamwork.

One lesson I relearned was to double-check everything. One task required a private endpoint for API Gateway. In security jams, some of the setup is already done for you, so when I checked the list of VPC endpoints and saw one, I assumed it was the correct one. But it was for AWS Systems Manager, and I would have needed to add a new one.
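That kind of double-checking is easy to script. The sketch below, using boto3 (the AWS SDK for Python), lists the VPC endpoints of one VPC and filters them by service-name suffix; the VPC id is a hypothetical placeholder.

```python
def endpoints_for_service(endpoints, service_suffix):
    """Return endpoints whose ServiceName ends with the given suffix,
    e.g. 'execute-api' for API Gateway or 'ssm' for Systems Manager."""
    return [ep for ep in endpoints
            if ep["ServiceName"].endswith(service_suffix)]

def list_vpc_endpoints(vpc_id):
    """Fetch all VPC endpoints of one VPC via the EC2 API."""
    import boto3  # requires configured AWS credentials
    ec2 = boto3.client("ec2")
    resp = ec2.describe_vpc_endpoints(
        Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
    )
    return resp["VpcEndpoints"]

# Usage sketch (hypothetical VPC id):
# eps = list_vpc_endpoints("vpc-12345678")
# if not endpoints_for_service(eps, "execute-api"):
#     print("No API Gateway endpoint yet - it needs to be created.")
```

Had I run something like this instead of eyeballing the console list, the Systems Manager endpoint would not have fooled me.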

AWS has been improving the platform so that companies can request jams led either by an AWS architect or by the company's own personnel. I'm going to look into this and maybe hold an internal jam. However, the cost was unclear, and the number of interested colleagues was low the last time I tried to hold a GameDay.

Other sessions:

I also attended one lecture-type session, one workshop and a couple of chalk talks. To keep the length of this post manageable, I will skip them here, but feel free to ask about them.

Other activities

Security Hub (Expo)

Expo floor

Many of the partner solutions were, of course, about web firewalls and the like, which aren't the main interest of a data developer/architect. But there were also companies offering data masking, encryption and audit trail solutions. I have received many follow-up emails and phone calls after the event. Luckily, many of them are interesting, even if I might not have a use case right away.


There were multiple unofficial after-parties on Tuesday evening, but I only attended the Nordic gathering sponsored by AWS. It was quite a small gathering, but that made discussion easier. For most of the evening there were two people from Sweden and one from Finland at my table, and a couple of others visited.


The closing reception was really informal, with food, drinks and games inside the conference center and outside on the lawn. Very nice, but not the best setup for me to network. I did exchange a couple of words with the people I had met at the Nordic gathering.

Comparison to re:Invent

From my point of view, one major reason for attending re:Invent is to hear about, and question AWS on, brand-new systems that only selected companies have been able to test under strict NDAs. Even if they aren't immediately available in Europe, you know the basic capabilities of the system and can plan for the future. And the technical people giving workshops and sessions usually offer much more honest opinions than the marketing material released about a service. This was mostly missing from re:Inforce, because Amazon VPC Traffic Mirroring was the only completely new service.

A good thing was that having everything in one place made logistics much easier, and there wasn't as much moving around. The expo was also much smaller, and interesting companies were easier to find.

Re:Invent has four main days and two shorter ones. Compared to that, the two days of re:Inforce is quite a short time. You don't build familiarity with the location, which would make moving around faster, and you don't have time to reorganize your calendar if you would like to learn more about a certain topic. Also, from a traveling perspective, the travel-to-conference ratio is much worse with re:Inforce.


My first feeling after the conference was that it was OK, but it has risen to good after thinking about it more objectively. The first impression came mostly from automatically comparing re:Inforce to re:Invent, and in that comparison re:Inforce is lacking in multiple areas. Looked at on its own terms, though, there was quite a lot to learn and many new AWS users to meet. And for some, the shorter length and cheaper tickets might make it possible to attend where re:Invent isn't an option.

If I attend again, I should keep more free time in my calendar and participate in the background events like the ongoing security jam and capture the flag. Also, more planning beforehand: with the conference being only two days, there really isn't much time to reorganize days during the event.

The next re:Inforce will be in Houston, but the feedback form had a question about holding re:Inforce outside the USA. So there might be hope for one in Europe at some point in the future.


Got a laptop case with badges from AWS booths.

Kicking the tires of AWS Textract

Amazon Web Services' new ML/AI service Amazon Textract reached general availability, and I gave it a quick test.

AWS has multiple services in the AI/ML field. These include, for example, Amazon Comprehend for text analysis, Amazon Forecast for predicting the future from a set of data, and Amazon Rekognition for extracting information from pictures. Amazon Textract is a new service in this field, and it has just been announced as generally available. Textract performs Optical Character Recognition (OCR) on multiple file formats and stores the output in a more usable JSON format.

At the moment of release, Textract can detect Latin-script characters from the standard English alphabet and ASCII symbols. It accepts PNG, JPEG and PDF as input files. I would say there are enough input formats, but I would have wanted to see more languages available. Of course, Finnish is not something I expect to see anytime soon, or at all. Textract is now available in three regions in the US and in Ireland in Europe.

Analysis test

Textract makes it easy to test what kind of results you can get with it. When you open the Textract service, you first see a sample document created by AWS. This helps you get started and gives some idea of how to use the service. Documents can be uploaded directly from the console, and it automatically creates an S3 bucket to store them.

Textract sample document


I did tests with multiple files and file formats to see how it performs, but I used one PDF document as the example for this post. The PDF I used was the AWS Landing Zone immersion day information sheet, because it was handily available and had text, a table and an image in it. On the left in the picture, we can see the areas where Textract has identified content, and on the right is the extraction. From this kind of clear and simple document, it seems to have picked up everything easily. It took around 10 seconds for this document to be analysed.

Test document


I would say that Textract handled all the files I gave it without much of a problem. The view of the file and the places where it finds text do not always align, even though the text output is correct. This happened, for example, with my CV, where the visual representation was off in many places.

Visual analysis sample


Outputs can also be downloaded directly from the console as a zip file, which contains these four files.

  • apiResponse.json
  • tables.csv
  • keyValues.csv
  • rawText.txt

Tables.csv, keyValues.csv and rawText.txt are all quite clear. Tables.csv holds all the tables and fields Textract found in the document, and keyValues.csv holds form data. Below is the table that was found in the document. It has been read correctly and put into a table. Interestingly, Textract has also added empty columns for the long empty spaces between texts.

Test document table


RawText.txt contains the extracted text of the document in a raw format: all the text, unedited, with all the words one after another.

H Automated Landing Zone Immersion Day Please join the AWS Nordics Partner team for an immersion day for the Automated Landing Zone. Learn how to set up an account structure according to best practices with the help of the ALZ solution. After you have performed this training, you will get access to the ALZ solution tools and materials sO you can use when setting up customer environments. This training will also be helpful for those of you interested in the AWS Control Tower service that will be available later this year. WHEN: April 1st 2019 (no joke) WHERE: AWS Office at Kungsgatan 49 in Stockholm Preliminary agenda 10:00 10:30 Welcome and Registration 10:30 10:40………

Textract also gives a full output of the process. This information is in JSON format and contains everything about the findings: detailed information on what was found and where, along with a confidence percentage for each finding. This is a very large JSON document even for a small PDF, almost as big as the original PDF file.

    {
      "BlockType": "WORD",
      "Confidence": 99.962646484375,
      "Text": "account",
      "Geometry": {
        "BoundingBox": {
          "Width": 0.0724315419793129,
          "Height": 0.012798813171684742,
          "Left": 0.448628693819046,
          "Top": 0.37925970554351807
        },
        "Polygon": [
          { "X": 0.448628693819046,  "Y": 0.37925970554351807 },
          { "X": 0.5210602283477783, "Y": 0.37925970554351807 },
          { "X": 0.5210602283477783, "Y": 0.39205852150917053 },
          { "X": 0.448628693819046,  "Y": 0.39205852150917053 }
        ]
      },
      "Id": "f1c9bdeb-f76a-44ff-8037-6cb746d5613d",
      "Page": 1
    }
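Because every block carries its type and confidence, the full JSON is easy to post-process. A minimal Python sketch, assuming the response has the usual top-level "Blocks" list and the console export's apiResponse.json file name:

```python
import json

def words_by_confidence(blocks, min_confidence=99.0):
    """Return (text, confidence) pairs for WORD blocks above a threshold."""
    return [(b["Text"], b["Confidence"])
            for b in blocks
            if b.get("BlockType") == "WORD"
            and b.get("Confidence", 0.0) >= min_confidence]

# Usage against the downloaded zip contents (hypothetical path):
# with open("apiResponse.json") as f:
#     response = json.load(f)
# print(words_by_confidence(response["Blocks"]))
```

The same filtering idea works for LINE, TABLE and KEY_VALUE_SET block types if you need more than plain words.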



Textract is a needed addition to the AWS AI/ML service family and fills a gap in the analysis tools. Textract promises to read English from multiple file formats, and it seems to do that well; all my tests with PDFs and pictures were successful. Of course, one wouldn't normally use the service like this, uploading single files manually. Textract has support in the AWS CLI and in both the Java and Python SDKs. That makes it possible, for example, to add an automatic trigger to an S3 bucket so that newly uploaded files launch Textract to do its thing. Overall, a nice service that will probably be very useful for text analysis use cases.
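The S3-trigger idea can be sketched as a small Lambda function. This is only an illustration: the bucket notification wiring and IAM permissions are assumed to be configured separately, and I use the asynchronous StartDocumentTextDetection API since the synchronous one does not take PDFs.

```python
def object_from_s3_event(event):
    """Extract (bucket, key) from a standard S3 put-event record."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def handler(event, context):
    """Lambda entry point: start a Textract job for the uploaded object."""
    import boto3  # AWS SDK for Python, available in the Lambda runtime
    bucket, key = object_from_s3_event(event)
    textract = boto3.client("textract")
    # Asynchronous API, needed for PDFs; pure images could instead use
    # detect_document_text for a synchronous result.
    job = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return {"JobId": job["JobId"], "Document": key}
```

The returned JobId is later passed to GetDocumentTextDetection to fetch the finished result, typically from a second Lambda subscribed to Textract's SNS completion notification.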

Download a free Cloud Buyer's Guide

No public cloud? Then kiss AI goodbye

What's the crucial enabling factor that's often missing from the debate about the myriad uses of AI? The fact that there is no AI without a proper backend for data (cloud data warehouses/data lakes) or without pre-built components, such as Cloud Machine Learning (ML) in Google Cloud Platform (GCP) and SageMaker in Amazon Web Services (AWS). In this blog post I will explain why public cloud offers the optimum solution for machine learning (ML) and AI environments.

Why is public cloud essential to AI/ML projects?

  • AWS, Microsoft Azure and GCP offer plenty of pre-built machine learning components. This helps projects build AI/ML solutions without requiring a deep understanding of ML theory or PhD-level data scientists.
  • Public cloud is built for workloads that need peak CPU/IO performance. It lets you pay for an unlimited amount of computing power on a per-minute basis instead of investing millions in your own data centres.
  • Rapid innovation/prototyping is possible using public cloud – you can test and deploy early and scale up in the production if needed.

Public cloud: the superpower of AI

Across many types of projects, AI capabilities are being democratised. Public cloud vendors deliver products, like SageMaker or Cloud ML, that allow you to build AI capabilities into your products without a deep theoretical understanding. This means that before long, a shortage of AI/ML scientists won't be your biggest challenge. Projects can use existing AI tools to build world-class solutions for customer support, fraud detection and business intelligence.

My recommendation is to head towards data enablement. First invest in data pipelines, data quality, integrations, and cloud-based data warehouses/data lakes. Rather than relying on over-qualified AI/ML scientists, build up the essential twin pillars: cloud ops and a skilled team of data engineers.

Enablement – not enforcement

In my experience, many organisations have struggled to transition to public cloud due to data confidentiality and classification issues. Business units have been driving the adoption of modern AI-based technology, while IT organisations have been pushing back due to security concerns. After plenty of heated debate, we have been able to find a way forward. The benefits of using public cloud components in advanced data processing have been so huge that IT has to find ways to enable the use of public cloud.

The solution to this challenge has proven to be proper data classification and the use of private on-premises facilities to support operations in the public cloud. Data location should be defined based on the data classification. Solita has been building secure but flexible automated cloud governance controls. These enable business requests while keeping control in your hands and meeting the requirements usually defined by a company's chief information security officer (CISO). Modern cloud governance is built on automation and enablement rather than on enforcing policies.


  • The pathway to effective AI adoption usually begins by kickstarting or boosting the public cloud journey and competence within the company.
  • Our recommendation – the public cloud journey should start with proper analyses and planning.
  • Solita is able to help with data confidentiality issues: classification, hybrid/private cloud usage and transformation.
  • Build cloud governance based on enablement and automation rather than enforcement.


AWS Summit Berlin 2019

My thoughts on the Berlin AWS Summit 2019

What is an AWS Summit?

AWS Summits are small, free events that take place in various cities around the world. They are "satellite" events of re:Invent, which takes place in Las Vegas every November. If you cannot attend re:Invent, you should definitely try to attend an AWS Summit.

Berlin AWS Summit

I have had the pleasure of attending the Berlin AWS Summit for 4 years in a row.

Werner Vogels

The event was a two-day event held on 26-27 February 2019 in Berlin. The first day was focused more on management and new cloud users, and the second day had more deep-dive technical sessions. The event started with a keynote by Werner Vogels, CTO of Amazon. This year the Berlin AWS Summit seemed to be very focused on topics around machine learning and AI. I also think there were more attendees this year than in 2018 or 2017.

You will always find sessions that are interesting to you, even if ML and AI are currently not on your radar. For example, I attended the session "Observability for Modern Applications", which showed how to use AWS X-Ray and App Mesh to monitor and control large-scale microservices running in AWS EKS or similar. App Mesh is currently in public preview, and it looks very interesting!

The partners

Every year there are a lot of stands by various partners showcasing their products to passers-by. You can also participate in raffles at the cost of your email address (and the obvious marketing emails that will ensue). Most of the partners also hand out free swag: stickers, pens and the like.


Solita Oy is an AWS Partner, please check our qualifications on the AWS Partners page.

Differences to previous years

This year there was no AWS Certified lounge, which was a surprise to me. It is a restricted area where people with an active AWS Certification can network with other certified people. I hope it will return next year.


Thank you for the event!

Thank you and goodbye

Modern cloud operation: successful cloud transformation, part 2

How do you ensure a successful cloud transformation? In the first part of this two-part blog series, I explained why and how cloud transformation often fails despite high expectations. In this second part, I will explain how to succeed in cloud transformation, i.e. how to move services to the cloud in the right way.

Below are three important tips that will help you reach a good outcome.

1. Start by defining a cloud strategy and a cloud governance model

We often discuss with our customers how to manage, monitor and operate the cloud, and what should be considered when working with third-party developers. Many customers also want to know what kinds of guidelines and operating models should be put in place to keep everything under control.

You don’t need a big team to brainstorm and create loads of new processes to define a cloud strategy and update governance models.

To succeed in updating your cloud strategy and governance model, you have to look closely at how things work and realise that you are moving services to a new environment that functions differently from traditional data centers.

So it's important to understand that, for example, software projects can be developed in a completely new way in the cloud, with multiple suppliers. However, keep in mind that this way of operating requires a governance model and instructions: what minimum requirements must new services linked to the company's systems meet, and how should their maintenance and continuity be taken care of? For instance, you have to decide how to ensure that cloud accounts, data security and access management are handled properly.

2. Insist on having modern cloud operation – choose a suitable partner or get the needed knowhow yourself

Successful cloud transformation requires the right kind of expertise. However, traditional service providers rarely have the required skills. New kinds of cloud operators have emerged to solve this issue; their mission is to help customers manage cloud transformation. How can you identify such operators, and what should you demand from them?

The following list is formed on the basis of views presented by Gartner, Forrester and AWS on modern operators. When you are looking for a partner…

  • demand a strong DevOps culture. It forms a good foundation for the automation and development of services.
  • ensure cloud-native expertise on both platforms and applications. This creates certainty that the project is led by an expert who knows the whole package and understands how applications and platforms work together.
  • check that your partner has skills in multiple platforms. AWS, Azure and Google are all good alternatives.
  • ask if your partner masters automated operation and predictive analytics. These skills reduce variable costs and contribute to quick recovery from incidents.
  • demand agile operating methods, as well as transparency and continuous development of services. With clear and efficient service processes, cost management and reporting are easier, and the customer understands the benefits of development.

Solita's answer to this is a modern cloud operation partnership. In other words, we help our customers create operating models and cloud strategies. A modern cloud operator understands the whole package that has to be managed and helps to formulate proper operating models and guidelines for cloud development. It's not our purpose to limit development speed or opportunities; rather, we pay attention to the things that ensure continuity and easy maintenance. After all, the development phase is only a fraction of the whole application life cycle.

The developer's needs are taken into account while, at the same time, operating models like the following are determined: How are cloud accounts created, and who creates them? How are costs monitored? What kinds of user rights are given, and to whom? What development tools are used, and what targets should be achieved with them? We are responsible for deciding what is monitored and how.

In addition, the right kind of partner knows what things should be moved to the cloud in the first place.

When moving to the cloud, the word "move" doesn't actually fit very well, because it is rarely advisable simply to move workloads as they are. That is why it's better to talk about transformation: transforming an existing workload, at least with some modifications, towards cloud native.

In my opinion, application development is one important skill a modern cloud operator should master. Today, the cloud can be seen as a platform on which different kinds of systems and applications are built. It takes more than the ability to manage servers to succeed in this game. DevOps culture determines how application development and operations work together: you have to understand how environments are automated and monitored.

In addition to monitoring whether applications are running, experts can control other things too. They can analyse how an application is working and whether it is performing effectively. A strong symbiosis between developers and operators helps to continuously develop and improve the skills needed to improve service quality. At best, this kind of operator can promise their customers that services are available and running all the time, and that if they are not, they will be fixed, at a fixed monthly charge. The model aims to minimise manual operation and work that is invoiced separately by the hour. For instance, the model has allowed us to reduce our customers' billable hours by up to 75%.

Add to that knowledge of the benefits and best features of different cloud services, as well as of capacity use and invoicing, and you get a package that serves customers' needs optimally.

3. Don't try to save on migration! Make the implementation project gradual


Lift & shift transfers, i.e. moving old environments as they are, rarely generate savings. I'm not saying that it couldn't happen, but the best benefits are achieved by looking at operating models and the environment as a whole. This requires a comprehensive study of what should work in the cloud and how the application is integrated with other systems.

The whole environment and its dependencies should be analysed, and all services should be checked one by one. After that, you plan the migration, and it is time to think about what can be automated. This requires time and money.

A good target is a migration that leads to an environment automated as far as possible. It should also lower the recurrent costs of operation and improve the quality of the service.

Solita offers all services that are needed in cloud transformation. If you are interested in the subject, read more about our services on our website. If you have any questions, please feel free to contact us!


Modern cloud operation: successful cloud transformation, part 1

Today, many people are wondering how they could implement cloud transformation successfully. In this first part of a two-part blog series, I explain why and how cloud transformation often fails despite high expectations. In the second part, I will describe how cloud transformation is done and what the correct way of migrating services to the cloud is.

Some time ago, at a Solita HUB event, I talked about modern cloud operation and successful cloud transformation. Experiences our customers had shared with us served as the starting point for my presentation, and I want to share some of them with you as well.

People have often started to use the cloud with high expectations, but those expectations have not really been met. Or they have ended up in a situation where nobody has a good picture of what has been moved to the cloud or what has been built there. They've ended up in a cloud service mess.


In recent years, people have talked a lot about the cloud and how to start using it. Should they move their systems there by lift-and-shifting their existing resources as they are, or should they build new cloud-native applications and systems? Or both?

They might have decided to carry out the cloud transformation with the help of their own IT department, an existing service provider or – a bit secretly – a software development partner. Whatever the choice, it feels like people are out to make quick profits without stopping to think about the big picture and how to govern all of this.

The cloud is not a data centre

Quite often I hear people say that "the cloud is only somebody else's data center". That is exactly what it is if you don't know how to use it properly. When you consider how the systems of a traditional service provider or an in-house IT department have been built, it's no wonder you hear statements like this.


Before, the aim was to offer servers from a data center, with maintenance and monitoring for the operating systems. The idea was that you first specified what kind of capacity you wanted and how environments should be monitored, and then agreed on how to react to possible alerts.

The architecture was designed to be as cost-efficient as possible. In this model, efficiency relied on virtualisation and, for instance, on the decision whether to build HA systems or not. Solutions spanning two data centers in particular have traditionally been expensive.

When people have moved this old operating model to the cloud, it hasn't functioned as they had planned and hoped. It can therefore be said that the true benefits of the cloud will not be gained in the traditional way.

Cloud transformation is not only about moving away from own or co-location data centers. It’s about a comprehensive change towards new operating methods.

It is very wise to build the above-mentioned HA systems in the cloud, because they won't necessarily cost much, or they may even be built-in features. The cloud is not a data centre, and it shouldn't be treated as one.

Of course, it's possible to achieve savings with traditional workloads, but it is more important to understand that operating methods have to change. Old methods are not enough, and traditional service partners often lack the skills to develop environments using modern means.

Lack of management causes trouble in cloud services

In some cases, services are built in the cloud together with a software development partner who has promised to deliver a well-functioning system quickly. And at its best, that is what the cloud enables. But without management or a proper governance model, problems often occur. The number of different cloud service accounts may grow, and nobody in the organisation seems to know how to manage the accounts or where the costs come from.

In addition, surprisingly often people believe that cloud services do not require maintenance and that any developer is able to build a sustainable, secure and cost-effective environment. They are surprised to find that it's not that simple.

"No-Ops" – and maybe the word "serverless" belongs in the same category – is a term that has unfortunately been somewhat misunderstood. Only a few software development partners have corrected this misunderstanding; some haven't realised themselves that cloud services do require maintenance in reality.

It's true that services that function relatively well without special maintenance can be built in the cloud. But in reality, No-Ops doesn't exist without seamless cooperation between developers and operations experts, in other words a DevOps culture. No-Ops means extreme automation, which doesn't happen on its own. It isn't always possible, and it is not always worth pursuing.

At Solita, operations have been taken to an entirely new level. Our objective is to make ourselves "useless" as far as daily routines are concerned. We call this modern cloud operation. With this approach we have, for instance, managed to reduce our customers' hourly billing considerably. We have also managed to extend our operating methods from customers' data centers all the way to the cloud.

In my next blog, I will focus on things that should be considered in cloud transformation and explain what modern cloud operation means in practice.

Anton works as a cloud business manager at Solita. Producing IT cost-efficiently, from desktops to data centers, is close to his heart. When he is not working on clouds, he enjoys skiing, running, cycling and playing football. He is excited about all types of sports gadgets and likes to measure and track everything.
