Feeling small and cozy – while being big and capable

About two months ago I started my new cloud journey in a company that has grown fast, and keeps growing. I initially had an image of a small, nimble and modern company in my head, so it was a surprise to realize that Solita has 1500+ employees nowadays. But has it become a massive ocean-going ship, too big to steer swiftly? A corporation slowly suffocating all creativity and culture?

Fortunately not. As our CEO Ossi Lindroos said (quoting, or adapting, a successful US growth company) on our starter bootcamp day:

“One day we will be a serious and proper company – but that day is not today!”

Of course, Ossi is not saying that we should not take business seriously and responsibly. Nor that we should not act like a proper company when it comes to our capabilities towards customers, or our ability to take care of our own people. We do act responsibly and take our customers and our people seriously. Instead the idea, as I interpret it, is that we keep that caring small-community feel even while growing fast, and we want to preserve it no matter how big we grow.

Can a company with good vibrations preserve the essentials when it grows?

Preserving good vibrations

Based on my first weeks, I feel that Solita has been able to maintain a low hierarchy and an open culture with brave, direct communication, not to forget autonomous, self-driven teams, people and communities. It is like many smaller companies inside a big one, sharing an identity without siloing too much. Diversity with unity.

I started in the Solita Cloud business unit, and my first impressions are really positive. Work naturally crosses team and unit boundaries. Teams are not built around a single domain expertise; instead, each could act as a self-sufficient cell if required. Everyone is really helpful and welcoming. Company daily life and culture run on Slack, where you can easily find help and support without even knowing the people yet. And you get to know people on some level even without meeting them: one colleague posts daily haikus, another listens to metal music, and so on.


Petrus Enjoying Good Vibrations

Some extra muscle

And size is not all about downsides. Having some extra muscle means new people get a proper, well-thought-out induction and onboarding that starts even before the first day with prepackaged online self-learning, and continues with intensive bootcamp days and self-paced but comprehensive to-do lists that give the feeling someone has put real effort into planning it all. Working tools are cutting-edge, whether you are choosing your devices and accessories or using your cloud observability system. And there is room for the little bonus things as well, such as company laptop stickers, caps, backpacks and different kinds of funny t-shirts. Not to mention all the health, commuting and childcare benefits.

And for customers, having some extra muscle means being a one-stop shop and future-proof at the same time. Whether the needs are about leveraging data, designing and developing something new, or the cloud that enables all this, customers can trust us. Now and tomorrow. Keeping that small-community feeling and good vibrations ensures that we will have brilliant, motivated and healthy people helping our customers in the future as well.

Culture enables personal growth

And when the culture is supportive and enabling, one can grow fast. A while ago I was a rapid-fire PowerPoint guy waving my hands like windmills, and now I'm doing (apologies for the jargon) customer deployments into the cloud using Infrastructure as Code, version control and CI/CD pipelines, knowing that I have all the support I need, whether from the low-threshold, friendly chat community of a nimble company or the highly productized runbooks and knowledge bases of a serious and proper company. Nice.

Now, it’s time to enjoy some summer vacation with the kids. Have a great summertime you all, whether feeling small and cozy or big and capable!

Is cloud always the answer?

Now and then it may seem obvious that an application should be moved to the cloud quickly. For those situations this blog post won't offer much help. But for occasions when the decision has not yet been made and a bit more analysis is needed to justify the transformation, this post proposes a tool. We believe it is often wise to think through the various aspects of cloud adoption before actually performing it.

For every application there comes a moment in its lifecycle when the question arises: should the application be modernised, or just updated slightly? The question is straightforward. The answer might not be, as there are both business and technological aspects to consider. Arriving at a rational answer is not an easy task. A cloud transformation should always have a business need and be technologically feasible. There is often a temptation to decide hastily and just move forward, because gathering a holistic view of the application is difficult. But neglecting rational analysis because it is difficult is rarely the right path. Success on the cloud journey requires guidance from business needs as well as technical knowledge.

To address this, companies can formalise a cloud strategy. Many find it an excellent way forward: during the cloud strategy work, a holistic understanding is gathered and guidance for the next steps is identified. A cloud strategy also articulates why the cloud transition supports value generation and how it connects to the organisation's strategy. Sometimes, however, cloud strategy work may be considered too large and premature an activity, particularly when the cloud journey has not really started, the knowledge gap seems too vast to overcome, and structured utilisation of the cloud is hard to envision. Organisations may struggle to manoeuvre through the mist and find the right path on their cloud journey. There are expectations and there are risks. There are low-hanging fruit, and there may be something scary ahead that does not even have a name yet.

Canvas to help the cloud journey

Benefits and risks should be considered analytically before transferring an application to the cloud. Inspired by the Business Model Canvas, we came up with a canvas to address the various aspects of the cloud transformation discussion. The Application Evaluation Canvas (AEC), presented in figure 1, guides the evaluation to consider aspects ranging from the current situation to expectations of the cloud.

 


Figure 1. Application Evaluation Canvas

The main expected benefit is the starting point for any further considerations. There should be a clear business need and a concrete target that justifies the cloud journey for that application. That target also makes it possible to define the specific risks that might hinder reaching the benefits. Migration to the cloud and modernisation should always have a positive impact on the value proposition.

The left-hand side of the canvas

The current state of the application is addressed on the left-hand side of the Application Evaluation Canvas, evaluated from four perspectives: key partners, key resources, key activities, and costs. The Key Partners section seeks answers to questions such as who currently works with the application; migration and modernisation activities will inevitably affect those stakeholders. In addition to the key partners, some resources may be crucial for the current application, for example in-house competences tied to rare technical expertise; these crucial resources should be identified. Furthermore, many activities are carried out every day to keep the application up and running, and understanding them makes the evaluation more meaningful and precise. Once key partners, resources and activities have been identified, a good understanding of the current state is established, but that is not enough: the cost structure must also be well known. Without knowledge of the costs related to the current state of the application, the whole evaluation is not on solid ground. Costs should be identified holistically, ideally covering not only direct costs but also indirect ones.
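The left-hand-side questions can be captured as a simple checklist structure. Below is a minimal sketch; the field names and figures are illustrative, not part of the canvas itself:

```python
from dataclasses import dataclass, field

@dataclass
class CurrentState:
    """Left-hand side of the Application Evaluation Canvas (illustrative)."""
    key_partners: list[str] = field(default_factory=list)    # who works with the app today
    key_resources: list[str] = field(default_factory=list)   # e.g. rare in-house competences
    key_activities: list[str] = field(default_factory=list)  # daily work keeping the app running
    direct_costs: dict[str, float] = field(default_factory=dict)
    indirect_costs: dict[str, float] = field(default_factory=dict)

    def total_cost(self) -> float:
        """Holistic cost view: direct plus indirect costs."""
        return sum(self.direct_costs.values()) + sum(self.indirect_costs.values())

# Hypothetical application under evaluation.
state = CurrentState(
    key_partners=["hosting vendor", "integration team"],
    key_resources=["legacy DB specialist"],
    direct_costs={"hardware": 40_000, "licences": 15_000},
    indirect_costs={"maintenance hours": 25_000},
)
```

Filling in such a structure forces the same discussion the canvas does: every empty list or missing cost item is a gap in the current-state understanding.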

…and the right-hand side

On the right-hand side the focus is on the cloud and the expected outcome. The main questions concern the selection of the hyperscaler, the expected usage, awareness of the holistic change that cloud transformation brings, and naturally the exit plan.

The selection of the hyperscaler may be trivial when the organisation's cloud governance steers the decision towards a pre-selected cloud provider. But a lack of central guidance, autonomous teams, or application-specific requirements may bring the hyperscaler selection to the table. In any case, a clear decision should be made when evaluating paths towards the main benefit.

The cloud transformation will shift the cost structure from CAPEX to OPEX, so a realistic forecast of usage is highly important. Even though costs follow usage, the overall cost level will not necessarily decrease dramatically right away, at least at the beginning of the migration. There will be a period when the current cost structure and the cloud cost structure overlap: CAPEX costs do not disappear immediately, while OPEX-based costs start accruing. Furthermore, the elasticity of OPEX may not be as smooth as predicted due to contractual issues; for example, annual pricing plans for SaaS may be difficult to change during the contract period.
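The overlapping period can be made concrete with a rough, hypothetical calculation: remaining CAPEX (e.g. depreciation that does not stop on migration day) and the new usage-based OPEX are paid side by side for a while. All figures below are invented for illustration:

```python
def monthly_cost(month, capex_monthly=10_000, capex_runout=6,
                 opex_per_1k_units=50, usage_k_units=120):
    """Total monthly cost during a migration (all figures hypothetical).

    Remaining CAPEX keeps running for `capex_runout` months after
    cutover, while usage-based OPEX accrues from month 1.
    """
    capex = capex_monthly if month <= capex_runout else 0
    opex = opex_per_1k_units * usage_k_units
    return capex + opex

# During the overlap, the bill is higher than either structure alone.
overlap_cost = monthly_cost(3)   # CAPEX still running: 10 000 + 6 000
steady_cost = monthly_cost(12)   # CAPEX gone: OPEX only, 6 000
```

Plotting `monthly_cost` over the first year makes the temporary cost bump visible, which helps set realistic expectations for the first budget cycle after migration.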

The cost structure is not the only thing that changes with cloud adoption. Realising the expected benefit depends on several factors: success in organisational change management, finding the required new competences, or the application needing more than a lift-and-shift type of migration before the main expected benefit can be reached.

Don’t forget exit costs

The final section of the canvas addresses exit costs. Before any migration, the exit costs should be discussed to avoid surprises if the change has to be rolled back. Exit costs often relate to vendor lock-in. Vendor lock-in is a vague topic, but it is crucial to understand that there is always some lock-in. A multicloud approach does not remove it: instead of vendor lock-in, there is multicloud-vendor lock-in. Likewise, the orchestration of microservices is vendor-specific even if a microservice itself is transferable, and using some kind of cloud-agnostic abstraction layer creates lock-in to the abstraction layer provider. Cloud vendor lock-in is not the only kind that has a cost. Relying on a rare technology inevitably ties the solution to that third party, and changing the technology may be very expensive or even impossible. Lock-in can also have an in-house flavour, especially when only a couple of employees master a competence. So the main question is not how to avoid all lock-ins, which is impossible, but how to identify the lock-ins and decide which types of lock-in are acceptable.

Conclusion

As a whole, the Application Evaluation Canvas helps to build a holistic understanding of the current state. Turning expectations into a more concrete form supports the decision-making process on how cloud adoption can be justified with business reasons.

Cloud-technology is a tool

We engineers are eager to get our hands dirty and dive into difficult architecture and implementation tasks. We love to talk about technology and how to implement wonderful constructs. Good architecture and implementation are crucial, but we should not forget to put the customer in the spotlight. Do we have a solid business case, and most importantly, where is the value?

To the project-hardened seniors reading this: what would resonate enough to give you chills?

You know the feeling when things really click: a peer succeeds in a difficult project, or the architecture comes together.

Those of us who have experienced dozens of projects and clients tend to know where those chills of success come from.

Meaning, we have created models from past experiences. These models are formal structures explaining the world around us.

The models

If we master and understand these models, they improve our communication, enable us to explain difficult scenarios, reason and cooperate with our teammates, predict the outcome of a solution, and explore different options.

When our thinking is logical, consistent and based on past experiences, we are more likely to make wise choices, whether we are a development team implementing a difficult business case or just cooking for our family.

The same is true, when a salesperson tries to find out what a customer truly needs or when we are implementing the next cloud-enabled solution for the customer.

Building value

It is all about building value. Is the food you prepare the value or is it merely a tool for you to have energy to play with your children?

Nevertheless, I'm sure each and every one of us has made up models for how to build value and get the chills in different situations.


Do we have a solid business case, and most importantly, where is the value? We should not ask what the customer wants, but what the customer needs.

Try the 3 Whys next time you're about to make a big decision:

“Why are we building the infrastructure in the cloud?”

“We need to grow our capacity and we need to be highly available and resilient.”

“Why is it important to improve scalability and resilience?”

“The old infrastructure cannot handle the increased traffic.”

“Why is the old infrastructure not enough anymore?”

“We have analysed the platform usage. If we get more capacity at rush hours, we can serve more customers.”

Productise

All right! We are convinced the customer needs the cloud migration and start working on the implementation.

Solita has productised the whole cloud journey, so we can get up to speed quickly.

Ready-made instructions, battle-proven implementations and peer support from earlier projects will help you get going.

Technical solutions follow a model, meaning there are only a few ways to provision a database, a few ways to do networking in the cloud, a few ways to optimise expenditure.

When we do not plan and develop everything from scratch, we build value.

Re-use

According to a study published in the book Accelerate: The Science of Lean Software and DevOps, to be productive, everything should be in version control.

The application code surely is already, but system configuration, application configuration and the scripts for automated build, test and delivery should be in version control too.

In fact, the study revealed that keeping configurations and scripts in version control correlated more strongly with software delivery performance than keeping the application code in version control did.

When building for the cloud, the infrastructure must be defined in code. And that code should reside… in version control!

This enables us to move faster without trade-offs. We can recognise modules, implement them once, create automated tests and then re-use the codebase in customer repositories.
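Module re-use can be illustrated with a plain-Python sketch: a parameterised "module" is written and tested once, then instantiated in each customer repository with different parameters. The resource shape below is made up for illustration; real projects would typically use a dedicated IaC tool:

```python
import json

def network_module(name: str, cidr: str, az_count: int = 2) -> dict:
    """A reusable 'module': one vetted definition, many instantiations."""
    return {
        "vpc": {"name": name, "cidr": cidr},
        "subnets": [
            {"name": f"{name}-subnet-{i}", "az_index": i}
            for i in range(az_count)
        ],
    }

# Re-used across customer repositories with different parameters.
customer_a = network_module("cust-a", "10.0.0.0/16")
customer_b = network_module("cust-b", "10.1.0.0/16", az_count=3)

print(json.dumps(customer_a, indent=2))
```

The point is the shape of the workflow: improvements and automated tests against `network_module` benefit every customer that instantiates it, instead of being re-done from scratch each time.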

Every time we are able to do this, we build value.

Automate

Humans are error-prone. This is an experience-based model. When computers handle the repetitive work, people are free to solve problems.

We know from experience that automation requires investment and should be implemented from day one. Investments made here result in shorter development lead times, easier deployments and higher-quality code.

In the cloud, we describe our infrastructure as code. This goes hand in hand with automation. Based on this model, we choose to automate recurring tasks such as building the code, running the tests and making the deployments.

As a result we speed up feedback loops, get repeatable results every time, and free developers to write quality code and automated tests. We test our infrastructure in the pipeline and once again build value.
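The recurring build-test-deploy tasks can be sketched as a tiny fail-fast pipeline runner. The step names and the steps themselves are placeholders; in a real pipeline each step would invoke a build tool, a test runner or a deployment script:

```python
def run_pipeline(steps):
    """Run named steps in order; stop at the first failure (fail fast)."""
    results = []
    for name, step in steps:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # no point deploying if the build or the tests failed
    return results

# Placeholder steps standing in for real build/test/deploy commands.
steps = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]
results = run_pipeline(steps)
```

The fail-fast ordering is the important design choice: a failing test must prevent the deploy step from ever running, which is exactly what hosted CI/CD systems enforce for you.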

Deliver

Delivering continuously in small chunks is an experience-based model.

You most surely want your piece of code tested and delivered to production before you forget what you were doing.

Short-lived branches and Trunk-Based Development predict performance. Failures in small changes are far easier to fix.

Test automation is also a key part of continuous delivery: reliable automated tests predict performance and improve quality.

In the study behind the book Accelerate, high performers who had adopted the continuous delivery model spent far more time on new work than on unplanned work or rework.

Although unplanned, emergency and refactoring work is a necessity, value is built when implementing new features.

Software Delivery Performance

Measuring software delivery performance is difficult. An incorrect productivity metric can easily lead to poor decisions.

If you want to deliver a feature as quickly as possible without trading away quality, one key metric is development lead time, because code that is not in production is waste.

For example, software delivery performance can be split into four topics:

  • Lead Time (from starting implementation to delivery to production)
  • Deployment Frequency (deploying more often results in smaller changes)
  • Mean Time to Restore (how quickly you recover from a failure)
  • Change Failure Rate (how often a change causes a failure)

The study made by the authors of the book Accelerate reports these measures for different types of organisations, from low performers to elite performers.
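The four metrics can be derived from deployment records. A minimal sketch, using a made-up record shape of (work started, deployed, caused a failure, time to restore):

```python
from datetime import datetime, timedelta

# Hypothetical deployment log for one week.
deployments = [
    (datetime(2024, 1, 1), datetime(2024, 1, 2), False, None),
    (datetime(2024, 1, 3), datetime(2024, 1, 4), True, timedelta(hours=2)),
    (datetime(2024, 1, 5), datetime(2024, 1, 6), False, None),
    (datetime(2024, 1, 7), datetime(2024, 1, 8), False, None),
]

# Lead Time: from starting implementation to delivery to production.
lead_times = [deployed - started for started, deployed, _, _ in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Deployment Frequency: deployments per day over the observed week.
deploy_frequency = len(deployments) / 7

# Change Failure Rate: share of deployments that caused a failure.
change_failure_rate = sum(1 for _, _, failed, _ in deployments if failed) / len(deployments)

# Mean Time to Restore: average recovery time of the failed changes.
restore_times = [restore for _, _, failed, restore in deployments if failed]
mttr = sum(restore_times, timedelta()) / len(restore_times)
```

Even this toy version shows why the four metrics balance each other: deploying more often tends to shrink lead time and change size, while the failure metrics guard against trading quality for speed.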

Conclusion

Past experiences make us what we are. Stop and think about the models you have crafted. Challenge yourself, interact with your peers, and find new models for building value.

Together with the team and the customer, you will find the best solution for the opportunity at hand. Remember that the actual implementation is only a fraction of the whole journey.

On the purely technical side, we should re-use as much as possible and automate as much as possible. When talking about cloud migration, the infrastructure must be described as code (IaC).

Does your organisation understand and use the models?

Does your organisation productise and re-use the codebase?

Let’s build value!

 

Avoid the pitfalls: what to keep in mind for a smooth start with cloud services

Many companies are looking for ways to migrate their data centre to a cloud platform. How do you avoid the potential pitfalls of migrating data centres to the public cloud? How do you plan your migration so that you are satisfied with the end result and achieve the set goals?

Why the public cloud?

The public cloud provides the ability to scale as needed, to use a variety of convenient SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) solutions, and to pay for exactly as much of a service as you use.

The public cloud gives a company the opportunity for a great leap in development: the provider's various services can be used during development, as they accelerate the work and help create new functionality.

All of this can be used conveniently without having to house your own data centre.

Goal setting

The first and most important step is to set a goal for the enterprise. The goal cannot be general; it must be specific and, if possible, measurable, so that at the end of the migration it is possible to assess whether the goal has been achieved.

Goal setting must be an internal collaboration between the business side and the technical side of the company. If even one party is excluded, it is very difficult to reach a satisfactory outcome.

The goals can be, for example, the following:

  • Cost savings. Do you find that running your own data centre is too expensive and operating costs are very high? Calculate how much resource the company spends on it, and set a target for the percentage of savings you want to achieve. However, cost savings are not recommended as the main goal; cloud providers also aim to make a profit. Rather, look for goals in the following areas to help you work more efficiently.
  • Agility, i.e. faster development of new functionalities and the opportunity to enter new markets.
  • Introduction of new technologies (ML or Machine Learning, IoT or Internet of Things, AI or Artificial Intelligence). The cloud offers a number of ready-made services that are very easy to integrate.
  • End of life for hardware or software. Many companies start considering migration to the cloud at the moment their hardware or software is about to reach its end of life.
  • Security. Data security is a very important issue, and it is becoming ever more so. Cloud providers invest heavily in security; it is a top priority for them, because an insecure service compromises customer data and makes customers reluctant to buy the service.

The main reason for migration failure is the lack of a clear goal (the goal is not measurable or not completely thought out).

Mapping the architecture

The second step should be to map the services and application architecture in use. This mapping is essential to choose the right migration strategy.

In broad strokes, applications fall into two categories: applications that are easy to migrate and applications that require a more sophisticated solution. Take, for example, a large monolithic application whose high availability is ensured by a Unix cluster. An application with this type of architecture is difficult to migrate to the cloud, and the migration may not produce the desired result.

The situation is similar with security. Although security is very important in general, it is especially important in situations where sensitive personal data of users, credit card data, etc. must be stored and processed. Cloud platforms offer great security solutions and tips on how to run your application securely in the cloud.

Security is critical to AWS, Azure and GCP, and they invest far more in it than individual customers ever could.

Secure data handling requires prior experience. I therefore recommend migrating applications with sensitive personal data at a later stage of the migration, once experience has been gained. It is also advisable to use the help of partners. Solita has previous experience in managing sensitive data in the cloud and is able to ensure the future security of the data as well. Partners can give advice and draw attention to small details that may not be evident without previous experience.

This is why it is necessary to map the architecture and understand what types of applications are used in the company. An accurate understanding of the application architecture will help you choose the right migration method.

Migration strategies

‘Lift and Shift’ is the easiest way: the application is transferred from one environment to another without major changes to code or architecture.

Advantages of the ‘Lift and Shift’ way:

  • In terms of labour, this type of migration is the cheapest and fastest.
  • It is possible to quickly release the resource used.
  • You can quickly fulfil your business goal – to migrate to the cloud.

 Disadvantages of the ‘Lift and Shift’ way:

  • There is no opportunity to use the capabilities of the cloud, such as scalability.
  • It is difficult to achieve financial gain on infrastructure.
  • Adding new functionalities is a bit tricky.
  • Almost 75% of such migrations are redone within two years: the application either moves back to its own data centre or is migrated again using another method. At first glance, ‘Lift and Shift’ seems simple and fast, but in the long run it does not open up the cloud’s opportunities and no efficiency gains are achieved.

‘Re-Platform’ is a migration approach in which a number of changes are made to the application so that it can use services provided by the cloud provider, such as the AWS Aurora database.

Benefits:

  • It is possible to achieve long-term financial gain.
  • It can be scaled as needed.
  • You can use a service whose reliability is the service provider’s responsibility.

 Possible shortcomings:

  • Migration takes longer than, for example, with the ‘Lift and Shift’ method.
  • The scope of the migration can grow rapidly due to the relatively large changes made to the code.

‘Re-Architect’ is the most labour- and cost-intensive way to migrate, but the most cost-effective in the long run. During re-architecting, the application code is changed enough that the application runs smoothly in the cloud. This means the application architecture takes full advantage of the opportunities and benefits the cloud offers.

Advantages:

  • Long-term cost savings.
  • It is possible to create a highly manageable and scalable application.
  • An application built on the cloud and a microservices architecture makes it easy to add new functionality and to modify the current one.

Disadvantages:

  • It takes more time and therefore more money for the development and migration.

Start with the goal!

Successful migration starts with setting and defining a clear goal to be achieved. Once the goals have been defined and the architecture has been thoroughly mapped, it is easy to offer a suitable option from those listed above: either ‘Lift and Shift’, ‘Re-Platform’ or ‘Re-Architect’.

Each strategy has its advantages and disadvantages. To establish a clear and objective plan, it is recommended to use the help of a reliable partner with previous experience and knowledge of migrating applications to the cloud.
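The trade-offs above can be condensed into a rough decision helper. The attributes and thresholds below are invented for the sketch; a real assessment needs the goal setting and architecture mapping described earlier:

```python
def suggest_strategy(monolithic: bool, needs_cloud_native_features: bool,
                     deadline_months: int) -> str:
    """Very rough strategy hint based on the trade-offs discussed above."""
    if monolithic and needs_cloud_native_features:
        return "Re-Architect"    # most work, most long-term benefit
    if needs_cloud_native_features:
        return "Re-Platform"     # adopt managed services with moderate changes
    if deadline_months <= 3:
        return "Lift and Shift"  # fastest, but often revisited within two years
    return "Re-Platform"
```

For example, a tightly coupled monolith that should start using managed cloud services would come out as `Re-Architect`, while a simple self-contained app on a tight deadline would come out as `Lift and Shift`.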

Turbulent times in security

We are currently living in very turbulent times: COVID-19 is still among us, and at the same time we are facing a geopolitical crisis in Europe not seen since the Second World War. You can and should prepare for the changed circumstances by getting the basics in order.

On top of the usual actors looking for financial gain, state-backed actors are now likely to activate their campaigns against services critical to society. This extends beyond the crisis zone; we have, for example, already seen denial-of-service attacks against banks. It is likely that various ransomware campaigns and data wipers will also be seen in Western countries, targeting utility providers, telecommunications, media, transportation and financial institutions and their supply chains.

So what should be done differently during these times to secure our business and environments? Often an old trick is better than a bagful of new ones: getting the basics right should always be the starting point. There are no shortcuts to securing systems, and no single magic box that can be deployed to fix everything.

Business continuity and recovery plans

Make sure your business continuity plan and recovery plan are available and revised, and require recovery plans from your service providers as well. Make sure that roles and responsibilities are clearly defined and everyone knows the decision-making tree. Check that contact information is up to date and that your service providers and partners have your correct contact information. It is also a good idea to practise cyberattack scenarios with your internal and external stakeholders to spot potential pitfalls in your plan in advance.

Know what you have out there!

How certain are you that your CMDB is 100% up to date? When did you last check how your DNS records are configured? Do you really know which services are visible to the internet? Are you aware of what software and versions you are using in your public services? These are the same questions malicious actors go through when gathering information on where and how to attack. This information is available on the internet for everyone to find, and all organizations should use it for their own protection too. There are tools and services (such as Solita WhiteHat) for running reconnaissance checks against your environment. Use them, or get a partner to help you.
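A first pass at "know what you have" can be automated with the standard library alone. This sketch resolves the hostnames you believe you own and lists the addresses actually answering for them, so the result can be compared against your CMDB; the inventory hostnames are placeholders:

```python
import socket

def resolve_all(hostname: str) -> set[str]:
    """Return every address a name resolves to (empty set if it doesn't resolve)."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        return set()
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address is the first element of sockaddr.
    return {info[4][0] for info in infos}

# Placeholder inventory; compare the results against your CMDB.
inventory = ["www.example.com", "api.example.com"]
for host in inventory:
    addrs = resolve_all(host)
    status = ", ".join(sorted(addrs)) if addrs else "does not resolve!"
    print(f"{host}: {status}")
```

This only covers DNS; a fuller reconnaissance would also check open ports, TLS certificates and exposed software versions, which is where dedicated services earn their keep.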

Keep your software and systems updated

This is something every one of us hears over and over again, but still: it is of utmost importance to keep software up to date! Every piece of software contains vulnerabilities and bugs that can be exploited. Vendors nowadays patch vulnerabilities that come to their attention rather quickly, so use that to your benefit and apply the patches.

Require multi-factor authentication and support strong passwords

This one is also on every recommendation list, and not for nothing. Almost all services nowadays make it possible to enable MFA, so why not require it? It is easy to set up and provides an additional layer of security for users, preventing brute forcing and password spraying. It does not replace a good, strong password, though. A rather small thing that helps users create strong passwords, and avoid reusing the same password across services, is to provide them with a password manager such as LastPass or 1Password. If you have an SSO service in place, make sure you get the most out of it.

Take backups and exercise recovery

Make sure you are backing up your data and services, and that the backups are stored somewhere other than the production environment, so that, for example, ransomware cannot make them useless. Of course, just taking backups is not enough: recovery should be tested periodically (at least yearly) to make sure it actually works when needed.

What if you get hit

One famous CEO once said that there are two types of companies: those that have been hacked and those that don't know they have been hacked. So what should you do if you even suspect you have been attacked?

Notify authorities

National authorities run CERT (Computer Emergency Response Team) teams, which maintain situational awareness and coordinate response actions at the national level. In Finland, for example, it is kyberturvallisuuskeskus.fi, and in Sweden cert.se. So if you suspect a possible data leak or attack, notify the local CERT and, at the same time, file a police report. It is also advisable to contact a service provider who can help you investigate and mitigate the situation. One good place to find providers of Digital Forensics and Incident Response services is dfir.fi.

Isolate breached targets and change/lock credentials

When you suspect a breach, isolate the suspected targets from the environment. If possible, cut off network access but let the resources keep running; this way you do not destroy possible evidence by turning off the services (shutting down servers, deleting cloud resources). At the same time, lock the credentials suspected of being used in the breach and change all passwords.

Verify logs

Check that you have logs available from the potentially breached systems. In the best case, the logs are available outside the system in question. If not, back them up to external storage to make sure the attacker cannot alter or remove them.

Remember to communicate

Communicate with stakeholders, and remember your users, customers and the public. Although it may feel challenging to deliver this kind of news, it’s much better to be open in the early stages than to get caught with your pants down later on.

To summarise

The threat level is definitely higher due to the circumstances mentioned above, but getting the basics in order helps you react if something happens. Keep in mind also that you don’t have to cope with this situation alone. Security service providers have the means and capacity to support you in an efficient way. Our teams are always willing to help keep your business and operations secure.

 


Hybrid Cloud Trends and Use Cases

Let's look at different types of cloud services and learn more about the hybrid cloud and how this cloud service model can exist at an organisation. We’ll also try to predict the future a bit and talk about what hybrid cloud trends we are expecting.

As an IT person, I dare say that today we use the cloud without even thinking about it. All kinds of data repositories, social networks, streaming services, media portals – they work thanks to cloud solutions. The cloud now plays an important role in how people interact with technology.

Cloud service providers are inventing more and more features and functionalities, bringing them to the IT market. Such innovative technologies offer even more opportunities for organisations to run a business. For example, AWS, one of the largest providers of public cloud services, announces over 100 product or service updates each year.

Cloud services

Cloud technologies are of interest to customers due to their cost efficiency, flexibility, performance and reliability.

For IT people, one of the most exciting aspects of using cloud services is the speed at which the cloud provides access to a resource or service. A few clicks at a cloud provider’s portal – and you have a server with a multi-core processor and large storage capacity at your disposal. Or a few commands on the service provider’s command line tool – and you have a powerful database ready to use.

Cloud deployment models

In terms of the cloud deployment model, we can identify three main models:

• A public cloud – The service provider has publicly available cloud applications, machines, databases, storage, and other resources. All this wealth runs on the IT infrastructure of the public cloud service provider, who manages it. The best-known players in the public cloud business are AWS, Microsoft Azure and Google Cloud.

In my opinion, one of the most pleasant features of a public cloud is its flexibility. We often refer to it as elasticity. An organisation can embark on its public cloud journey with low resources and low start costs, according to current requirements. 

Major public cloud players offer services globally. We can easily launch cloud resources in a geographical manner which best fits our customer market reach. 

For example, in a globally deployed public cloud environment, an organisation can serve its South American customers from a South American data centre, while a data centre located in one of the European countries serves European customers. This greatly reduces latency and improves customer satisfaction.

There is no need to invest heavily in hardware, licensing, etc. – the organisation spends money over time, and only on the resources actually used.

• A private cloud – This is an infrastructure for a single organisation, managed by the organisation itself or by a service provider. The infrastructure can be located in the company’s data centre or elsewhere.

The definition of a private cloud usually includes the IT infrastructure of the organisation’s own data centre. Most of these state-of-the-art on-premise solutions are built using virtualisation software. They offer the flexibility and management capabilities of a cloud.

Here, however, we should keep in mind that the capacity of a data centre is not unlimited. At the same time, a private cloud allows an organisation to implement its own standards for data security and to follow regulations where applicable. It also allows storing data in a suitable geographical area in its own data centre, for example to achieve ultra-low latency.

As usual, everything good comes with trade-offs. Think how complex it might be to expand a private cloud into a new region, or even a new continent. Hardware, connectivity, staffing, etc. – the organisation needs to take care of all this in a new operating area.

• A hybrid cloud – an organisation uses both its data centre IT infrastructure (or its own private cloud) and a public cloud service. Private cloud and public cloud infrastructures are separate but interconnected.

Using this combination, an organisation can store sensitive customer data in an on-premise application according to regulation in a private cloud. At the same time, it can integrate this data with corporate business analytics software that runs in a public cloud. The hybrid cloud allows us to use the strengths of both cloud deployment models.

Hybrid cloud model

When is a hybrid cloud useful?

Before we dive into the talk about hybrid cloud, I’d like to stress that we at Solita are devoted advocates of a cloud-first strategy, meaning the public cloud. At the same time, cloud-first does not mean cloud-only, and we recognise that there can be use cases where running a hybrid model is justified, be it for regulatory reasons or very low latency requirements.

Let’s look at some examples of when and how a hybrid cloud model can benefit an organisation. 

Extra power from the cloud

Suppose that a company has not yet made its migration to the public cloud, perhaps due to a lack of resources or cloud competence. It is running its private cloud in a colocation data centre. The private cloud is operating at a satisfactory level while the load and resource demand remain stable.

However, the company’s private cloud lacks the extra computing resources to handle growth in demand. An increased load on the IT systems is expected due to an upcoming temporary marketing campaign, as a result of which the number of visitors to the organisation’s public systems will increase significantly. How to address this concern?

The traditional on-premise way used to be getting extra resources in the form of additional hardware: new servers, larger storage arrays, more powerful network devices, and so on. This requires additional capital investment, and, just as importantly, adding these resources may not be fast.

The organisation must have the equipment delivered, installed and configured – and these jobs cannot always be automated to save time. And after the load on the IT systems has decreased with the end of the marketing campaign, the acquired additional computing power may end up sitting unused.

But given the capabilities of the cloud, a better solution is to get additional resources from the public cloud. The public cloud allows this to be done flexibly and on demand, as much as the situation requires. The company spends and pays for resources only as it needs them, without large monetary commitments. Let the cloud adoption start 😊

The organisation can access additional resources from the public cloud in hours or even minutes. These can be ordered programmatically and in an automated fashion in advance, timed to the marketing campaign.

When the time comes and there are many more visitors, the company will still maintain the availability of its IT systems. They will continue to operate at the required level with the help of the additional resources. This method is known as cloud bursting, i.e. resources “flow over” into another cloud environment.

This is the moment when the cloud journey begins for the organisation. It is an extremely important point in time, when the organisation must carefully evaluate its cloud competence and consider possible pitfalls on the road to cloud adoption.

For an organisation, it is often effective to find a good partner to assist with cloud migration. The partner with verified cloud competence will help to get onto cloud adoption rails and go further with cloud migration. My colleagues at Solita have written a great blog post about cloud migration and how to do it right.

High availability and recovery

Implementing high availability in your data centre and/or private cloud can be expensive. As a rule, high availability means that everything must be duplicated – machines, disk arrays, network equipment, power supply, etc. This can also mean double costs.

An additional requirement can be to ensure geo-redundancy of the data and keep a copy in another data centre. In such a case, the cost of using another data centre is added.

A good disaster recovery plan still requires a geographically separate recovery site to minimise risk. From the recovery site, a company can quickly get its IT systems back up and running in the event of a major disaster at the primary data centre. Is there a good solution to this challenge? Yes, there is.

A hybrid cloud simplifies the implementation of a high availability and recovery plan at a lower cost. As in the previous scenario, this is often a good starting point for an organisation’s cloud adoption. A good rule of thumb is to start small and expand your public cloud presence in controlled steps.

A warm disaster recovery site in the public cloud allows us to use cloud resources sparingly and without capital investment. Data is replicated from the main data centre and stored in the public cloud, but bulky computing resources (servers, databases, etc.) are turned off and do not incur costs.

In an emergency, when the main data centre is down, the resources on the warm disaster recovery site are turned on quickly – either automatically or at the administrator’s command. Because the data already exists on the replacement site, the switch-over is relatively quick and the IT systems have minimal downtime.

Once there is enough cloud competence on board, the organisation can move towards a cloud-first strategy. Eventually it can promote its public cloud recovery site to be the primary site, while the recovery site moves to the on-premise environment.

Hybrid cloud past and present

For several years, the public cloud was advertised as a one-way ticket. Many assumed that organisations would either move all their IT systems to the cloud or continue in their own data centres as they were. It was like there was no other choice, as we could read a few years ago.

As we have seen since then, this paradigm has now changed. It’s remarkable that even the big cloud players AWS and Microsoft Azure don’t rule out the need for a customer to design their IT infrastructure as a hybrid cloud.

Hybrid cloud adoption

Organisations have good reasons why they cannot always move everything to a public cloud. Reasons might include an investment in the existing IT infrastructure, some legal reasons, technical challenges, or something else.

Service providers are now rather favouring the use of a hybrid cloud deployment model. They are trying to make it as convenient as possible for the customer to adopt it. According to the “RightScale 2020 State of the Cloud” report published in 2020, hybrid cloud is actually the dominant cloud strategy for large enterprises:

Hybrid cloud is the dominant strategy

Back in 2019, only 58% of respondents preferred the hybrid cloud as their main strategy. There is a clear signal that the hybrid cloud offers the strengths of several deployment models to organisations. And companies are well aware of the benefits.

Cloud vendors vs Hybrid

How do major service providers operate on the hybrid cloud front? Microsoft Azure came out with Azure Stack – a service that is figuratively speaking a public cloud experience in the organisation’s own data centre.

Developers can write the same cloud-native code. It runs in the same way both in the public Azure cloud and in a “small copy” of Azure in the enterprise’s data centre. It gives a real cloud feeling, like a modern extension to a good old house that has become too small for the family.

Speaking of the multi-cloud strategy mentioned in the image above, Microsoft’s Azure Arc product is worth mentioning, as it is designed especially for managing multi-cloud environments and gaining consistency across multiple cloud services.

AWS advertises its hybrid cloud portfolio with the message that they understand that not all applications can run in the cloud – some must reside on customers’ premises, on their machines, or in a specific physical location.

A recent example of hybrid cloud thinking is AWS’s announcement of launching its new service ECS Anywhere. It’s a service that allows customers to run and manage their containers right on their own hardware, in any environment, while taking advantage of all the ECS capabilities that AWS offers in the “real” cloud to operate and monitor the containers. Among other things, it supports “bare” physical hardware and Raspberry Pi. 😊

As we’ve also seen just recently, the next step for Amazon to win hybrid cloud users was the launch of EKS Anywhere – this allows customers using Kubernetes to enjoy the convenience of a managed AWS EKS service while keeping their containers and data in their own environment, on their own data centre’s machines.

As we see, public cloud vendors are trying hard with their hybrid service offerings. It’s possible that after a critical threshold of hybrid services users is reached, it will create the next big wave of cloud adoption in the next few years. 

Hybrid cloud trends

The use of hybrid cloud related services mentioned above assumes that there is cloud competence in the organisation. These services integrate tightly with the public cloud. It is important to have skills to manage these correctly in a cloud-native way.

I think we will see a general trend in the near future that the hybrid cloud will remain. Multi-cloud strategy as a whole will grow even bigger. Service providers will assist customers in deploying a hybrid cloud while maintaining a “cloud native” ecosystem. So that the customer has the same approach to developing and operating their IT systems. It will not matter whether the IT system runs in a “real” cloud or on a hybrid model. 

The convergence of public, private and hybrid models will evolve, whereas public cloud will continue to lead in the cloud-first festival. Cloud competence and skills around it will become more and more important. The modern infrastructure will not be achievable anymore without leveraging the public cloud.

Ye old timey IoT, what was it anyway and does it have an upgrade path?

What were the internet connected devices of old that collected data? Are they obsolete and need to be replaced completely or is there an upgrade path for integrating them into data warehouses in the cloud?

Previously on the internet

In the beginning the Universe was created.
This has made a lot of people very angry and been widely regarded as a bad move.

— Douglas Adams in his book “The Restaurant at the End of the Universe”

Then someone had another great idea: to create computers, the Internet and the World Wide Web, and ever since then it’s been a constant stream of all kinds of multimedia content that one might enjoy as a kind of remittance for the previous blunders by the universe. (As these things usually go, some people have regarded these as bad moves as well.)

Measuring the world

Some, however, enjoy a completely different type of content. I am talking about data, of course. This need for understanding and measuring the world around us has been with us ever since the dawn of mankind, but a worldwide interconnected network combined with cheaper and better automation has accelerated our efforts massively.

Previously you had to trek to the ends of the earth, usually accompanied by great financial and bodily risk, to try and set up test equipment or to monitor it with your senses and write down the readings on a piece of paper. But then, suddenly, electronic sensors and other measurement apparatus could be combined with a computer to collect data on-site and warehouse it. (Of course, back then we called a warehouse of data a “database” or a “network drive” and had none of this new-age poppycock terminology.)

Things were great; No need any longer to put your snowshoes on and risk being eaten by a vicious polar bear when you could just comfortably sit on your chair next to a desk with a brand new IBM PS/2 on it and check the measurements through this latest invention called Mosaic web browser or a VT100 terminal if your department was really old-school. (Oh, those were the days.)

These prototypes of IoT devices were very specialized pieces of hardware for very special use cases for scientists and other rather special types of folk, and no Internet-connected washing machines were in sight, yet. (Oh, in hindsight, ignorance is bliss. Is it not?)

The rise of the acronym

First, they used Dual-Tone Multi-Frequency signalling, or DTMF. You put your phone next to the device, pushed a button on it, and the thing would scream an ear-shattering series of audible pulses into your phone, which then relayed them to a computer somewhere. Later, if you were lucky, a repairman would come over, completely disregard the self-diagnostic report your washing machine had just sent over the telephone lines, and usually either fix the damn thing or make the issue even worse while cursing computers all the way to hell. (Plumbers make bad IT support personnel and vice versa.)

From landlines to wireless

So because of this, and many other reasons, someone had a great idea to network devices like these directly to your Internet connection and cut the middle man, your phone, off the equation altogether. This made things simpler for everyone. (Except for the poor plumber who still continued to disregard the self-diagnostic reports.) And everything was great for a while again until, one day, we woke up and there was a legion of decades-old washing machines, tv’s, temperature sensors, cameras, refrigerators, ice boxes, video recorders, toothbrushes and plethora of other “smart” devices connected to the Internet.

Internet Of Things, or IoT for short, describes these devices as a whole and the phenomenon, the age, that created them.

Suddenly it was no longer just a set of specialized hardware for special people that collected data with connected smart devices. Now it was for everybody. (This has, yet again, been regarded as a bad move.) If we look past the obvious security concerns that this craze of connecting every single (useless) thing to the Internet has created, we can also see the benefit. The data flows, and data is the new oil, as the saying goes.

And there is a lot of data

The volume of data collected with all these IoT devices is staggering, and therefore simple daily old-timey FTP transfers to a central server are no longer a viable way of collecting it. We have come up with new protocols like REST, WebSockets, and MQTT to ingest real-time streams of new data points into our databases from all of these data-collecting devices.
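Whatever the transport protocol, the data points themselves usually travel as small structured payloads. As a hedged illustration – the topic name and field layout below are my own invention, not any standard – this is roughly what preparing one measurement for MQTT or REST ingestion could look like in Python:

```python
import json
import time

def make_datapoint(sensor_id, metric, value, ts=None):
    """Serialise one measurement as JSON, the common currency of REST/MQTT ingestion."""
    return json.dumps({
        "sensor": sensor_id,
        "metric": metric,
        "value": value,
        "ts": time.time() if ts is None else ts,
    })

payload = make_datapoint("ws-01", "temperature_c", -7.3, ts=1600000000.0)
# With a broker available, this could be published with e.g. the paho-mqtt client:
#   client.publish("sensors/ws-01/temperature", payload)
print(payload)
```

The point is that each reading becomes a small, self-describing message that a stream ingestion service can accept one at a time, instead of a nightly bulk file transfer.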

Eventually, all backend systems were migrated or converted into data warehouses that were only accepting data with these new protocols and therefore were fundamentally incompatible with the old IoT devices.

What to do? Obsolete and replace them all or is there something that can be done to extend the lifespan of those devices and keep them useful?

The upgrade path, a personal journey

As an example of an upgrade path, I shall share a personal journey on which I embarked in the late 1990s. At this point in time, it is a macabre exercise in fighting against inevitable obsolescence, but I have devoted tears, sweat, and countless hours over the years to keeping these systems alive, and today’s example is no different. The service in question runs on a minimal budget and with volunteer effort, so heavy doses of ingenuity are required.

Vaisala weather station located at Hämeenlinna observatory is now connected with a Moxa serial server to remote logger software.

 

Even though Finland is located near or on the Arctic Circle, there are no polar bears around, except in a zoo. Setting up a Vaisala weather station is not something that will cause a furry meat grinder to release your soul from your mortal coil; no, it is actually quite safe. Due to a few circumstances and happy accidents, it is just what I ended up doing two decades ago when setting up a local weather station service in the city of Hämeenlinna. The antiquated 90s web page design is one of those things I look forward to updating at some point, but today we are talking about what goes on in the background. We talk about data collection.

The old, the garbage and the obsolete

Here, we have the type of equipment that measures and logs data points about the weather conditions at a steady pace. The measurements are then read out by specialized software on a computer placed next to the station, since the communication is just plain old ASCII over a serial connection. The software is old. I mean really old. Actually, I am pretty sure that some of you reading this were not even born back in 1998:

Analysis of text strings inside a binary
The image above shows an analysis of the program YourVIEW.exe that is used to receive data from this antiquated weather station. It is programmed with LabVIEW version 3.243, released back in 1998. This software does not run properly on anything newer than Windows 2000.

This creates a few problematic dependencies; Problems that tend to get bigger with passing time.

The first issue is an obvious one: an old and unsupported version of the Windows operating system. No new security patches or software drivers are available, which in any IT scenario is a huge problem – but still a common one in any aging IoT solution.

The second problem: no new hardware is available. No operating system support means no new drivers, which means no new hardware if the old one breaks down. After spending a decade scavenging this and that piece of obsolete computer hardware to pull together a somewhat functioning PC, I can say it is quite a daunting task that keeps getting harder every year. People tend to just dispose of their old PCs when buying new ones, so the half-life of old PC “obtainium” is really short.

The third challenge: one can’t get rid of Windows 2000 even if one wanted to, since the logging software does not run on anything newer than that. And yes, I tried even black magic, voodoo sacrifices and Wine under Linux, to no avail.

And finally, the data collection itself is a problem: how do you modernize something that uses its own data collection/logging software, and integrate it with modern cloud services, when said software was conceived before modern cloud computing even existed?

Path step 1, an intermediate solution

As with any problem of a technical nature, investigating it yields several solutions, but most of them are infeasible for one reason or another. In my example case, I came up with a partial solution that later enables me to continue building on top of it. At its core this is a cloud journey, a cloud migration, not much different from those I work on daily with our customers at Solita.

For the first problem, Windows updates, we really can’t do anything without updating the Windows operating system to a more recent and supported release – but unfortunately, the data logging software won’t run on anything newer than Windows 2000. Lift and shift it is, then. The solution is to virtualize the server and bolster the security around the vulnerable underbelly of the system with firewalls and other security tools. This has the added benefit of improving the service SLA, thanks to the absence of server/workstation hardware failures, network outages and power outages. However, since the weather station communicates over a serial connection (RS232), we also need to somehow virtualize the added physical distance away. There are many solutions, but I chose a Moxa NPort 5110A serial server for this project. Combined with an Internet router capable of creating a secure IPsec tunnel between the cloud and the on-site network, and by using Moxa’s Windows RealCOM drivers, one can securely virtualize the on-site serial port to the remote Windows 2000 virtual server.

How about modernizing the data collection, then? Luckily, YourVIEW writes the received data points into a CSV file, so it is possible to write a secondary logger in Python that collects those data points into a remote MySQL server as they become available.
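As a sketch of what the parsing step of such a secondary logger might look like – note that the column layout here is hypothetical, and the real YourVIEW CSV format may differ:

```python
import csv
from io import StringIO

# Hypothetical YourVIEW column layout; adjust to match the real file.
FIELDS = ("timestamp", "temperature_c", "humidity_pct")

def parse_rows(text):
    """Parse CSV rows appended by the logger into dicts ready for database INSERTs."""
    reader = csv.reader(StringIO(text))
    records = []
    for row in reader:
        if len(row) != len(FIELDS):
            continue  # skip partial or garbled lines
        rec = dict(zip(FIELDS, row))
        rec["temperature_c"] = float(rec["temperature_c"])
        rec["humidity_pct"] = float(rec["humidity_pct"])
        records.append(rec)
    return records

rows = parse_rows("2021-06-01 12:00,21.5,40\n2021-06-01 12:10,21.7,41\n")
# Each dict could then be written to MySQL, e.g. with a client library:
#   cur.execute("INSERT INTO weather (ts, temp, hum) VALUES (%s, %s, %s)", ...)
print(len(rows))  # 2
```

A real logger would additionally tail the CSV file for newly appended lines and remember its position between runs, so each data point is inserted exactly once.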

Path step 2, next steps

What was before a vulnerable and obsolete pile of scavenged parts is still a pile of obsolete junk, but now it has a way forward. Many would have discarded this data collection platform as garbage and thrown it away, but with this example I hope to demonstrate that everything has a migration path. With proper lifecycle management, your IoT infrastructure investment does not necessarily need to be only a three-year plan – one can expect to gain returns for decades.

An effort on my part is ongoing to replace the YourVIEW software altogether with a homebrew logger that runs in a Docker container and publishes data with MQTT to Google Cloud Platform IoT Core. IoT Core together with Google Cloud Pub/Sub makes an unbeatable data ingestion framework. Data can be stored in, for example, Google Cloud SQL, and/or exported to BigQuery for additional data warehousing and finally visualized, for example, in Google Data Studio.

Even though I use the term “logger” here, the term “gateway” would be suitable as well. Old systems require interpretation and translation to be able to talk to modern cloud services. Either a commercial solution exists from the hardware vendor or, as in my case, one has to be written.

Together we are stronger

I would like to think that my very specific example above is unique, but I am afraid it is not. In principle, all integration and cloud migration journeys have their unique challenges.

Luckily, modern partners like Solita, with extensive expertise in cloud platforms such as Google Cloud Platform, Amazon Web Services or Microsoft Azure, and in software development, integration, and data analytics, can help a customer tackle these obstacles. Together we can modernize and integrate existing data collection infrastructures, for example on the web, in healthcare, in banking, on the factory floor, or in logistics. Throwing existing hardware or software into the trash and replacing it with new is time-consuming, expensive, and sometimes easier said than done. Therefore, carefully planning an upgrade path with a knowledgeable partner might be a better way forward.

Even when considering investing in a completely new solution for data collection a need for integration is usually a requirement at some stage of the implementation and Solita together with our extensive partner network is here to help you.

My Wednesday at AWS re:Invent 2019

It was an early morning today because the alarm clock woke me up around 6 am. The day started with the Worldwide Public Sector Keynote talk at 7 am in the Venetian Palazzo O hall.

Worldwide Public Sector Breakfast Keynote – WPS01

This was my first time taking part in the Public Sector keynote. I’m not sure how worldwide it was – at least Singapore and Australia were mentioned, but I cannot remember anything specific said about Europe.

Everyone following the international security industry even a little cannot have missed how many cities, communities, etc. have faced ransomware attacks. Some victims paid the ransom in Bitcoin, some did not pay, and many victims just stay quiet. The public cloud environment is a great way to protect your infrastructure and important data. Here is a summary of how to protect yourself:

RMIT University from Australia has multiple education programs for AWS competencies, and it was announced that they are now an official AWS Cloud Innovation Centre (CIC). Typical students have some educational background (e.g. a bachelor’s degree in IT) and want to make a move in the job market through re-education. Sounds like a great way!

The speaker, Mr. Martin Bean from RMIT, showed a picture from The Jetsons (1963, by Hanna-Barbera Productions) that already featured multiple things that were invented for mass markets much later. Mr. Bean also mentioned two things that got my attention: more people own a cellphone than a toothbrush, and 50 percent of jobs are going to transform into something else in the next 20 years.

Visit to expo area

After the keynote I visited the expo in the Venetian Sands Expo area before heading to the Aria for the rest of Wednesday. The expo was huge, noisy, crowded, etc. The more detailed experience from last year was enough for me. At the AWS Cloud Cafe I took a panorama picture (click to zoom in), and that was it – I was ready to leave.

I took the shuttle bus towards the Aria. I was very happy that the bus driver dropped us off next to the main door of the Aria hotel, which saves on average 20-30 minutes of queueing in Aria’s parking garage. An important change! On the way I passed the Manhattan of New York.

Get started with Amazon ElastiCache in 60 minutes – DAT407

Mr. Kevin McGehee (Principal Software Engineer, AWS) was the instructor for the ElastiCache Redis builder session. In the session we logged in to the Amazon console, opened a Cloud9 development environment and then just followed the clearly written instructions.

The guide for the builder session can be found here: https://reinvent2019-elasticache-workshop.s3.amazonaws.com/guide.pdf

This session was about how to import data into Redis via Python, and how to index and refine the data during the import phase. In refinement, the data becomes information with aggregated scoring, geolocation, etc., making it easier for the requestor to use. That was interesting and looked easy.

Build an effective resource compliance program – MGT405

Mr. Faraz Kazmi (Software Development Engineer, AWS) held this builder session.

Conformance packs under the AWS Config service were published last week. They can be integrated at the AWS Organizations level in the account structure. With conformance packs you can group config rules (~governance rules for common settings) easily in a YAML template and get a consolidated view over those rules. A few AWS-managed packs are currently available; the “Operational Best Practices For PCI-DSS” pack is one example. It’s clear that AWS will provide more and more of these rule sets in the upcoming months, and so will the community via GitHub.
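To illustrate the idea, a minimal conformance pack template might group two AWS-managed rules like this – the resource names are my own, so check the AWS Config documentation for the exact template format before use:

```yaml
# Sketch of a conformance pack template: two managed Config rules in one pack.
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
  RootMfaEnabled:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: root-account-mfa-enabled
      Source:
        Owner: AWS
        SourceIdentifier: ROOT_ACCOUNT_MFA_ENABLED
```

A template like this can then be deployed as one unit, giving a consolidated compliance view over the rules it contains.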

There are timeline and compliance views of all your resources, which makes this a very effective tool for getting a consolidated view of resource compliance.

You can find the material here: https://reinvent2019.aws-management.tools/mgt405/en/

By the way, if you cannot find conformance packs, you are possibly using the old Config service UI in the AWS Console. Make sure to switch to the new UI – all new features are only added to the new UI.

The clean-up phase in the guide is not perfect: in addition to the steps in the guide, you have to manually delete the SNS topic and IAM roles that were created by the wizards. It was a disappointment that no sandbox account was provided.

Best practices for detecting and preventing data exposure – MGT408

Ms. Claudia Charro (Enterprise Solutions Architect, AWS) from Brasilia was the teacher in this session. It was very similar to my previous session, which I was not aware of beforehand. In both sessions we used Config rules and blocked public S3 usage.

The material can be found here: https://reinvent2019.aws-management.tools/mgt408/en/cont/testingourenforcement.html

AWS Certification Appreciation Reception

The Wednesday evening started (as usual) with a reception for certified people at the Brooklyn Bowl. It is again a nice venue for some food, drinks, music and mingling with other people. I’m writing this around 8 pm, as I left a bit early to get a good night’s sleep before Thursday, the last full day.

[Photos: Brooklyn Bowl outside, inside, bowling lanes and dance floor]

On the way back to my hotel (Paris Las Vegas) I found a version 3 Tesla Supercharger station, one of the first v3 stations in the world. It was not too crowded. The station was big compared with the Supercharger stations in Finland. The v3 Supercharger stations can provide up to 250 kW of charging power for the Model 3 Long Range (LR), which has a 75 kWh battery. I would have liked to see the new (experimental) Cybertruck model.

Would you like to hear more about what happens at re:Invent 2019? Sign up for our team's WhatsApp group to chat with them, and register for the What happens in Vegas won't stay in Vegas webinar to hear a full sum-up after the event.

My Tuesday at AWS re:Invent 2019

Please also check my blog post from Monday.

Starting from Tuesday, each event venue provides a great breakfast with lightning-fast service. It scales and it works. It's always amazing how each venue can provide food service for thousands of people in a short period of time. Most people are there for the first time, so guidance has to be very clear and simple.

Today started with the keynote by Mr. Andy Jassy. I was not able to join the live session at the Venetian because of my next session. Moving from one location to another takes at least 15 minutes, and you have to be at your session at least 10 minutes early to claim your reserved seat. Starting last year, the booking system forces a one-hour gap between sessions in different venues.

Keynote by Andy Jassy on Tuesday

You can find the full recap written by my colleagues here: Andy Jassy’s release roller coaster at AWS re:Invent 2019

Machine learning was the thing today. The SageMaker service received tons of new features. Those will be explained by our ML specialists here: Coming soon!

So I joined the overflow room at the Mirage for Jassy's keynote session. Everyone gets their own headphones. I have more than 15 years of background in software development, so it was love at first sight with CodeGuru. There is good news and bad news. It is a service for static analysis of your code, for making reviews and, last but definitely not least, for providing real-time profiling via an installed agent.

The profiling information is provided in 5-minute periods and covers several factors: CPU, memory and latency. It looks like a promising product: Mr. Jassy said that Amazon has used it internally for a couple of years already, so it is already quite mature.

So, what was the bad news? It supports only Java. Nothing to add to that.

The other interesting announcement for me was the general availability of Outposts. Finally, also in Europe, you can have fully AWS-managed servers inside your corporate data center. The servers integrate fully with the AWS Console and can be used e.g. for running ECS container services. The starting price of 8,300 USD per month is very competitive, because it already includes roughly 200 cores, 800 GB of memory and 2.7 TB of instance storage. You can additionally add EBS storage, starting from 2.7 TB.

You can find more information here: https://aws.amazon.com/blogs/aws/aws-outposts-now-available-order-your-racks-today/

Performing analytics at the edge – IOT405

This session was a workshop at level 400 (the highest). It was held by Mr. Sudhir Jena (Sr. IoT Consultant, AWS) and Mr. Rob Marano (Sr. Practice Manager, AWS).

Industry 4.0, a.k.a. IoT, was a totally new sector for me. It was a very informative and pleasant session. It was all about the AWS IoT Greengrass service, a managed platform that can provide low-latency responses and has tons of features for handling data streams from IoT devices locally.

For many people it was their first touch with the AWS Cloud Development Kit, which I fell in love with about three months ago. It has multiple advantages like refactoring, strong typing and good IDE support. You can find more information about the AWS CDK here: https://docs.aws.amazon.com/cdk/latest/guide/home.html

In our workshop session we demonstrated receiving a time series data stream (temperature, humidity etc.) from an IoT device. In our case the IoT device was an EC2 instance simulating one. From the AWS IoT Greengrass console you can e.g. deploy new versions of analytics functions to the IoT devices.
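A simulated device like the one in the workshop essentially emits JSON readings on a schedule; here is a minimal sketch with field names of my own choosing, not the workshop's exact payload:

```python
import json
import random
import time
from datetime import datetime, timezone

def make_reading(device_id: str) -> dict:
    """Build one telemetry data point, similar in spirit to the
    workshop's simulated temperature/humidity sensor."""
    return {
        "deviceId": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature": round(random.uniform(18.0, 28.0), 2),  # degrees C
        "humidity": round(random.uniform(30.0, 60.0), 2),     # percent
    }

# In the real workshop the payload would be published to a local
# Greengrass MQTT topic instead of printed.
if __name__ == "__main__":
    for _ in range(3):
        print(json.dumps(make_reading("sensor-1")))
        time.sleep(0.1)
```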

The workshop material can be found on GitHub: https://github.com/aws-samples/aws-iot-greengrass-edge-analytics-workshop

AWS Transit Gateway reference architectures for many VPCs – NET406

This was a regular session, held by Nick Matthews (Principal Solutions Architect, AWS) in the glamorous Ballroom F at the Mirage. It fits more than a thousand people and was almost full, so it was a very popular session.

To summarize the topic: there are several good ways to interconnect multiple VPCs and a corporate data center. At small scale you can do things manually, but at large scale you need automation.

One solution for automation was based on the use of tags. An autonomous team (the owner of account A) tags their shared resources in a predefined way. The transit account can pick up those changes via CloudTrail logging: each modification creates a CloudTrail audit event, which triggers a Lambda function. The function checks whether a change is required and writes a change request item to a metadata table in DynamoDB to wait for approval. The network operator is notified via SNS (Simple Notification Service) and can then approve (or decline) the modification. Another Lambda then makes the needed route table modifications in the transit account and in account A.
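A minimal sketch of the first Lambda's logic, assuming a CloudTrail CreateTags event delivered via EventBridge (the tag key and the change-request item layout are my own inventions, not the talk's exact code):

```python
def build_change_request(event: dict):
    """Turn a CloudTrail CreateTags event into a pending change request item.

    The event shape follows CloudTrail's EC2 CreateTags records; the
    "tgw-route" tag key is a hypothetical convention for this sketch.
    Returns None when the event is not relevant.
    """
    detail = event.get("detail", {})
    if detail.get("eventName") != "CreateTags":
        return None
    tags = detail.get("requestParameters", {}).get("tagSet", {}).get("items", [])
    route_tags = [t for t in tags if t.get("key") == "tgw-route"]
    if not route_tags:
        return None
    # This item would be written to the DynamoDB metadata table and the
    # operator notified via SNS to approve or decline it.
    return {
        "accountId": detail.get("userIdentity", {}).get("accountId"),
        "requestedRoute": route_tags[0].get("value"),
        "status": "PENDING_APPROVAL",
    }
```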

If you are interested, you can watch a video from August 2019: https://pages.awscloud.com/AWS-Transit-Gateway-Reference-Architectures-for-Many-Amazon-VPCs_2019_0811-NET_OD.html

If you want to wait, I'm pretty sure this re:Invent talk was also recorded and will appear on the AWS YouTube channel in a few weeks: https://www.youtube.com/user/AmazonWebServices

Fortifying web apps against bots and scrapers with AWS WAF – SEC357

Mr. Yuri Duchovny (Solutions Architect, AWS) held the session. It was the most intensive session, with lots to do and many architectural examples and usage scenarios on the demo screen. The AWS WAF service has got a shiny new UI in the AWS Console. AWS has also published a few new features in the last few weeks, e.g. managed rules that give more protection in a nondisruptive way. Previously WAF itself did not have many predefined protection rules; only XSS (cross-site scripting) and SQLi (SQL injection) were supported, and all other rules had to be configured manually, e.g. as regular expressions.

WAF is a service that should always be turned on for CloudFront distributions, Application Load Balancers (ALB) and API Gateway.
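As a sketch of how the new managed rules can be attached nondisruptively, here is a minimal WAFv2 rule entry built in Python (the rule group name is a real AWS managed group; the helper itself is my own construction, not from the session):

```python
def managed_rule(name: str, priority: int, count_only: bool = True) -> dict:
    """Build a WAFv2 web ACL rule entry referencing an AWS managed rule group.

    With the Count override the rule group only records matches, which is
    the nondisruptive way to evaluate it before actually blocking traffic.
    """
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name},
        },
        # Count = observe only; switch to {"None": {}} to enforce the group.
        "OverrideAction": {"Count": {}} if count_only else {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

# The rule list would then be passed to the wafv2 API, e.g.
#   boto3.client("wafv2").create_web_acl(..., Rules=rules, ...)
rules = [managed_rule("AWSManagedRulesCommonRuleSet", 0)]
```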

The workshop material is again public and can be found here: https://github.com/gtaws/ProtectWithWAF

Encryption options for AWS Direct Connect – NET405

Mr. Sohaib Tahir (Sr. Solutions Architect, AWS) from Seattle was the teacher in this session. It was more listening than doing because of the short period of time. We (the attendees) were a group of seven from the USA, Japan and Finland.

There were five possibilities for encrypting a Direct Connect connection:

1. Private VIF (virtual interface) + application-layer TLS
2. Private VIF + virtual VPN appliances (can be in transit VPC)
3. Private VIF + detached VGW + AWS Site-to-site VPN (CloudHub functionality)
4. Public VIF + AWS Virtual Private Gateway (GP, IPSec tunnel, BGP)
5. Public VIF + AWS Transit Gateway (BGP, IPSec tunnel, BGP) NEW!

It's good to remember that a single VPN connection has a 1.25 Gbps limit, which can easily be hit with a DX connection and e.g. data-intensive migration jobs. AWS's recommendation is to use architecture number five if possible. The fifth architecture requires you to have your own dedicated Direct Connect, so you cannot use a shared-model Direct Connect from a 3rd-party operator.
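The arithmetic behind that recommendation is simple: Transit Gateway can spread traffic over multiple VPN tunnels with ECMP, so you need enough tunnels to cover the DX bandwidth. A quick sketch:

```python
import math

TUNNEL_LIMIT_GBPS = 1.25  # per-VPN-tunnel throughput limit

def tunnels_needed(dx_gbps: float) -> int:
    """How many equal-cost VPN tunnels are needed to carry a DX link's
    bandwidth when Transit Gateway spreads traffic across them with ECMP."""
    return math.ceil(dx_gbps / TUNNEL_LIMIT_GBPS)

print(tunnels_needed(10))  # a 10 Gbps DX port needs 8 tunnels
```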

Yesterday AWS published cross-region VPC connectivity via Transit Gateway. During the session Mr. Tahir started to demonstrate this new feature ad hoc, but we ran out of time.

My Monday at AWS re:Invent 2019

I started the day with breakfast at Denny's. It was nice to have a typical (I think) American breakfast. Thanks, Mr. Heikki Hämäläinen, for the company. By the way, all attendees from Solita are wearing the bright red hoodies shown in the picture, thanks to our Cloud Ambassador Anton Floor. The hoodies make it a lot easier to spot a colleague in crowded places. Okay, let's start going through my actual sessions.

How NextRoll leverages AWS Batch for daily business operations – CMP311

Mr. Roozbeh Zabihollahi, Tech Lead at the advertising company NextRoll, briefly described their journey with the AWS Batch service. If I remember correctly, they use about 5,000 CPU years, which is a huge amount of computing power. It was nice to hear that NextRoll quite freely allows their teams to choose which services they want to use. Nowadays Mr. Zabihollahi sees more and more teams looking into AWS Batch as a promising choice, rather than Hadoop or Spark.

Mr. Zabihollahi believes that AWS Batch is good for several things:

If you are considering starting to use AWS Batch, you should be familiar with at least these challenges:

Mr. Steve Kendrex (Sr. Technical Product Manager, AWS) presented the road map of the AWS Batch service. Support for Fargate (the serverless container service) is coming, but Steve could not provide details to a wide audience. My personal guess is that Spot instance support for Fargate is coming soon, which would provide a key cost-efficiency factor for batch operations.
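For reference, submitting work to AWS Batch is a single API call; here is a minimal parameter sketch with boto3 in mind (the queue and job definition names are hypothetical):

```python
def batch_job_request(name: str, queue: str, definition: str,
                      command: list) -> dict:
    """Build the parameters for an AWS Batch submit_job call.

    The containerOverrides block lets one job definition run
    different commands per job.
    """
    return {
        "jobName": name,
        "jobQueue": queue,
        "jobDefinition": definition,
        "containerOverrides": {"command": command},
    }

# With boto3 this would be submitted as:
#   boto3.client("batch").submit_job(**request)
request = batch_job_request("nightly-report", "default-queue",
                            "report-job:1", ["python", "report.py"])
```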

Build self-service registration with facial recognition – ARC320

My first builder session this year was about integrating facial recognition for registering guests at an event. Four other attendees and I were led into this interesting topic by Mr. Alan Newcomer (Solutions Architect, AWS). Mr. Newcomer had previously lived near Las Vegas, which was interesting to hear about.

Each builder session starts with a short queue for the right table, at which you have hopefully reserved a spot beforehand:

The hall has multiple tables, each with 7 chairs (one for the teacher and 6 for participants) and a screen for guidance purposes.

Typically the teacher provides a website with all the information required to do the exercise. In addition, the teacher provides a unique password for each participant, e.g. for AWS Console login. After that, each participant can start doing the exercise by themselves; the teacher helps whenever needed. You need to keep a good pace the whole time to be able to finish the whole exercise.

During the recognition session we built an application that had three main functionalities: registering a user, RSVPing one day before the event, and finally registering the user at the event via facial recognition. You can actually work through the workshop material by yourself here: http://regappworkshop.com/
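The facial recognition step presumably boils down to a similarity check on a face comparison result; here is a small sketch shaped after Rekognition's CompareFaces response (the threshold value is my own guess, not the workshop's setting):

```python
def is_registered(compare_faces_response: dict, threshold: float = 90.0) -> bool:
    """Decide whether a guest's photo matches their registered photo.

    The response shape mirrors Rekognition's CompareFaces output:
    each entry in FaceMatches carries a Similarity percentage.
    """
    matches = compare_faces_response.get("FaceMatches", [])
    return any(m.get("Similarity", 0.0) >= threshold for m in matches)

# boto3.client("rekognition").compare_faces(SourceImage=..., TargetImage=...)
# would return a response along these lines:
sample = {"FaceMatches": [{"Similarity": 97.5}], "UnmatchedFaces": []}
print(is_registered(sample))  # True
```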

Managing DNS across hundreds of VPCs – NET411

This was my second chalk talk today. It started very well, because right at the beginning the audience heard real-life problems from different attendees. The chalk talk was guided by Mr. Matt Johnson (Manager, Solutions Architecture, WWPS, AWS) and Mr. Gavin McCullagh (Principal System Development Engineer, AWS). They did extremely well.

We were reminded that support for overlapping private zones was published recently. It enables autonomous, structured DNS management in a multi-account environment. For more information, go to: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-route-53-now-supports-overlapping-namespaces-for-private-hosted-zones/

During the session we looked at four different architectures for sharing DNS information with multiple VPCs (~accounts). Number four, “Share and Associate Zones and Rules”, was the most interesting; it suits a massive number of private DNS zones and VPCs. It has a hub account for outbound DNS traffic to the corporate network, and it uses private zone association between VPCs. The association does not yet have native CloudFormation support, but there are several ways to handle it, e.g. CloudFormation custom resources or a custom Lambda function.
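For the association itself, Route 53 has a two-step cross-account flow: the zone owner authorizes the association, and the VPC owner then completes it. A sketch of the shared parameters (IDs are placeholders):

```python
def association_params(zone_id: str, vpc_id: str, region: str) -> dict:
    """Parameters shared by the two Route 53 calls needed for a
    cross-account private hosted zone association."""
    return {
        "HostedZoneId": zone_id,
        "VPC": {"VPCRegion": region, "VPCId": vpc_id},
    }

params = association_params("Z123EXAMPLE", "vpc-0abc1234", "eu-west-1")
# Step 1, in the zone-owning account:
#   boto3.client("route53").create_vpc_association_authorization(**params)
# Step 2, in the account that owns the VPC:
#   boto3.client("route53").associate_vpc_with_hosted_zone(**params)
```

This is the kind of logic you would wrap in a CloudFormation custom resource or a Lambda to automate the association at scale.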

One major feature request was that AWS should support DNS query logging (both query and response) in the VPC. The audience wanted the logging information delivered to CloudWatch log groups; logging is needed for security/audit and debugging purposes.

Processing AWS Ground Station data in AWS – NET409

This, my second builder session, sounded very fancy: handling data from satellites. The attendees had very different levels of experience with AWS and with satellites, everything from one to five in both topics. After the session I needed to update my CV…

There are a few open satellites in the sky. They can be listened to with the AWS Ground Station service, and the data can be received into your AWS account. The data link between the Ground Station and your VPC is made via an elastic network interface (ENI).

In the example case we received a 15 Mbps stream for 15 minutes, the period during which the satellite was visible to the Ground Station's antenna system. The stream from Ground Station always needs to be received by the Kratos DataDefender software, which parses the UDP traffic. The Ground Station traffic is not in the right order and is sometimes missing pieces, which DataDefender handles.

The data stream was analyzed in a few phases via an S3 bucket and EC2 instances. The final product was a precise TIFF-format picture of the view from the satellite passing the Ground Station antenna. The resolution was about 1 megapixel per kilometer.

Nordics Customer Reception

The evening ended with the pleasant and well-organised Nordics Customer Reception event at the Barrymore. Solita was one of the sponsors of the event. From the terrace we had a great view towards the Encore hotel:
