Turbulent times in security

We are living in very turbulent times: COVID-19 is still among us, and at the same time we are facing a geopolitical crisis in Europe not seen since the Second World War. You can, and should, prepare for the changed circumstances by getting the basics in order.

On top of the usual actors looking for financial gain, state-backed actors are now likely to activate their campaigns against services critical to society. This extends beyond the crisis zone; we have, for example, already seen denial-of-service attacks against banks. It is likely that various ransomware campaigns and data wipers will also be seen in Western countries, targeting utility providers, telecommunications, media, transportation and financial institutions, and their supply chains.

So what should be done differently during these times to secure our business and environments? Often an old trick is better than a bagful of new ones, meaning that getting the basics right should always be the starting point. There are no shortcuts to securing systems, and there is no single magic box that can be deployed to fix everything.

Business continuity and recovery plans

Make sure your business continuity plan and recovery plan are available and have been revised. Also require recovery plans from your service providers. Make sure that roles and responsibilities are clearly defined and that everyone knows the decision-making tree. Check that contact information is up to date and that your service providers and partners have your correct contact details. It is also a good idea to rehearse cyberattack scenarios with your internal and external stakeholders to spot potential pitfalls in your plan in advance.

Know what you have out there!

How certain are you that your CMDB is 100% up to date? When did you last check how your DNS records are configured? Do you really know which services are visible to the internet? Do you know which software and versions you are running in your public services? These are the same questions malicious actors go through when gathering information on where and how to attack. This information is available on the internet for anyone to find, and all organizations should use it for their own protection as well. There are tools and services (such as Solita WhiteHat) available for running reconnaissance checks against your environment; use them, or get a partner to help you.
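
As a small illustration of the attacker's viewpoint, and by no means a replacement for a proper attack-surface scan such as the services mentioned above, the following Python sketch checks what a couple of hostnames resolve to and what the Server header of their public endpoints reveals. The hostnames are placeholders.

```python
# Minimal external-visibility check: resolve hostnames and peek at the Server
# header each public endpoint advertises. Hostnames below are placeholders --
# replace example.com with your own domains.
import socket
import urllib.request

HOSTS = ["www.example.com", "api.example.com"]  # hypothetical hostnames

for host in HOSTS:
    try:
        addresses = sorted({info[4][0] for info in socket.getaddrinfo(host, 443)})
        print(f"{host} resolves to {', '.join(addresses)}")
    except socket.gaierror as err:
        print(f"{host}: DNS lookup failed ({err})")
        continue
    try:
        # A HEAD request is enough to see what the server reveals about itself.
        req = urllib.request.Request(f"https://{host}", method="HEAD")
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(f"  Server header: {resp.headers.get('Server', 'not disclosed')}")
    except Exception as err:
        print(f"  HTTPS check failed: {err}")
```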

Keep your software and systems updated

This is something every one of us hears over and over again, but still: it is of utmost importance to keep software up to date! Every piece of software contains vulnerabilities and bugs that can be exploited. Vendors nowadays patch vulnerabilities brought to their attention rather quickly, so use that to your advantage and apply the patches.

Require multi-factor authentication and support strong passwords

This one is also on every recommendation list, and not for nothing. Almost all services nowadays offer the option to enable MFA, so why not require it? It is easy to set up and adds an extra layer of security for users, protecting against brute forcing and password spraying. It does not replace a good, strong password, though. A rather small thing that helps users create strong passwords and avoid reusing the same password across services is to provide them with a password manager, such as LastPass or 1Password. If you have an SSO service in place, make sure you get the most out of it.

Take backups and exercise recovery

Make sure you are backing up your data and services. Also make sure that backups are stored somewhere other than the production environment, to prevent, for example, ransomware from rendering them useless. Of course, just taking backups is not enough: recovery should be tested periodically (at least yearly) to make sure that it actually works when it is needed.
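
As one possible sketch, assuming an AWS environment and using placeholder names, the snippet below copies a backup archive into an S3 bucket that lives outside the production account:

```python
# Illustrative only: copy a local backup archive to an S3 bucket hosted in a
# separate, non-production account. File and bucket names are placeholders.
import boto3

BACKUP_FILE = "/backups/db-2022-03-01.tar.gz"   # hypothetical backup archive
OFFSITE_BUCKET = "example-offsite-backups"       # bucket in a separate account

s3 = boto3.client("s3")
s3.upload_file(
    Filename=BACKUP_FILE,
    Bucket=OFFSITE_BUCKET,
    Key="db/db-2022-03-01.tar.gz",
    ExtraArgs={"ServerSideEncryption": "AES256"},  # encrypt the copy at rest
)
print("Backup copied outside the production environment")
```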

What if you get hit

One famous CEO once said that there are two types of companies: those that have been hacked and those that don't know they have been hacked. So what should you do if you even suspect that you have been attacked?

Notify authorities

National authorities run CERTs (Computer Emergency Response Teams), which maintain situational awareness and coordinate response actions at the national level. In Finland, for example, it is kyberturvallisuuskeskus.fi, and in Sweden cert.se. So if you suspect a possible data leak or attack, notify the local CERT and, at the same time, file a police report. It is also advisable to contact a service provider who can help you investigate and mitigate the situation. One good place to find a provider of Digital Forensics and Incident Response services is dfir.fi.

Isolate breached targets and change/lock credentials

When you suspect a breach, isolate the suspected targets from the environment. If possible, cut off network access but let the resources keep running; this way you are not destroying potential evidence by turning off the services (shutting down servers, deleting cloud resources). At the same time, lock the credentials suspected of having been used in the breach and change all the passwords.
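
For example, in an AWS environment one way to cut off network access while keeping a suspect instance running is to swap its security groups to an empty quarantine group. The sketch below is illustrative only, and the IDs are placeholders.

```python
# Illustrative sketch: detach a suspect EC2 instance from the network by
# replacing its security groups with a "quarantine" group that has no rules,
# while leaving the instance running for forensics. IDs are placeholders.
import boto3

INSTANCE_ID = "i-0123456789abcdef0"      # hypothetical suspect instance
QUARANTINE_SG = "sg-0123456789abcdef0"   # security group with no inbound/outbound rules

ec2 = boto3.client("ec2")
ec2.modify_instance_attribute(
    InstanceId=INSTANCE_ID,
    Groups=[QUARANTINE_SG],
)
print(f"{INSTANCE_ID} moved to quarantine security group {QUARANTINE_SG}")
```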

Verify logs

Check that you have logs available from the potentially breached systems. In the best case, the logs are already available outside the system in question. If not, back them up to external storage to make sure they cannot be altered or removed by the attacker.

Remember to communicate

Communicate with your stakeholders, and remember your users, customers and the public. Although it may feel difficult to deliver this kind of news, it is much better to be open in the early stages than to get caught with your pants down later on.

To summarise

The threat level is definitely higher due to the circumstances described above, but getting the basics in order helps you react if something happens. Also keep in mind that you do not have to cope with this situation alone. Security service providers have the means and the capacity to support you efficiently. Our teams are always willing to help keep your business and operations secure.

 

In search of XSS vulnerabilities

This is a blog post introducing our Cloud Security team and a few XSS vulnerabilities we have found this year in various open source products.

Our team and what we do

Solita has a lot of security-minded professionals working in various roles. We have a wide range of security know-how, from threat assessment and secure software development to penetration testing. The Cloud Security team is mostly focused on performing security testing on web applications, various APIs, operating systems and infrastructure.

We understand that developing secure software is hard, and we do not wish to point fingers at those who have made mistakes, as it is not an easy task to keep software free of nasty bugs.

You’re welcome to contact us if you feel that you are in need of software security consultation or security testing.

Findings from Open Source products

Many of the targets we’ve performed security testing on during the year of COVID have relied on a variety of open source products. Open source products are often used by many different companies and people, but very few of them do secure code review or penetration testing on these products. In general, a lot of people use open source products but only a few contribute feedback or patches, and this can create a false sense of security. That said, we do think that open source projects used by a lot of companies have a better security posture than most closed source products.

Nonetheless, we’ve managed to find a handful of vulnerabilities from various open source products and as responsible, ethical hackers, we’ve informed the vendors and their software security teams about the issues.

The vulnerabilities we found include, for example, Cross-Site Scripting (XSS), information disclosure, and a payment bypass in one CMS plugin. But we’ll keep the focus of this blog post on the XSS vulnerabilities, since those have been the most common.

What is an XSS vulnerability

Cross-site scripting, abbreviated XSS, is a client-side vulnerability. XSS vulnerabilities allow attackers to inject client-side JavaScript code that gets executed on the victim’s client, usually a browser. Being able to execute JavaScript in a victim’s browser allows attackers, for example, to use the same functionality on the website as the victim by riding on the victim’s session. Requests made through an XSS vulnerability also originate from the victim’s IP address, which allows potential IP blocks to be bypassed.
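
To make the bug class concrete, here is a deliberately minimal sketch in Python/Flask, not taken from any of the products discussed below, showing a reflected XSS and the escaped variant that fixes it:

```python
# Deliberately minimal reflected XSS example for illustration only.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search_vulnerable():
    # User input is placed into the HTML response as-is, so a request like
    # /search?q=<script>alert(1)</script> executes in the visitor's browser.
    q = request.args.get("q", "")
    return f"<h1>Results for {q}</h1>"

@app.route("/search-fixed")
def search_fixed():
    # HTML-escaping the input (or using an auto-escaping template engine)
    # turns the same payload into harmless text.
    q = request.args.get("q", "")
    return f"<h1>Results for {escape(q)}</h1>"
```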

All of the following vulnerabilities have been fixed by the vendors and updated software versions are available.

Apache Airflow

Apache Airflow is an open-source workflow management platform that is used by hundreds of companies such as Adobe, HBO, Palo Alto Networks, Quora, Reddit, Spotify, Tesla and Tinder. During one of our security testing assignments, we found that it was possible to trigger a stored XSS in the chart pages.

There are a lot of things you can configure and set in Apache Airflow. Usually we try to inject payloads into every place we can test, and we include some unique random text in each one so that we can later identify which part was manipulated by a payload and figure out the root cause of the injection.

In Apache Airflow we found that one error page did not construct its error messages properly in certain misconfiguration cases, which caused the error message to execute JavaScript. Interestingly, in another place the error message was constructed correctly and the payload did not trigger there; this just shows how hard it is to identify every endpoint where a piece of functionality is used. Changing your name to an XSS payload does not usually trigger anything in the most common places where your name is shown, such as your profile and posts, so you should always look for the more uncommon places you can visit. These uncommon places are usually not as well validated for input, simply because they are used less and people barely remember they exist.


Timeline

15.04.2020 – The vulnerability was found and reported to Apache

20.04.2020 – Apache acknowledged the vulnerability

10.07.2020 – The bug was fixed in release 1.10.11 and assigned CVE-2020-9485

Grafana

Grafana is an open-source platform for monitoring and observability. It is used by thousands of companies globally, including PayPal, eBay, Solita, and our customers. During one of our security testing assignments, we found that it was possible to trigger a stored XSS via the annotation popup functionality.

The product sports all sorts of dashboards and graphs for visualizing the data. One feature of the graphs is the ability to include annotations for the viewing pleasure of other users.

While playing around with some basic content injection payloads, we noticed that it was possible to inject HTML tags into the annotations. These tags would then render when the annotation was viewed. One such tag was the image tag “<img>”, which can be used for running JavaScript via event handlers such as “onerror”. Unfortunately for us, the system had some kind of filtering in place and our malicious JavaScript payloads did not make it through.

While inspecting the DOM and the source of the page, looking for a way to bypass the filtering, I noticed that AngularJS templates were in use, as there were references to ‘ng-app’. Knowing that naive use of AngularJS can lead to so-called client-side template injection, I decided to try out a simple payload with an arithmetic operation in it, ‘{{ 7 * 191 }}’. To my surprise, the payload was evaluated and the annotation showed the result of the calculation: 1337!

A mere calculator was not sufficient, though; I wanted to pop that tasty alert window with this newly found injection. There used to be a protective expression sandbox mechanism in AngularJS, but its developers probably grew tired of the never-ending cat-and-mouse game with the sandbox escapes hackers kept coming up with, so they removed the sandbox altogether in version 1.6. As a result, the payload ‘{{constructor.constructor(‘alert(1)’)()}}’ did the trick, and at this point I decided to write a report for the Grafana security team.

Timeline

16.04.2020 – The vulnerability was found and reported to the security team at Grafana

17.04.2020 – Grafana security team acknowledged the vulnerability

22.04.2020 – CVE-2020-12052 was assigned to the vulnerability

23.04.2020 – A fix for the vulnerability was released in Grafana version 6.7.3

Graylog

Graylog is an open-source log management platform that is used by hundreds of companies, including LinkedIn, Tieto, SAP, Lockheed Martin, and Solita. During one of our security assignments, we were able to find two stored XSS vulnerabilities.

During this assessment we had to dig deep into Graylog’s documentation to see what kinds of things were doable in the UI and how to properly make the magic happen. At one point we found out that it is possible to generate links from the data Graylog receives, which is a classic way to gain XSS: inject a payload such as ‘javascript:alert(1)’ into the ‘href’ attribute. This does require the user to interact with the generated link by clicking it, but the JavaScript still executes, and that does not make the effect of the execution any less dangerous.
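
As an aside, a common defence against this class of link injection (a general sketch, not Graylog’s actual fix) is to allow only an explicit set of URL schemes before rendering user-supplied links into an href attribute:

```python
# General mitigation sketch: only allow link schemes from an allowlist before
# rendering a user-supplied URL into an href attribute.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def safe_href(url: str) -> str:
    """Return the URL if its scheme is allowed, otherwise a harmless '#'."""
    scheme = urlparse(url.strip()).scheme.lower()
    return url if scheme in ALLOWED_SCHEMES else "#"

print(safe_href("https://example.com"))   # kept as-is
print(safe_href("javascript:alert(1)"))   # replaced with '#'
```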

Mika went through Graylog’s documentation and found a feature that allowed one to construct links from the data sources, but he couldn’t figure out right away how to generate the data needed to construct such a link. He told me about the link construction and about his gut feeling that there would most likely be an XSS vector in there. After a brief moment of tinkering with the data creation, Mika decided to take a small coffee break, mostly because doughnuts were being served. During that break I managed to find a way to generate the links correctly and trigger the vulnerability, thus finding a stored XSS in Graylog.



Graylog also supports content packs, and the UI provides a convenient way to install third-party content by importing a JSON file. The content pack developer can provide useful information in the JSON, such as the name, description, vendor, and URL, amongst other things. That last attribute hinted at what was coming: would it be possible to generate a link from that URL attribute?

Below you can see a snippet of the JSON that was crafted for testing the attack vector.
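
The fragment below is a simplified, illustrative sketch rather than the exact payload we used; the field names are the ones mentioned above and the values are placeholders.

```json
{
  "name": "Totally Legit Content Pack",
  "description": "Adds a couple of example dashboards",
  "vendor": "Example Vendor",
  "url": "javascript:alert(document.domain)"
}
```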


Once our malicious content pack was imported into the system, we got a beautiful (and not suspicious at all) link in the UI that executed our JavaScript payload when clicked.


As you can also see from both of the screenshots, we were able to capture the user’s session information with our payloads, because it was stored in the browser’s LocalStorage. Storing sensitive information such as the user’s session in the browser’s LocalStorage is not always such a great idea, as LocalStorage is meant to be readable by JavaScript. Session details in LocalStorage combined with an XSS vulnerability can lead to a nasty session hijacking situation.

Timeline

05.05.2020 – The vulnerabilities were found and reported to the Graylog security team

07.05.2020 – The vendor confirmed that the vulnerabilities are being investigated

19.05.2020 – A follow-up query was sent to the vendor and they confirmed that fixes are being worked on

20.05.2020 – Fixes were released in Graylog versions 3.2.5 and 3.3.

 

References / Links

Read more about XSS:

https://owasp.org/www-community/attacks/xss/

https://portswigger.net/web-security/cross-site-scripting

 

AngularJS 1.6 sandbox removal:

http://blog.angularjs.org/2016/09/angular-16-expression-sandbox-removal.html

 

Fixes / releases:

https://airflow.apache.org/docs/stable/changelog.html

https://community.grafana.com/t/release-notes-v6-7-x/27119

https://www.graylog.org/post/announcing-graylog-v3-3

 

AWS re:Inforce 2019 review

Looking at the new releases, the sessions I attended, a comparison to re:Invent, and the overall value of the first-ever re:Inforce.

This year I attended re:Inforce, the first incarnation of the AWS conference specializing in security. The conference was a two-day event at the Boston Convention & Exhibition Center.

New releases

Amazon VPC Traffic Mirroring was probably the biggest new release at the event, but it doesn’t touch my projects much. However, if you have systems for analyzing network traffic, it could be useful.

AWS Security Hub and AWS Control Tower are now generally available. I haven’t tested these much yet, and they were already announced at re:Invent.

Amazon EC2 Instance Connect was in truth released after re:Inforce, but it could well have been released during the conference. It is a new way to connect to instances in case you don’t want to use Session Manager.

Attended sessions

Keynote by Stephen E. Schmidt, VP and CISO of AWS

The keynote speakers were great and overall the whole keynote was good. I would have liked to see more new releases; as it was, the main content was the importance of security and existing solutions.

GRC346 – DNS governance in multi-account and hybrid environments

Builder sessions were already available at last year’s re:Invent, but I didn’t get into any of them. DNS is not really my main focus area, but it is an interesting topic nonetheless. Still, I will probably leave this kind of setup to network specialists.

The setup was a little bit of a letdown, because the room was quite noisy with multiple builder sessions going on at the same time, and the participants didn’t do much themselves. On the other hand, it was very easy to ask questions, and there was good discussion between the AWS architect and the participants.

SEJ001 – Security Jam

The hackathon/jam was again a highlight of the event. Sadly, I arrived only just in time and hence didn’t have much time to talk with the team beforehand. The duration was actually 3.5 hours, which felt a little bit short.

We had a three-person team, but we didn’t really achieve much synergy. At first, we decided that everyone would begin as a solo player and ask for help when needed. During the whole jam I worked with one other team member for only about an hour, and the third one worked solo the whole time. We did call out some questions and answers to each other from time to time, but there was very little teamwork.

One lesson that I relearned was to double-check everything. In one task, a private endpoint was needed for API Gateway. In security jams, some of the setup is already done for you, so when I checked the list of private endpoints and there was one, I assumed it was the correct one. But it was for AWS Systems Manager, and I would therefore have needed to add a new one.
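
A quick check like the following would have revealed the mix-up. This is just an illustrative boto3 snippet, not something from the jam itself, and the region is an assumption.

```python
# List the VPC endpoints in a region and show which AWS service each one
# actually points to. Purely illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

for endpoint in ec2.describe_vpc_endpoints()["VpcEndpoints"]:
    print(
        endpoint["VpcEndpointId"],
        endpoint["ServiceName"],        # e.g. com.amazonaws.us-east-1.execute-api
        endpoint["VpcEndpointType"],    # Interface or Gateway
    )
```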

AWS has been improving the platform so that companies can request access to jams led either by an AWS architect or by the company’s own personnel. I am going to look into this and maybe hold an internal jam, but the cost was unclear, and the number of interested colleagues was low the last time I tried to hold a GameDay.

Other sessions

I also attended one lecture-type session, one workshop and a couple of chalk talks. To keep the length of this post manageable, I will skip them here, but feel free to ask about them.

Other activities

Security Hub (Expo)

Many of the partner solutions were, of course, about web application firewalls and the like, which aren’t the main interest for a data developer/architect. But there were also companies focusing on data masking, encryption and audit trails. I have received many follow-up emails and phone calls after the event. Luckily, many of them are interesting, even though I might not have a use case right away.

Networking

There were multiple unofficial after-parties on Tuesday evening, but I only attended the Nordic gathering sponsored by AWS. It was quite a small gathering, but that made discussion easier. For most of the evening there were two people from Sweden and one from Finland at my table, but a couple of others visited too.

The closing reception was really informal, with food, drinks and games inside the conference center and outside on a lawn. It was very nice, but not the best setup for me to network. I did exchange a couple of words with the people I had met at the Nordic gathering.

Comparison to re:Invent

From my point of view, one major reason for attending re:Invent is to be able to hear about and question AWS on brand-new services that only selected companies have been able to test under strict NDAs. Even if they aren’t immediately available in Europe, you learn the basic capabilities of the service and can plan for the future. And usually the technical people giving the workshops and sessions offer much more honest opinions than the marketing material released about the service. This was mostly missing from re:Inforce, because only Amazon VPC Traffic Mirroring was a completely new service.

The good thing was that having everything in one place made logistics much easier, and there wasn’t as much moving around. The expo was also much smaller, and interesting companies were easier to find.

re:Invent has four main days and two shorter ones. Compared to that, the two days of re:Inforce is quite a short time. You don’t become familiar with the location, which would make moving around faster, and you don’t have time to reorganize your calendar if you would like to learn more about a certain topic. Also, from a traveling perspective, the travel-to-conference ratio is much worse with re:Inforce.

Summary

My first feeling after the conference was that it was okay, but it has risen to a good level after thinking about it more objectively. The first impression came mostly because I was automatically comparing re:Inforce to re:Invent, and in that comparison re:Inforce is lacking in multiple areas. But if we look at re:Inforce on its own, there was quite a lot to learn and plenty of new AWS users to meet. And for some, the shorter length and cheaper tickets might make it possible to attend when re:Invent isn’t an option.

If I attend again, I should keep more free time in my calendar and participate in the background events, like the ongoing security jam and capture the flag. I should also plan more beforehand because, with the conference lasting only two days, there really isn’t much time to reorganize the schedule during the event.

The next re:Inforce will be in Houston, but the feedback form had a question about holding re:Inforce outside the USA. So there might be hope for one in Europe at some point in the future.

Additional reading

Got a laptop case with badges from AWS booths.