Attackers could use Internet route hijacking to get fraudulent HTTPS certificates

Inherent insecurity in the routing protocol that links networks on the Internet poses a direct threat to the infrastructure that secures communications between users and websites.

The Border Gateway Protocol (BGP), which network operators use to exchange information about which Internet Protocol (IP) addresses they own and how traffic to them should be routed, was designed at a time when the Internet was small and operators trusted each other implicitly; it includes no form of validation.

If one operator, or autonomous system (AS), advertises routes for a block of IP addresses that it doesn’t own and its upstream provider passes on the information to others, the traffic intended for those addresses might get sent to the rogue operator.
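To make the route selection concrete, here is a toy sketch of one slice of BGP best-path selection (shortest AS path wins), using made-up prefixes and AS numbers; real routers apply many more tie-breakers.

```python
from ipaddress import ip_address, ip_network

# Hypothetical view from a router near the rogue AS. Both origins
# announce the same prefix, so AS path length decides the winner here.
routes = [
    # (prefix, AS path as seen from this router)
    (ip_network("198.51.100.0/24"), [64500, 64501, 64502, 64512]),  # real origin, far away
    (ip_network("198.51.100.0/24"), [64666]),                       # rogue origin, one hop away
]

def best_origin(dest: str) -> int:
    """Among announcements matching the destination, prefer the one
    with the shortest AS path and return its origin AS (last hop)."""
    addr = ip_address(dest)
    matches = [(net, path) for net, path in routes if addr in net]
    return min(matches, key=lambda m: len(m[1]))[1][-1]

print(best_origin("198.51.100.7"))  # 64666: the nearby rogue announcement wins locally
```

A router several hops further away would see a longer path to the rogue AS than to the real origin and would keep routing normally, which is what can keep such a hijack regional.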

Such incidents are called BGP hijacking when done intentionally by a malicious actor, or route leaks when caused by human error or misconfiguration, and they are increasingly common. Their impact can be local or global, depending on the circumstances.

While security best practices exist that could prevent such incidents, not all network operators around the world implement them. The networks that lack these safeguards are also the ones most likely to have vulnerable border gateway routers that attackers could compromise.

At the Black Hat security conference in Las Vegas on Wednesday, two talks were dedicated to BGP hijacking, highlighting the topic's importance to the security community. In one of them, Russian security researcher Artyom Gavrichenkov showed how attackers could perform a BGP hijacking attack that affects only a small geographic region but could help them trick a certificate authority into issuing a valid certificate for a domain name they don't own.

For this to work, the attackers would need to pick a target website whose IP address is part of an AS located in a different region of the world. For example, attackers in Asia could decide to target Facebook. They would then need to pick a local certificate authority (CA) close to the rogue autonomous system from which the attack will originate.

The goal of the attack is to make the certificate authority's ISP believe that Facebook's IP address belongs to the rogue AS instead of Facebook's real AS. Picking a faraway target lowers the chances that the real AS will notice the hijacking: only a small portion of the Internet is led to believe Facebook is part of a different network.

Obtaining a TLS certificate for a domain involves proving that the person requesting the certificate controls the domain name. This check can be automated in several ways: by uploading a special CA-provided page to the server where the domain is hosted so the CA can verify it exists, by responding to an email sent to the address listed in the domain's WHOIS record, or by creating a Domain Name System (DNS) TXT record for the domain. Passing any one of these checks is enough to confirm ownership.

The page-on-the-server check is the easiest to pass using a BGP hijacking attack. The attacker would need to set up a Web server, create the page, and then advertise rogue routes for Facebook's IP address. Those routes would propagate regionally, affecting the certificate authority and tricking it into believing the page was actually served from Facebook's servers. The CA would then issue the SSL certificate.
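To see why this check is hijackable, here is a minimal sketch of a CA-side HTTP validation, with a hypothetical token and challenge path; real CAs define their own challenge locations and procedures.

```python
import urllib.request

def http_domain_check(domain: str, token: str) -> bool:
    """Fetch the challenge file from the domain and compare its body
    to the token the CA issued. The path is a made-up example."""
    url = f"http://{domain}/.well-known/ca-challenge/{token}.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode().strip()
    except OSError:
        return False  # unreachable server or missing file: validation fails
    return body == token

# The weakness the talk describes: the CA fetches this URL over ordinary
# Internet routing, so whoever attracts traffic for the domain's IP at
# that moment (e.g. via a regional BGP hijack) can serve the expected
# file and pass the check.
```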

The fraudulent, but nevertheless valid, digital certificate could then be used to launch man-in-the-middle attacks against Facebook users anywhere in the world, not just in the region where the BGP hijacking took place.

The current digital certificate infrastructure that underpins secure communications on the Web doesn’t take routing flaws into consideration, Gavrichenkov said. And because it is built into everything, from desktop computers to embedded devices and mobile phones, it can’t be easily changed, he said.

The underlying problem lies with the Internet routing protocol and the lack of adoption of recommended security practices. However, BGP hijacking has been a known issue for a very long time, and the researcher believes it is unlikely to be fixed anytime soon.

Efforts like the Certificate Transparency framework proposed by Google, or the certificate pinning mechanisms implemented in some browsers, could help detect when rogue certificates are issued, but these are workarounds rather than a fix, since they are not yet widely adopted.

via Attackers could use Internet route hijacking to get fraudulent HTTPS certificates | PCWorld.

HTTPS-crippling attack threatens tens of thousands of Web and mail servers

Tens of thousands of HTTPS-protected websites, mail servers, and other widely used Internet services are vulnerable to a new attack that lets eavesdroppers read and modify data passing through encrypted connections, a team of computer scientists has found.

The vulnerability affects an estimated 8.4 percent of the top one million websites and a slightly bigger percentage of mail servers populating the IPv4 address space, the researchers said. The threat stems from a flaw in the Transport Layer Security (TLS) protocol that websites and mail servers use to establish encrypted connections with end users. The new attack, which its creators have dubbed Logjam, can be exploited against a subset of servers that support the widely used Diffie-Hellman key exchange, which allows two parties that have never met before to negotiate a secret key even though they’re communicating over an unsecured, public channel.

The weakness is the result of export restrictions the US government mandated in the 1990s on US developers who wanted their software to be used abroad. The regime was established by the Clinton administration so the FBI and other agencies could break the encryption used by foreign entities. Attackers who can monitor the connection between an end user and a Diffie-Hellman-enabled server that supports the export cipher can inject a special payload into the traffic that downgrades the encrypted connection to extremely weak 512-bit key material. Using data precomputed for those weak parameters, the attackers can then deduce the encryption key negotiated between the two parties.
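For context, here is a toy sketch of the Diffie-Hellman exchange Logjam targets, with deliberately tiny numbers; real deployments use primes of 1,024 bits or more, and the attack works precisely because 512-bit "export-grade" primes are small enough for the discrete-logarithm precomputation to be feasible.

```python
import secrets

# Toy public parameters: prime modulus p and generator g. These are far
# too small for real use and serve only to show the arithmetic.
p = 0xFFFFFFFB  # the prime 2**32 - 5
g = 5

# Each side picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)  # client's public value
B = pow(g, b, p)  # server's public value

# Both sides derive the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)

# Logjam's insight: an eavesdropper who can take discrete logs modulo p
# (feasible after heavy precomputation when p is only 512 bits) recovers
# a private exponent from a public value, and with it the shared secret.
```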

“Logjam shows us once again why it’s a terrible idea to deliberately weaken cryptography, as the FBI and some in law enforcement are now calling for,” J. Alex Halderman, one of the scientists behind the research, wrote in an e-mail to Ars. “That’s exactly what the US did in the 1990s with crypto export restrictions, and today that backdoor is wide open, threatening the security of a large part of the Web.”

Read More: HTTPS-crippling attack threatens tens of thousands of Web and mail servers | Ars Technica.

New Firefox features will eventually be limited to secure websites only

Mozilla is planning to gradually favor HTTPS (HTTP Secure) connections over non-secure HTTP connections by making some new features in its Firefox browser available only to secure sites.

The browser developer decided after a discussion on its community mailing list that it will set a date after which all new features will be available only to secure websites, wrote Firefox security lead Richard Barnes in a blog post. Mozilla also plans to gradually phase out access to browser features for non-secure websites, particularly features that could present risks to users’ security and privacy, he added.

The community still has to agree on which new features will be blocked for non-secure sites. Firefox users would, for instance, still be able to view non-secure websites, but those sites would not get access to new features, such as new hardware capabilities, Barnes said.

“Removing features from the non-secure web will likely cause some sites to break. So we will have to monitor the degree of breakage and balance it with the security benefit,” he said, adding that Mozilla is already considering less severe restrictions for non-secure websites to find the right balance. Firefox already blocks persistent permissions for non-secure sites to access the camera and microphone, for example.

Mozilla’s move follows the introduction of “opportunistic encryption” to Firefox last month, which provides encryption for legacy content that would otherwise have been unencrypted. While that does not protect from man-in-the-middle attacks like HTTPS does, it helps against passive eavesdropping and was welcomed by security experts.

via New Firefox features will eventually be limited to secure websites only | PCWorld.

Firefox 37 supports easier encryption option than HTTPS

The latest version of Firefox has a new security feature that aims to put a band-aid over unencrypted website connections. Firefox 37 rolled out earlier this week with support for opportunistic encryption, or OE. You can consider OE a sort of halfway point between no encryption (known as clear text) and full HTTPS encryption, one that is simpler to implement.

For users, this means you get at least a modicum of protection from passive surveillance (such as NSA-style data slurping) when sites support OE. It will not, however, protect you against an active man-in-the-middle attack as HTTPS does, according to Mozilla developer Patrick McManus, who explained Firefox’s OE rollout on his personal blog.

Unlike HTTPS, OE uses an unauthenticated encrypted connection. In other words, the site doesn’t need a signed security certificate from a trusted issuer, as it does with HTTPS. Signed security certificates are a key component of the HTTPS security scheme and are what browsers use to trust that they are connecting to the right website.
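As a rough analogy, the Python snippet below opens a TLS connection without verifying the server's certificate: the traffic is encrypted, so a passive eavesdropper sees only ciphertext, but nothing proves the endpoint's identity. This is an illustration of the trade-off, not how Firefox implements OE internally.

```python
import socket
import ssl

# Encryption without authentication: disable certificate verification,
# mirroring what an OE-style connection effectively guarantees. An
# active man-in-the-middle could substitute their own key undetected.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw) as tls:
        print("cipher in use:", tls.cipher())  # encrypted, but unauthenticated
```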

The impact on you: Firefox support is only half of the equation for opportunistic encryption. Websites will still have to enable support on their end for the feature to work. Site owners can get up and running with OE in just two steps, according to McManus, but that still requires running an HTTP/2 or SPDY server, which, as Ars Technica points out, may not be so simple. So while OE support in Firefox is a good step for users, it will only start to matter once site owners begin to support it.
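The signal involved is the Alt-Svc response header, which McManus describes as telling Firefox that the same content is also reachable over an encrypted HTTP/2 channel. As a quick way to see whether a site advertises it (the domain below is a placeholder):

```python
import urllib.request

# Fetch a page over plain HTTP and look for an Alt-Svc header, the
# hint Firefox uses to upgrade the connection opportunistically.
resp = urllib.request.urlopen("http://example.com/", timeout=10)
alt_svc = resp.headers.get("Alt-Svc")
if alt_svc:
    print("OE candidate advertised:", alt_svc)  # e.g. h2=":443"
else:
    print("No Alt-Svc header; connections stay in clear text.")
```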

More than OE

Beyond support for OE, the latest build of Firefox also adds an improved way to protect against bad security certificates. The new feature, called OneCRL, lets Mozilla push lists of revoked certificates to the browser instead of depending on an online database.

The new Firefox also adds HTTPS to Bing when you use Microsoft’s search engine from the browser’s built-in search window.

via Firefox 37 supports easier encryption option than HTTPS | PCWorld.

The Pirate Bay’s use of CloudFlare means ISP blocks are ineffective

Authorities seemed rather confident that they’d shut down popular torrent-sharing site The Pirate Bay (TPB) once and for all following a raid on its servers late last year. The resilient site resurfaced just a month later, however, and is now even easier to access thanks to a partnership with CloudFlare.

The UK’s High Court issued a series of orders in 2012 blocking TPB and other torrent and streaming portals. Since then, a number of major ISPs in the region have been required to enforce the orders, which meant determined individuals had to jump through hoops to regain access to the site.

That no longer appears to be the case.

According to the operator of a Pirate Bay proxy (via TorrentFreak), what is likely happening is that when HTTPS Strict is enabled on CloudFlare, it hides certain identifying information in a request. When an ISP checks the header to see whether the site is on the banned list, it gets nothing and treats the site as unbanned.

The operator adds that any site that uses CloudFlare, has a properly configured and signed SSL certificate, and enables HTTPS Strict should be good to go. What’s more, CloudFlare adds an additional layer of protection because the true location of the server is hidden behind CloudFlare.
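To make the mechanism concrete, the sketch below shows why header-based filtering fails once TLS is in play; the hostname is a placeholder. Note that the hostname does still appear once in the clear, in the TLS handshake's SNI field, so this defeats header inspection specifically, not every possible filter.

```python
import socket
import ssl

HOST = "example.com"  # placeholder for any CloudFlare-fronted site

# Over plain HTTP these bytes cross the wire verbatim, so an ISP
# middlebox can match the Host header against its blocklist.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n\r\n"
).encode()

# Over HTTPS the same request travels inside the TLS tunnel: the
# middlebox sees ciphertext plus an IP shared by many CloudFlare
# sites, not the HTTP headers it would need for a header-based block.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(request)
        print(tls.recv(100))
```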

Other benefits of CloudFlare include protection against DDoS attacks and a reduced bandwidth burden on origin servers.

The partnership means that those who have had access to the site blocked are suddenly finding they can visit without incident, a fact that is steadily increasing TPB’s traffic.

via The Pirate Bay’s use of CloudFlare means ISP blocks are ineffective – TechSpot.