Yesterday’s Internet Outage in Parts of the U.S. and Canada You Didn’t Hear About



How a Tiny Error Shut Off the Internet for Parts of the US and Canada

Lily Hay Newman

[Photo: Joe Raedle]

A year ago, a DDoS attack caused internet outages around the US by targeting the internet-infrastructure company Dyn, which provides Domain Name System services to look up web servers. Monday saw a nationwide series of outages as well, but with a more pedestrian cause: a misconfiguration at Level 3, an internet backbone company—and enterprise ISP—that underpins other big networks. Network analysts say that the misconfiguration was a routing issue that created a ripple effect, causing problems for companies like Comcast, Spectrum, Verizon, Cox, and RCN across the country.

Level 3, whose acquisition by CenturyLink closed recently, said in a statement to WIRED that it resolved the issue in about 90 minutes. “Our network experienced a service disruption affecting some customers with IP-based services,” the company said. “The disruption was caused by a configuration error.” Comcast users started reporting internet outages around the time of the Level 3 outages on Monday, but the company said that it was monitoring “an external network issue” and not a problem with its own infrastructure. RCN confirmed that it had some network problems on Monday because of Level 3. The company said it had restored RCN service by rerouting traffic to a different backbone.

[Map: Downdetector.com]

The misconfiguration was a “route leak,” according to Roland Dobbins, a principal engineer at the DDoS and network-security firm Arbor Networks, which monitors global internet operations. ISPs use “Autonomous Systems,” also known as ASes, to keep track of what IP addresses are on which networks, and route packets of data between them. They use the Border Gateway Protocol (BGP) to establish and communicate routes. For example, packets can route between networks A and B, but network A can also route packets to network C through network B, and so on. This is how internet service providers interoperate to let you browse the whole internet, not just the IP addresses on their own networks.
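
To make that transit idea concrete, here is a minimal, purely illustrative Python sketch. It is not real BGP: the AS names, peerings, and prefixes are invented, and the code simply floods announcements hop by hop until every network has a path to every prefix, including prefixes on networks it never peers with directly.

```python
# Toy model of transitive routing between autonomous systems (ASes).
# Illustrative only: real BGP involves policy, timers, and far more state.
from collections import deque

# Hypothetical topology: A peers with B, and B peers with C.
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
origin_prefixes = {"A": ["10.0.0.0/8"], "B": ["172.16.0.0/12"], "C": ["192.168.0.0/16"]}

def propagate(neighbors, origin_prefixes):
    """Flood announcements until every AS knows an AS path to every prefix."""
    # routes[as_name][prefix] = list of ASes to traverse to reach the prefix
    routes = {asn: {p: [asn] for p in prefixes}
              for asn, prefixes in origin_prefixes.items()}
    queue = deque((asn, p) for asn, prefixes in origin_prefixes.items() for p in prefixes)
    while queue:
        asn, prefix = queue.popleft()
        for nbr in neighbors[asn]:
            path = routes[asn][prefix]
            if nbr in path:          # loosely mirrors BGP's AS-path loop check
                continue
            candidate = [nbr] + path
            known = routes.setdefault(nbr, {}).get(prefix)
            if known is None or len(candidate) < len(known):
                routes[nbr][prefix] = candidate
                queue.append((nbr, prefix))
    return routes

routes = propagate(neighbors, origin_prefixes)
# A reaches C's prefix only through B, even though A and C never peer directly.
print(routes["A"]["192.168.0.0/16"])   # ['A', 'B', 'C']
```

In this toy model the shortest AS path simply wins, which is a crude stand-in for the many policy knobs real networks apply when choosing routes.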

In a “route leak,” an AS, or multiple ASes, issue incorrect information about the IP addresses on their network, which causes inefficient routing and failures for both the originating ISP and other ISPs trying to route traffic through. Think of it like a series of street signs that help keep traffic flowing in the right directions. If some of them are mislabeled or point the wrong way, assorted chaos can ensue.
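
One common way (though by no means the only one) a leaked announcement diverts traffic is through longest-prefix-match forwarding: a wrongly announced, more-specific route wins over the legitimate, broader one. The sketch below uses Python's standard ipaddress module and made-up documentation prefixes purely to show the effect.

```python
# Illustrative sketch (hypothetical prefixes and AS names) of why a leaked
# announcement misdirects traffic: routers forward on the longest matching
# prefix, so a wrongly announced, more-specific route beats the real one.
import ipaddress

def best_route(table, destination):
    """Pick the longest-prefix match for a destination address."""
    dest = ipaddress.ip_address(destination)
    matches = [(net, nexthop) for net, nexthop in table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen, default=(None, None))

legitimate = [(ipaddress.ip_network("198.51.100.0/24"), "AS-owner")]
print(best_route(legitimate, "198.51.100.10")[1])        # AS-owner

# A neighboring AS accidentally announces a more specific /25 it does not own.
leaked = legitimate + [(ipaddress.ip_network("198.51.100.0/25"), "AS-leaker")]
print(best_route(leaked, "198.51.100.10")[1])            # AS-leaker (traffic diverted)
```

Once the leaked /25 is in the table, the longest match wins and traffic follows the wrong next hop.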

Route leaks can be malicious, sometimes called “route hijacks” or “BGP hijacks,” but Monday’s incident seems to have been caused by a simple mistake that ballooned to have national impact. Large outages caused by accidental route leaks have cropped up before.

“Folks are looking to tweak routing policies, and make mistakes,” Arbor Networks’ Dobbins says. The problem could have come as CenturyLink works to integrate the Level 3 network or could have stemmed from typical traffic engineering and efficiency work.

Internet outages of all sizes caused by route leaks have occurred occasionally, but consistently, for decades. ISPs attempt to minimize them using “route filters” that check the IP routes their peers and customers intend to use to send and receive packets and attempt to catch any problematic plans. But these filters are difficult to maintain on the scale of the modern internet and can have their own mistakes.
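
As a rough illustration of what such a filter does, here is a hypothetical sketch that accepts only announcements falling inside the prefixes a customer is expected to originate and rejects everything else, including suspiciously specific routes. The customer and prefixes are invented, and real-world filters are far more elaborate.

```python
# Hypothetical route filter sketch: check each announced prefix against the
# set this (made-up) customer is authorized to originate, and drop the rest.
import ipaddress

# Prefixes this customer is expected to announce (illustrative only).
allowed = [ipaddress.ip_network("203.0.113.0/24")]

def filter_announcements(announced, allowed, max_prefixlen=24):
    accepted, rejected = [], []
    for prefix in map(ipaddress.ip_network, announced):
        ok = (any(prefix.subnet_of(net) for net in allowed)
              and prefix.prefixlen <= max_prefixlen)
        (accepted if ok else rejected).append(str(prefix))
    return accepted, rejected

announced = ["203.0.113.0/24", "198.51.100.0/24", "203.0.113.0/25"]
accepted, rejected = filter_announcements(announced, allowed)
print("accepted:", accepted)   # ['203.0.113.0/24']
print("rejected:", rejected)   # ['198.51.100.0/24', '203.0.113.0/25']
```

The maintenance burden the analysts describe comes from keeping lists like this accurate for thousands of customers and peers as networks change.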

Monday’s outages reinforce how precarious connectivity really is, and how certain aspects of the internet’s architecture—offering flexibility and ease-of-use—can introduce instability into what has become a vital service.

FCC To Propose Strong Net Neutrality Rules



Rule Would Ban Practice Known as Paid Prioritization, Say Sources

[Photo: U.S. Federal Communications Commission Chairman Tom Wheeler]

In an extraordinary turn of events, the U.S. Federal Communications Commission appears set to implement strong new rules later this month to enforce Net Neutrality on the Internet. If the new rules are implemented, they will have major favorable implications for future global Internet policy at the International Telecommunication Union in Geneva, Switzerland. Put simply, all traffic on the Internet would be treated equally and fairly, one of the Internet's founding principles going back to Vint Cerf and the other architects of its core protocols in the 1970s, and to Sir Tim Berners-Lee's invention of the World Wide Web in 1989. The same principle already governs voice telecommunications the world over. The current problem has been the preference of large corporate Internet Service Providers (ISPs) to charge for priority access, or "paid prioritization." The potential for abuse by these corporations is obvious. Comcast, one of the largest "carriers with content," has been cited for its potential to prioritize its own NBC content over competitors' content if Net Neutrality were not enforced. Netflix had already capitulated to Comcast and entered into what is known as a paid "peering agreement" to ensure priority delivery of Netflix streaming content. If the FCC's Title II rules are implemented, the Comcast/Netflix agreement would likely become null and void.

 

REBLOGGED from The Wall Street Journal

By GAUTHAM NAGESH
Updated Feb. 2, 2015 4:18 p.m. ET

Federal Communications Commission Chairman Tom Wheeler intends to seek a significant expansion of his agency’s authority to regulate mobile and fixed broadband providers, a move that would fully embrace the principle known as “net neutrality.”

According to multiple people familiar with the agency’s plan, Mr. Wheeler intends to change the way both mobile and fixed broadband firms are regulated. Rather than being lightly regulated information services, they would become like telecommunications companies, which would subject them to greater regulation on everything from pricing to how they deploy their networks.

A key element of the rule would be a ban on broadband providers blocking, slowing down or speeding up specific websites in exchange for payment, a practice known as paid prioritization, these people say.

Mr. Wheeler’s expected proposal tracks closely with President Barack Obama’s Nov. 10 statement, in which he called for the “strongest possible rules” to protect net neutrality, the principle that all Internet traffic should be treated equally. That represents a major shift from the chairman’s initial plan, which would have allowed some paid prioritization.

In his statement, the president called for Mr. Wheeler to classify broadband providers as common carriers under Title II of the Communications Act, a move that came after months of campaigning by activists, Web startups and others.

The proposal would also give the FCC the authority to regulate deals on the back-end portion of the Internet, where broadband providers such as Comcast Corp. and Verizon Communications Inc. pick up traffic from big content companies such as Netflix Inc. and network middlemen like Level 3 Communications Inc. The FCC would decide whether to allow these so-called paid peering deals based on whether it finds them just and reasonable, the standard under Title II.

A federal court struck down the FCC’s most recent set of net neutrality rules in January 2014, sending the issue back to the agency for the third time. Wireless and broadband industry officials have indicated they plan to sue again if the FCC moves ahead with Title II, which they believe would saddle them with outdated regulations and depress investment in upgrading networks.

It remains unclear how the proposed rules will treat other practices besides paid prioritization, such as zero-rated mobile plans that let users access only a small number of apps without hurting their monthly data allowance. The FCC is also expected to exempt broadband providers from the bulk of Title II regulations, in areas including what they charge their customers, through a process known as forbearance.

Mr. Wheeler is expected to circulate his proposal on Thursday, with a vote scheduled for the FCC’s open meeting on Feb. 26. A majority of the FCC’s five commissioners must approve the rules for them to take effect.

Setback for Net Neutrality May Actually Speed Its Adoption



Yesterday, the United States Court of Appeals for the District of Columbia Circuit issued a ruling that was essentially a “technical” setback for the notion that all Internet traffic should be treated equally, better known as Net Neutrality. The ruling now permits giant corporations like Verizon, NBC/Comcast, and Time Warner to charge higher fees to content providers like Netflix, Amazon, and even, potentially, Google. If that sounds bad for consumers, you are right.

This Court decision has even deeper implications, as NBC/Comcast is in the unique position of being both a “carrier” of Internet bits and a “content provider.” This enables Comcast to charge higher fees to content providers for content that competes with NBC. Is that anti-competitive? It sure sounds like it to me.

This decision stems from an earlier decision of the U.S. Federal Communications Commission to maintain a free and open “hands off” policy and not regulate Internet traffic, since regulation is considered evil by Internet purists.

But the effect of this Court ruling may be the greater evil, leading to the conclusion that “common carrier” regulation of the Internet may be the lesser of the two evils, and an inevitable outgrowth of the NSA Internet espionage revelations, the Chinese military Internet espionage revelations, and the “balkanization” of the Internet by foreign governments building protectionist national firewalls and engaging in plain old snooping on your private traffic. It is like what happened to the Summer of Love: the Internet was originally about free love, but before long the whole thing deteriorated into a jungle. That is what we have now. By the simple decision of the FCC to declare the Internet a “common carrier,” a regulated telecommunications infrastructure, corporations would need to implement Net Neutrality and report their Internet traffic policies to the government. For those who hate government regulation, I agree in principle. Sadly, it is the corporations and the NSA that have made this imperative, to ensure transparency, equality, and some level of Internet privacy.

In February of 2013 I wrote on this blog about the problem, and the book Captive Audience: The Telecom Industry and Monopoly Power in the New Gilded Age, by Yale Law School Professor Susan P. Crawford.

Read more: Why Internet Neutrality is so important

Urban Legend of Free Wi-Fi For the Masses: Devil In The Details


I just finished a long chat with one of my longtime Intel colleagues in Oregon. I have the great good fortune to be recognized as an “Intel alumnus,” which allows me to simply pick up the phone and get a free private seminar on pretty much any high-tech market topic. I promise not to bore readers with tedious technical issues, but anyone interested in the emergence and development of the multi-billion-dollar smart mobile market may find this very enlightening.

My friend led the Intel WiMAX effort until it more or less morphed into the current 4G LTE mobile data standard. Aside from catching up on a bunch of “what’s he doing now?” topics, our deep technical conversation was about the potential for so-called White Space WiFi and the many big corporate players in the mix in addition to Intel: Microsoft, Google, Verizon, AT&T, Sprint, and the rest of the mobile telecommunications companies. What I learned from my wireless engineering guru required that I brush up on a lot of what I learned years ago at Mobile Data International about radio spectrum, bandwidth, radio signal propagation, transmit/receive power and contention, data signal computational requirements on both ends, and, finally, current U.S. Federal Communications Commission politics.

Needless to say, the situation is a giant complex hairball.

I began my educational odyssey because of a flurry of stories that appeared this week about an alleged imminent FCC action to authorize use of unlicensed radio spectrum for free metropolitan-scale WiFi. All of the hoopla culminated in a post on the National Public Radio blog in the United States, which at least clarifies the political dimension, if not the technical engineering dimension, of this emerging urban legend: “Viral Story About Free WiFi Spotlights Hidden Policy War.” In the NPR post, FCC Chairman Julius Genachowski is quoted referring to a “balanced policy,” but also to a nascent “War on WiFi” between the mobile telecommunications carriers on one side and Intel, Microsoft, and Google on the other. The post also points to predictable Democratic and Republican differences on FCC radio spectrum policy related to “free unlicensed spectrum” for WiFi: the Republicans are seeking to protect the mobile telecom carriers’ “revenue streams,” while the Democrats want to see ubiquitous free, or at least very cheap, Internet access.

The leading Internet broadband countries in the world, South Korea, Hong Kong, and Japan, have made this a governmental policy priority and have borne a significant share of the cost. One could argue that Samsung has benefited immensely from South Korean government Internet policy. The United States is nowhere near that kind of commitment, and to make matters worse, the carriers are not currently interested in additional capital expenditure to expand bandwidth to an Asian standard. I have had a fair amount of experience with the telecommunications carrier mentality, going back to my DSL days, and I would have to say that the carriers are content with the bandwidth we have and are more interested in extracting greater revenue from us.

Translation: No political movement happening anytime soon. But that is only half the story. There are also very complex technical issues that remain unresolved.

Read more: http://www.npr.org/blogs/itsallpolitics/2013/02/05/171183700/viral-story-about-free-wifi-spotlights-mostly-hidden-policy-war

[Photo: FCC Chairman Julius Genachowski]

I first became aware of the potential for White Space WiFi in a November 2011 Economist article in the Science and Technology section, “White Space Puts WiFi On Steroids.” The article painted a picture of massive, longer-range, higher-bandwidth WiFi networks being just around the corner. The new equipment was already being manufactured, and deployment would surely come before 2015. Looking back now, the author of the Economist article was probably not a wireless engineering expert or well versed in FCC politics.

Read more: http://www.economist.com/node/21536999 

Stories on the potential for White Space WiFi, and even an imminent FCC vote on it, can be found going back at least as far as 2008. Needless to say, nothing happened, and I needed to understand why.

At the 50,000-foot level, the technical potential seemed like simple common sense. The Economist put it this way:

“‘White-space’ is technical slang for television channels that were left vacant in one city so as not to interfere with TV stations broadcasting on adjacent channels in a neighbouring city… With the recent switch from analogue to digital television, much of this protective white-space is no longer needed. Unlike analogue broadcasting, digital signals do not ‘bleed’ into one another—and can therefore be packed closer together. All told, the television networks now require little more than half the frequency spectrum they sprawled across previously.”

Voila!  We can put up maybe one or two big towers, crank up the transmit power and put the mobile carriers out of business in major metropolitan areas. We could dump our mobile contracts or at least downgrade them, browse the Internet, text message, and use Skype to make our calls, all without a mobile signal.  I think you can see where I am going with this.

There are numerous technical problems with this vision, as well as cost issues. First, an entirely new IEEE specification would be required. The current IEEE 802.11 specification for WiFi is local-area only: the transmit wattage is so low that the range is spec’d at about 100 meters, and the current “handshake” process between the client and the base station would not work in a metro-scale WiFi scenario. Current so-called metro WiFi deployments use a variety of incompatible, proprietary mesh network architectures that do not scale well to true wide-area deployment. Bottom line: there is no wide-area WiFi standard, de facto or IEEE, that everyone agrees on. It is a Tower of Babel.

Second, while mobile devices would be able to “hear” the base station from much longer distances, how would the mobile device transmit back to the tower? Current mobile transmit wattage is minuscule and uses relatively little battery capacity, because neither WiFi nor cellular architectures require high wattage from the mobile device. Transmitting from the mobile to a base station much farther away, perhaps miles, would eat up battery capacity in a hurry. Some kind of hybrid solution that allowed mobile “upstream” transmission to a cellular tower or nearby WiFi hotspot might work, but it presents additional complex technical, competitive, and political issues.
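
As a back-of-the-envelope illustration of that uplink problem, the sketch below applies the Friis free-space path-loss formula. The numbers are assumptions chosen purely for illustration: a roughly 600 MHz TV white-space channel, a 100-meter local WiFi link, and a metro base station about two miles away, with obstructions, fading, and antenna gains all ignored.

```python
# Back-of-the-envelope sketch (free-space only, illustrative numbers) of why
# the uplink is the hard part: received power falls with the square of
# distance, so a handset must transmit far more power to reach a distant
# tower than a nearby access point.
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB for a given distance and frequency."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

freq = 600e6        # a TV white-space channel sits around 600 MHz (assumed)
wifi_range = 100    # typical 802.11 local-area range, in meters
metro_range = 3200  # roughly two miles to a hypothetical metro base station

extra_loss_db = (free_space_path_loss_db(metro_range, freq)
                 - free_space_path_loss_db(wifi_range, freq))
print(f"extra path loss: {extra_loss_db:.1f} dB")              # ~30 dB
print(f"power multiplier: {10 ** (extra_loss_db / 10):.0f}x")  # ~1024x
```

Roughly 30 dB of extra path loss means on the order of a thousand times more transmit power for the same received signal strength, which is exactly the battery drain problem described above.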

Finally, there is the issue of cost. While we are all unhappy with the current cost of our mobile service and data plans, providing high-bandwidth wireless data and carrier-scale gigabit fiber backhaul to the Cloud is not cheap. I heard actual cost numbers this morning that surprised me. Some way would need to be found to dramatically reduce the cost of mega-bandwidth, long-range WiFi. I also learned that LTE (Long Term Evolution) may yet be the key to the future.

So, as stated above, this is still an urban legend, unlikely to become reality anytime soon… but hey, stay tuned. Anything could happen.