This publication can be found online at http://pubpub.ito.com/pub/dmca-drm-aml-kyc-backdoors.
The views expressed in this document are my personal views and do not represent those of my colleagues, friends or any organization with which I am affiliated.
Why anti-money laundering laws and poorly designed copyright laws are similar and should be revised
Intentionally or unintentionally, poorly crafted or outdated laws and technical standards threaten to undermine security, privacy and the viability of our most promising new technologies and networks, such as Bitcoin and Blockchain. We should be vigilantly reviewing and revising laws and standards for the public good, and working to prevent the creation of fragile and cumbersome systems designed to comply with these poorly crafted or outdated laws. In this post, I discuss the Digital Millennium Copyright Act’s anti-circumvention provision, Digital Rights Management, anti-money laundering laws, know-your-customer laws and security backdoors.
Joichi Ito
The Internet’s founding principles – openness, unbundling, diversity, open standards – made it robust; a force for democratizing access. That access created an explosion of innovation far beyond anyone’s imagination.
The Internet’s openness is its strength. It is a “stupid network” [1] whose internals are unbundled into layers of open standards that sandwich layers of diversity and innovation. Stupid networks focus on transporting bits from one place to another. That end-to-end principle allows for innovation at the network’s edges. By “unbundling” the transportation of bits from the provision of services, applications can be developed without permission. This is where we get the innovation in services that the network’s architects and managers never imagined or had to plan for.
There is a perennial call to “make the network smart.” Someone always wants to optimize it, establish “quality of service” mechanisms – for example, to make voice calls more reliable. But whenever you optimize the network for one thing, you risk de-optimizing it for another. It turns out that just adding more bandwidth has been cheaper than making the network “smarter.” (This argument – that you fix networks by making them faster, not smarter – is key to understanding net neutrality.)
Policy makers grapple with the nature of the open Internet with varying results. Sometimes they’ll pass rules or orders that “break” these principles or induce changes in the Internet’s architecture that work against its openness. These are usually the result of pressure from law enforcement or corporate capture in regulation and standards.
Here are a few of the worst offenders: rules, laws and standards that do damage to the net’s architecture, exceeding any benefit they deliver for their champions:
Sections 1201-1203 of the 1998 Digital Millennium Copyright Act (DMCA) make it illegal to circumvent locks that restrict access to copyrighted works regardless of whether you are actually breaking copyright law. This means that companies can use digital locks to hide content to which we should have legitimate access, and those locks have the force of law – breaking them is a felony with a maximum sentence of five years in prison and a $500,000 fine.
It doesn’t matter how legitimate your access is. You could be delving into your own car’s computers, your medical implant’s data-streams, or even content you created on devices you own yourself. (Farmers who gather soil-density surveys of their own fields while driving their tractors around them are not allowed to see those data unless they buy the data back from John Deere). This also inhibits research that focuses on whether the security of such systems is robust.
The FDA, for example, has been trying to get medical device companies to allow hacking currently prevented by the DMCA. [2] The Library of Congress has added car software to the list of exemptions from anti-circumvention [3], but unfortunately it appears that “exemptions created under the rulemaking apply only to the act of circumvention, and not the development and distribution of circumvention tools.” [4] Tough luck for drivers and researchers who aren’t also encryption experts.

Digital Rights Management (DRM) & The World Wide Web Consortium (W3C)

The W3C is currently standardizing DRM for use in HTML5, the next generation of core Web standards. By allowing DRM into the standard, we “break” the architecture of the Internet: companies get to create places on your computer that store data and run code you are not allowed to access, and breaking into that code would constitute breaking the law. This is both a security risk and a fundamentally fragile system in which vast amounts of content and information could be lost in the future as technologies evolve and companies change.
While DRM has been touted as critical for business, it is clear that people are willing to pay for streaming and licensed content without technical protections. If someone can afford the fees currently charged by the streaming vendors, why would they go to an illegal pirate site to download something? Netflix, Apple Music, Spotify and Pandora would most likely not even notice, nor would their users, if they removed DRM technology. While it may not be in their interest to announce the death of DRM, it’s likely to die a quiet death.
In the meantime, we will be left with a broken and fragile architecture, as well as browsers whose internals are off-limits to security researchers, who face brutal punishment for trying to determine whether your gateway to the Internet is secure enough to rely on.

Anti-Money Laundering Law (AML) and Know Your Customer Laws (KYC)

Many laws have been created to prevent money laundering – the crime of disguising the original ownership and control of the proceeds of criminal conduct by making those proceeds appear to derive from a legitimate source. One of the reasons these laws exist is to track terrorists and criminals by monitoring money flows.
Anti-money laundering laws require that transactions above a threshold (usually $10,000) be reported, that assets held anywhere in the world be declared on your tax returns, and that banks “know your customer” and keep detailed records of who their customers are and what they are doing. Breaking these regulatory requirements is illegal. As with the anti-circumvention law, you may not actually be laundering money, but defeating these anti-money laundering monitoring systems is itself a crime.
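To make the reporting requirement concrete, here is a minimal sketch of a threshold check in Python. The function, field names and transaction data are invented for illustration; real compliance systems are far more involved.

```python
# Toy sketch of a threshold-based reporting rule, as described above.
# Not any real institution's compliance logic; names are illustrative.
REPORT_THRESHOLD = 10_000  # USD, the usual Currency Transaction Report trigger

def transactions_to_report(transactions):
    """Return transactions at or above the reporting threshold.

    `transactions` is an iterable of (customer_id, amount) pairs.
    """
    return [(cust, amt) for cust, amt in transactions if amt >= REPORT_THRESHOLD]

txns = [("alice", 2_500), ("bob", 10_000), ("carol", 9_999), ("dave", 50_000)]
print(transactions_to_report(txns))  # [('bob', 10000), ('dave', 50000)]
```

Note that splitting a $50,000 deposit into six $9,000 deposits would slip past this naive check – which is exactly why deliberately “structuring” transactions to evade the threshold is itself a crime, even if the underlying money is clean.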
The personal information and transaction details collected are stored in databases, and this presents a substantial risk to society. Criminal and foreign-government hackers have repeatedly broken into even the most protected government databases – for example, the OPM database holding the personal information of US government employees with security clearances. [5] In addition, these laws require banks and financial institutions to collect this information and to structure their systems so that it can be collected, making those systems vulnerable to attack as well.
While access to this information can sometimes be useful in investigations, almost all of the sophisticated technology used to “catch the bad guys” doesn’t require access to the content of the messages, but rather only to the metadata. This is evident in modern Signals Intelligence (SIGINT: the collection of data ranging from satellite communications to Internet packets), where intelligence and law enforcement agencies rely mostly on machine learning (artificial intelligence) and pattern recognition extracted from metadata, rather than from the content of the messages. (Snowden released a document revealing the state of the art of government SIGINT. [6])
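A toy sketch of metadata-only analysis: using nothing but who-contacted-whom records – no message content at all – you can already flag accounts with unusual patterns. The records, threshold and function names here are invented; real SIGINT pipelines are vastly more sophisticated.

```python
# Toy sketch: pattern analysis on *metadata* alone (who transacted with
# whom, and when), with no access to message content.
# The threshold and data are illustrative, not any agency's real method.
from collections import defaultdict

def flag_hubs(records, min_counterparties=3):
    """Flag senders that fan out to unusually many distinct counterparties."""
    counterparties = defaultdict(set)
    for sender, receiver, _timestamp in records:
        counterparties[sender].add(receiver)
    return {s for s, peers in counterparties.items()
            if len(peers) >= min_counterparties}

records = [
    ("a", "x", 1), ("a", "y", 2), ("a", "z", 3),  # "a" fans out to 3 peers
    ("b", "x", 4), ("b", "x", 5),                 # "b" repeats one peer
]
print(flag_hubs(records))  # {'a'}
```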
We are already seeing both research and practice in conducting SIGINT on the blockchain. [7] With Bitcoin and Blockchain technology in “vanilla” form, the ability to perform SIGINT is actually HIGHER than in traditional, more closed systems. AML and KYC laws are often impossible to implement while protecting the privacy of users, because the Blockchain is potentially visible to the whole world rather than controlled by a few select entities. In fact, I believe that we must not only prevent the collection of the same kinds of information gathered in the traditional financial system, but also discuss developing technologies to prevent the privacy risks that arise from analysis of the Blockchain. If we are to deploy Blockchain broadly, we will have to look at upgrading both AML and KYC laws, taking into account the new technical architecture and environment and balancing the privacy and security concerns.
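To illustrate why a public ledger makes this kind of analysis easier, a hypothetical sketch: anyone, using only public data, can walk the transaction graph forward from a known address of interest. The addresses and ledger below are made up, and real chain analysis must additionally handle multi-input/multi-output transactions, change addresses and address clustering.

```python
# Toy sketch of SIGINT on a public ledger: breadth-first traversal of
# fund flows from a known address, using only publicly visible data.
# The graph and address names are invented for illustration.
from collections import deque

def reachable_addresses(ledger, start):
    """Follow funds forward from `start` through a public transaction graph.

    `ledger` maps each sender address to a list of receiver addresses.
    """
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in ledger.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

ledger = {
    "addr_ransom": ["addr_mixer1"],
    "addr_mixer1": ["addr_exchange", "addr_cold"],
    "addr_other": ["addr_exchange"],
}
print(sorted(reachable_addresses(ledger, "addr_ransom")))
# ['addr_cold', 'addr_exchange', 'addr_mixer1', 'addr_ransom']
```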
The traditional financial system as we know it will undergo significant changes in the future, especially if we are headed in the direction of Bitcoin and Blockchain. We cannot expect the current AML and KYC laws to work in this new dimension: these laws were conceived for closed, highly guarded systems, not for international, open technical standards. For instance, the “travel” rule [8] requires financial institutions to pass personal information to the next financial institution when transmitting funds. There is currently no secure or easy way to do this on the Blockchain.
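As a hypothetical sketch of what the travel rule asks of each institution in a payment chain, the record below loosely echoes the fields in the FinCEN advisory, but the names and structure are my own invention:

```python
# Toy sketch of the "travel" rule described above: each institution in a
# payment chain must pass originator details along to the next one.
# Field names loosely follow the FinCEN advisory; this is illustrative only.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TravelRuleRecord:
    originator_name: str
    originator_account: str
    originator_address: str
    amount_usd: float
    beneficiary_institution: str

def forward_transmittal(record: TravelRuleRecord, next_institution: str):
    """Pass the originator details, unchanged, to the next institution."""
    return replace(record, beneficiary_institution=next_institution)

rec = TravelRuleRecord("Alice", "acct-001", "1 Main St", 25_000.0, "Bank B")
print(forward_transmittal(rec, "Bank C").beneficiary_institution)  # Bank C
```

The mismatch with Bitcoin is visible immediately: a vanilla Bitcoin transaction has no slot for any of these identity fields, so there is nowhere for this record to “travel.”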
Just as with the Internet, weaknesses in networks like the Blockchain propagate to countries and regions where privacy failures could pose significant risks to human rights workers, journalists or anyone who questions authority. The conversation on creating new AML and KYC laws for new financial systems like Bitcoin and Blockchain needs to be a global one.

iPhone/Backdoors

While putting backdoors on all of our communications and/or banning encryption hasn’t been passed into law, there is a precedent for what is going on with Apple and the FBI. In the 90s, as telephones were moving from analog phone lines to digital, the FBI argued that it could become more difficult to serve wiretap orders on phone companies. Rather than connecting alligator clips to wires at the phone company’s offices, they would have to request their own backdoor on the switches the phone companies used.
When the government offered to pay the cost, the phone companies accepted the deal, and the Communications Assistance for Law Enforcement Act (CALEA) was born. The FBI has built an extensive data collection system on top of it. (One distinction is that CALEA is about backdoors on the transport platform, while the current iPhone debate is about encryption at the edges of the network.)
While Silicon Valley appears to be resisting the government’s requests more than the telephone companies did, they are under constant pressure, and as the Snowden documents have revealed, it appears that many companies have provided these backdoors.
While I’m sure law enforcement officers would love to have even more tools for their investigations, we already have more tools to track and monitor the “bad guys” than at any time in history. The problem with backdoors is that they create a fragile infrastructure: even if you believe that we can trust the US government and US law enforcement, a backdoor creates a weakness that can be exploited by the “bad guys.”
One great example is the backdoor recently found in Juniper’s ScreenOS software. [9] It appears that the government may have created a backdoor in a key secure communication channel, but that someone else (unknown) then exploited that backdoor to make it their own.
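To illustrate why a backdoor is a shared weakness rather than a private tool, here is a deliberately simplistic sketch – entirely invented, and nothing like Juniper’s actual code: once the hardcoded credential is discovered, anyone, not just its intended user, can walk through the door.

```python
# Toy illustration of why a backdoor weakens everyone: once the hardcoded
# credential leaks, *anyone* can use it. Purely invented code, not any
# real product's implementation.
BACKDOOR_PASSWORD = "s3cret-master-key"  # intended "for authorized use only"

def login(user_db, username, password):
    # The backdoor check: any username works with the master password.
    if password == BACKDOOR_PASSWORD:
        return True
    return user_db.get(username) == password

users = {"alice": "correct-horse"}
print(login(users, "alice", "correct-horse"))        # True: legitimate login
print(login(users, "mallory", "s3cret-master-key"))  # True: anyone with the leaked key
print(login(users, "mallory", "guess"))              # False
```

The security of every account now rests on a single secret that, unlike a user’s password, cannot be changed per-user and is worth stealing at any cost.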
In my view, the risk to everyone on the Internet caused by crippling security isn’t worth the incremental increase in any government’s ability to engage in legitimate surveillance. This point was made clear by the President’s Review Group on Intelligence, an expert group, which said that the US government should “fully support and not undermine efforts to create encryption standards; (2) make clear that it will not in any way subvert, undermine, weaken, or make vulnerable generally available commercial encryption.” [10]

These are just some examples of the “broken” laws and standards. If we do not work actively to prevent the passage of bad laws and standards, and fight to overturn or fix the existing ones, we will soon lose the Internet and all of the freedoms, innovation and opportunities that it represents.
Some of my colleagues and members of the Internet community seem to believe that we can ignore regulators, or that regulators are fundamentally at odds with our best interests. I believe that we can’t ignore regulators, because they will eventually pass laws that affect the scope and manner in which the technology we are developing is deployed. I also believe that many regulators genuinely try to strike the right balance, and to engage with the right people in the right context to help create technical standards and laws that actually work in the real world. We have had many successes, such as a relatively unregulated early Internet, but we have also made some mistakes. For instance, we were able to stop mistakes like SOPA and PIPA [11] and the Clipper Chip [12], but many laws, such as the anti-circumvention piece of the DMCA, made it through.
I believe that we need to vigilantly monitor the activities of lawmakers, regulators, standards bodies and industry groups. We must constantly review existing legal and regulatory frameworks as we develop new technologies, to make sure they still make sense, and not default to applying existing laws and regulations to new technologies without careful review from first principles.
One of the reasons I am involved in organizations such as Creative Commons, and am excited about helping to create the Digital Currency Initiative at MIT, is that I am interested in avoiding mistakes that could undermine the full potential of open and interoperable networks, such as the network of trust and value that Bitcoin and the Blockchain represent. I hope to play a role in working with all parties – users, the technical community, businesses and regulators – to develop and implement sustainable and healthy ecosystems that will not ruin the technology or our freedoms, while providing appropriate safeguards and structures for civil society, business and government.

References

[1]"Rise of the Stupid Network". Computer Telephony. (1997): 16-26. [http://www.isen.com/stupid.html]
[2]"FDA presses medical device makers to OK good faith hacking". The Christian Science Monitor. (2016): [http://www.csmonitor.com/World/Passcode/2016/0210/FDA-presses-medical-device-makers-to-OK-good-faith-hacking]
[3]"Soon It'll Be OK To Tinker With Your Car's Software After All". All Tech Considered. (2015): [http://www.npr.org/sections/alltechconsidered/2015/10/27/450572915/soon-itll-be-ok-to-tinker-with-your-cars-software-after-all]
[4]"What’s Missing from the Register’s Proposals". re:create, (2015): [http://www.recreatecoalition.org/whats-missing-from-the-registers-proposals/]
[5]"Hacks of OPM databases compromised 22.1 million people, federal authorities say". The Washington Post. (2015): [https://www.washingtonpost.com/news/federal-eye/wp/2015/07/09/hack-of-security-clearance-system-affected-21-5-million-people-federal-authorities-say/]
[6]"HIMR Data Mining Research Problem Book - Redacted". (2011): [https://www.documentcloud.org/documents/2702948-Problem-Book-Redacted.html] UK Top Secret STRAP1 COMINT released by Edward Snowden
[7]"BitIodine: Extracting Intelligence from the Bitcoin Network". (2014): [https://miki.it/articles/papers/#bitiodine] First presented at Hack In The Box: Kuala Lumpur - October 15, 2014
[8]"Funds “Travel” Regulations: Questions & Answers". United States Department of the Treasury, Financial Crimes Enforcement Network, (1997): Num. Advisory: Issue 7. [https://www.fincen.gov/news_room/rp/advisory/html/advissu7.html]
[9]"On the Juniper backdoor". A Few Thoughts on Cryptographic Engineering. (2015): [http://blog.cryptographyengineering.com/2015/12/on-juniper-backdoor.html]
[10]"Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications". (2015): Num. MIT-CSAIL-TR-2015-026. [http://hdl.handle.net/1721.1/97690]
[11]"Protests against SOPA and PIPA". Wikipedia. [https://en.wikipedia.org/wiki/Protests_against_SOPA_and_PIPA]
[12]"Clipper chip". Wikipedia. [https://en.wikipedia.org/wiki/Clipper_chip]
Comments
Creative Commons License
All discussions are licensed under a Creative Commons Attribution 4.0 International License.
Brian Behlendorf 3/14/2016
I believe that we can’t ignore regulators because they will eventually pass laws that impact the scope and the way in which the technology we are developing is deployed. I also believe that many regulators do believe in trying to strike the right balance, and to engaging with the right people in the right context to help create technical standards and laws that actually work in the real world.
Let me double-down on this. I think it’s important not just to not ignore them (and by “them” I mean not just regulators, but policy makers and the general public who elects them), I think it’s important to understand what motivates them, and help them understand what motivates us. Through that we can probably find larger regions of common interest, and thus design products and protocols and standards that have a greater chance at widespread adoption and impact.
That said, we have an unfair advantage as builders of technology, as the policy questions largely follow implementations rather than precede them. Installed base plays a huge role in norm-setting, even in the judicial courts where some of these issues inevitably will be decided. DeCSS, for instance, played a huge role in demonstrating the folly of DVD region locking and DRM at that time. But we could do more. For example, a tool that breaks e-book DRM to allow the visually disabled to “read” an e-book through text-to-speech – a right guaranteed by the Chafee Amendment and the Treaty of Marrakesh – would be valuable to that community. It would also be illegal under current anti-circumvention regulations. But if it existed, it would demonstrate the folly and societal burden of hard-locked DRM more effectively than any hypothetical.
Running code beats hypothetical argument. This is Bitcoin’s superpower, Mozilla’s superpower, even Apple’s superpower. If there’s any prescription for this paper, it should be to developers to build the plumbing for the kind of future they want; they likely have much more power to influence policy than they realize.
Rasty Turek 3/14/2016
While DRM has been touted as critical for business, it is clear that people are willing to pay for streaming and licensing of content without technical protections. If someone could actually afford to pay the fees currently charged by the streaming vendors, why would they go to an illegal pirating site to download something?
I don’t think the current use of DRM is well understood. Studios are not pressing streaming companies to use DRM to protect their content from being ripped off. By the day content becomes available for streaming, it has long been available on many p2p sharing networks.
Studios require DRM because DRM functions as a not-easily-counterfeited counter: any time a user starts watching something, the counter increments by one. These statistics are then shared with studios, and streaming companies are charged based on those numbers. That’s why DRM is not going away anytime soon.
Mark Watson 3/14/2016
It’s worth noting that the Encrypted Media Extensions specification and its implementations have evolved significantly during the several years we have been working on them in W3C. DRMs under EME are now rather commoditized: having common features and using common, standard, encrypted files. They can be sandboxed, as Chrome and Mozilla have done, such that the DRM has no network access and is permitted to persist data or otherwise access the machine only as allowed by the (open source) sandbox. There are strict rules for privacy-sensitive identifiers and user consent. Users can completely disable the DRM, clear its storage, reset any identifiers. Sites using EME will be required to deploy HTTPS.
These changes in how DRM is integrated with the web (because it was, as has been mentioned, very much there before all of this) likely would not have happened without the W3C’s involvement.
I think it’s fair to say that few in the content industry share the view, expressed here, that the business risk of removing DRM is low – which makes the likelihood of a “quiet death” any time soon very small.
Patrick Collins 3/13/2016
By allowing DRM to be included in the standard, we “break” the architecture of the Internet by allowing companies to create places to store data and run code on your computer that you do not have access to and where breaking into code on your computer would constitute breaking the law
I don’t think this is true. The proposal is to allow for encrypted data to be sent to the browser without using a plugin like Flash. It’s nothing new, it’s just providing better support for something that is already being done everywhere.
Cory Doctorow 3/13/2016
Hey, Patrick. Here’s a pretty thoroughgoing look at the difference between what was (Silverlight, Flash, etc.) and what will be with EME: https://www.eff.org/deeplinks/2013/03/defend-open-web-keep-drm-out-w3c-standards
In the past two decades, there has been an ongoing struggle between two views of how Internet technology should work. One philosophy has been that the Web needs to be a universal ecosystem that is based on open standards and fully implementable on equal terms by anyone, anywhere, without permission or negotiation. This is the technological tradition that gave us HTML and HTTP in the first place, and epoch-defining innovations like wikis, search engines, blogs, webmail, applications written in JavaScript, repurposable online maps, and a hundred million specific websites that this paragraph is too short to list.
The other view has been represented by corporations that have tried to seize control of the Web with their own proprietary extensions. It has been represented by technologies like Adobe’s Flash, Microsoft’s Silverlight, and pushes by Apple, phone companies, and others toward highly restrictive new platforms. These technologies are intended to be available from a single source or to require permission for new implementations. Whenever these technologies have become popular, they have inflicted damage on the open ecosystems around them. Websites that depend on Flash or Silverlight typically can’t be linked to properly, can’t be indexed, can’t be translated by machine, can’t be accessed by users with disabilities, don’t work on all devices, and pose security and privacy risks to their users. Platforms and devices that restrict their users inevitably prevent important innovations and hamper marketplace competition.
The EME proposal suffers from many of these problems because it explicitly abdicates responsibility on compatibility issues and lets web sites require specific proprietary third-party software or even special hardware and particular operating systems (all referred to under the generic name “content decryption modules”, or CDMs, and none of them specified by EME). EME’s authors keep saying that what CDMs are, and do, and where they come from is totally outside of the scope of EME, and that EME itself can’t be thought of as DRM because not all CDMs are DRM systems. Yet if the client can’t prove it’s running the particular proprietary thing the site demands, and hence doesn’t have an approved CDM, it can’t render the site’s content. Perversely, this is exactly the reverse of the reason that the World Wide Web Consortium exists in the first place. W3C is there to create comprehensible, publicly-implementable standards that will guarantee interoperability, not to facilitate an explosion of new mutually-incompatible software and of sites and services that can only be accessed by particular devices or applications. But EME is a proposal to bring exactly that dysfunctional dynamic into HTML5, even risking a return to the “bad old days, before the Web” of deliberately limited interoperability.
Because it’s clear that the open standards community is extremely suspicious of DRM and its interoperability consequences, the proposal from Google, Microsoft and Netflix claims that “[n]o ‘DRM’ is added to the HTML5 specification” by EME. This is like saying, “we’re not vampires, but we are going to invite them into your house”.
Proponents also seem to claim that EME is not itself a DRM scheme. But specification author Mark Watson admitted that “Certainly, our interest is in [use] cases that most people would call DRM” and that implementations would inherently require secrets outside the specification’s scope. It’s hard to maintain a pretense that EME is about anything but DRM.
The DRM proposals at the W3C exist for a simple reason: they are an attempt to appease Hollywood, which has been angry about the Internet for almost as long as the Web has existed, and has always demanded that it be given elaborate technical infrastructure to control how its audience’s computers function. The perception is that Hollywood will never allow movies onto the Web if it can’t encumber them with DRM restrictions. But the threat that Hollywood could take its toys and go home is illusory. Every film that Hollywood releases is already available for those who really want to pirate a copy. Huge volumes of music are sold by iTunes, Amazon, Magnatune and dozens of other sites without the need for DRM. Streaming services like Netflix and Spotify have succeeded because they are more convenient than piratical alternatives, not because DRM does anything to enhance their economics. The only logically coherent reason for Hollywood to demand DRM is that the movie studios want veto controls over how mainstream technologies are designed. Movie studios have used DRM to enforce arbitrary restrictions on products, including preventing fast-forwarding and imposing regional playback controls, and have created complicated and expensive “compliance” regimes for compliant technology companies that give small consortia of media and big tech companies a veto right on innovation.
Fernando Gutierrez 3/12/2016
This means that companies can use digital locks to hide away content that we should have legitimate access, and those locks have the force of law
It is interesting/worrisome how software is creating new types of limited property. We don’t really own our Kindle or iTunes libraries. We can’t disassemble our gadgets. One could argue that software is creating new ad-hoc rights, but the truth is most people don’t agree, or even know.
Fernando Gutierrez 3/12/2016
If we are to deploy Blockchain broadly we will have to look at both AML and KYC laws and upgrading them taking into account the new technical architecture and environment and balancing the privacy and security concerns.
The analysis of the blockchain is an attack both on privacy and on fungibility, which is a basic property of money. By analyzing the blockchain, companies like Elliptic or Coinalytics are marking funds that have had any relation to “illegal activities,” so their clients can take whatever measures compliance with AML and KYC requires. This makes individuals unsafe because, deprived of the same tools companies have, they can’t know if their money is good or not. This could eventually lead to multiple classes of coins, because not every jurisdiction will make the same judgements about the legality of the activities funds have been involved with.
Patrick Collins 3/13/2016
But whenever you optimize the network for one thing, you risk de-optimizing it for another.
I don’t think this is really the issue at hand with net neutrality/QoS/etc. There are no technical hurdles to a good QoS implementation as far as I know, and I believe it’s used successfully in internal infrastructure at many companies. It’s more of a moral/philosophical argument that ISPs shouldn’t be allowed to extort their customers.
Richard Bennett 3/13/2016
It turns out that just adding more bandwidth has been cheaper than making the network “smarter” (This argument - that you fix networks by making them faster, not smarter - is key to understanding net neutrality).
The claim that adding bandwidth cures all ills in “the network” is an anachronism left over from the time when “the network” consisted solely of wired data links that could be arbitrarily upgraded at little cost. While it has never been the correct solution to all forms of short-term congestion, it’s laughably out of touch with the reality of the wireless edge that currently dominates the Internet.
Richard Bennett 3/13/2016
There is a perennial call to “make the network smart.” Someone always wants to optimize it, establish “quality of service” mechanisms – for example, to make voice calls more reliable. But whenever you optimize the network for one thing, you risk de-optimizing it for another.
Quality of Service is not a question of “optimizing” the network for one and only one service, it’s a matter of allowing the network to provide treatment for each class of application that is appropriate to the needs of the class. It’s primary function is mediating resource contention that arises between pairs of application classes that impose disparate patterns of load on the network when the loads are not necessary to end user Quality of Experience. This whole section has nothing to do with either Bitcoin or Copyright enforce and adds nothing to main argument. The people would be more coherent and credible if the (essentially religious) misrepresentation of Quality of Serivce were removed.