Creative Commons License
All discussions are licensed under a Creative Commons Attribution 4.0 International License.
Archived comment by Sylvain Zimmer (1 point)
Brian Behlendorf 3/14/2016
Some of my colleagues and members of the Internet community seem to believe that we can ignore regulators, or that regulators are fundamentally at odds with our best interest. I believe that we can’t ignore regulators because they will eventually pass laws that impact the scope and the way in which the technology we are developing is deployed. I also believe that many regulators do believe in trying to strike the right balance, and in engaging with the right people in the right context to help create technical standards and laws that actually work in the real world. We have had many successes, such as a relatively unregulated early Internet, but we have also made some mistakes. For instance, we were able to stop some mistakes like SOPA and PIPA [11] and the Clipper Chip [12], but many laws, such as the anti-circumvention piece of the DMCA, made it through.
Let me double-down on this. I think it’s important not just to not ignore them (and by “them” I mean not just regulators, but policy makers and the general public who elects them), I think it’s important to understand what motivates them, and help them understand what motivates us. Through that we can probably find larger regions of common interest, and thus design products and protocols and standards that have a greater chance at widespread adoption and impact.
That said, we have an unfair advantage as builders of technology, as the policy questions largely follow implementation rather than precede it. Installed base plays a huge role in norms setting, even in the judicial courts where some of these issues inevitably will be decided. DeCSS, for instance, played a huge role in demonstrating the folly of DVD region locking and DRM at that time. But we could do more. For example, a tool that breaks e-book DRM to allow the visually disabled the ability to “read” an e-book through text-to-speech, a right guaranteed by the Chafee Amendment and the Treaty of Marrakesh, would be valuable to that community. It would also be illegal under current anti-circumvention regulations. But if it existed, it would demonstrate the folly and societal burden of hard-locked DRM more effectively than any hypothetical.
Running code beats hypothetical argument. This is Bitcoin’s superpower, Mozilla’s superpower, even Apple’s superpower. If there’s any prescription for this paper, it should be to developers to build the plumbing for the kind of future they want; they likely have much more power to influence policy than they realize.
Rasty Turek 3/14/2016
While DRM has been touted as critical for business, it is clear that people are willing to pay for streaming and licensing of content without technical protections. If someone could actually afford to pay the fees currently charged by the streaming vendors, why would they go to an illegal pirating site to download something? Netflix, Apple Music, Spotify and Pandora would most likely not even notice, nor would their users, if they removed DRM technology. While it may not be in their interest to announce the death of DRM, it’s likely to die a quiet death.
I don’t think the current use of DRM is well understood. Studios are not pressing streaming companies to use DRM to protect their content from being ripped off. The day content becomes available for streaming, it has already long been available on many p2p sharing networks.
Studios require DRM because DRM functions as a counter that is not easily counterfeited. That means any time a user starts watching something, the counter increments by one. These statistics are then shared with the studios, and streaming companies are charged based on those numbers. That’s why DRM is not going away anytime soon.
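The accounting role described above can be illustrated with a toy model (all names here are hypothetical, not any vendor's actual API): because a client cannot start playback without requesting a decryption license, every license grant doubles as a hard-to-fake increment of a per-title counter.

```python
from collections import Counter

class PlaybackLicenseServer:
    """Toy model of a DRM license server whose side effect is a
    tamper-resistant view count: granting the key and counting the
    view happen in the same step, which is what makes the number
    hard for either party to fake or omit."""

    def __init__(self):
        self.play_counts = Counter()

    def request_license(self, title_id: str) -> dict:
        # Playback is gated on this call, so the count tracks real starts.
        self.play_counts[title_id] += 1
        return {"title": title_id, "key": "<decryption-key>"}

    def royalty_report(self) -> dict:
        # This is the figure studios would bill against.
        return dict(self.play_counts)

server = PlaybackLicenseServer()
for _ in range(3):
    server.request_license("movie-42")
server.request_license("movie-7")
print(server.royalty_report())  # {'movie-42': 3, 'movie-7': 1}
```

In a real deployment the license exchange is cryptographically bound to the device, but the billing logic is essentially this simple.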
Mark Watson 3/14/2016
It’s worth noting that the Encrypted Media Extensions specification and its implementations have evolved significantly during the several years we have been working on them in W3C. DRMs under EME are now rather commoditized: having common features and using common, standard, encrypted files. They can be sandboxed, as Chrome and Mozilla have done, such that the DRM has no network access and is permitted to persist data or otherwise access the machine only as allowed by the (open source) sandbox. There are strict rules for privacy-sensitive identifiers and user consent. Users can completely disable the DRM, clear its storage, reset any identifiers. Sites using EME will be required to deploy HTTPS.
These changes in how DRM is integrated with the web (because it was, as has been mentioned, very much there before all of this) likely would not have happened without the W3C’s involvement.
I think it’s fair to say that few in the content industry share the view, expressed here, that the business risk of removing DRM is low, which makes the likelihood of a “quiet death” any time soon very small.
Patrick Collins 3/13/2016
The W3C is currently standardizing DRM for use in HTML5, the next generation of core Web standards. By allowing DRM to be included in the standard, we “break” the architecture of the Internet by allowing companies to create places to store data and run code on your computer that you do not have access to and where breaking into code on your computer would constitute breaking the law. This is both a security risk and a fundamentally fragile system where vast amounts of content and information could be lost in the future as technologies evolve and companies change.
I don’t think this is true. The proposal is to allow for encrypted data to be sent to the browser without using a plugin like Flash. It’s nothing new, it’s just providing better support for something that is already being done everywhere.
Cory Doctorow 3/13/2016
Hey, Patrick. Here’s a pretty thoroughgoing look at the difference between what was (Silverlight, Flash etc) and what will be with EME: https://www.eff.org/deeplinks/2013/03/defend-open-web-keep-drm-out-w3c-standards
In the past two decades, there has been an ongoing struggle between two views of how Internet technology should work. One philosophy has been that the Web needs to be a universal ecosystem that is based on open standards and fully implementable on equal terms by anyone, anywhere, without permission or negotiation. This is the technological tradition that gave us HTML and HTTP in the first place, and epoch-defining innovations like wikis, search engines, blogs, webmail, applications written in JavaScript, repurposable online maps, and a hundred million specific websites that this paragraph is too short to list.
The other view has been represented by corporations that have tried to seize control of the Web with their own proprietary extensions. It has been represented by technologies like Adobe’s Flash, Microsoft’s Silverlight, and pushes by Apple, phone companies, and others toward highly restrictive new platforms. These technologies are intended to be available from a single source or to require permission for new implementations. Whenever these technologies have become popular, they have inflicted damage on the open ecosystems around them. Websites that depend on Flash or Silverlight typically can’t be linked to properly, can’t be indexed, can’t be translated by machine, can’t be accessed by users with disabilities, don’t work on all devices, and pose security and privacy risks to their users. Platforms and devices that restrict their users inevitably prevent important innovations and hamper marketplace competition.
The EME proposal suffers from many of these problems because it explicitly abdicates responsibility on compatibility issues and lets web sites require specific proprietary third-party software or even special hardware and particular operating systems (all referred to under the generic name “content decryption modules”, or CDMs, and none of them specified by EME). EME’s authors keep saying that what CDMs are, and do, and where they come from is totally outside the scope of EME, and that EME itself can’t be thought of as DRM because not all CDMs are DRM systems. Yet if the client can’t prove it’s running the particular proprietary thing the site demands, and hence doesn’t have an approved CDM, it can’t render the site’s content. Perversely, this is exactly the reverse of the reason that the World Wide Web Consortium exists in the first place. W3C is there to create comprehensible, publicly-implementable standards that will guarantee interoperability, not to facilitate an explosion of new mutually-incompatible software and of sites and services that can only be accessed by particular devices or applications. But EME is a proposal to bring exactly that dysfunctional dynamic into HTML5, even risking a return to the “bad old days, before the Web” of deliberately limited interoperability.
Because it’s clear that the open standards community is extremely suspicious of DRM and its interoperability consequences, the proposal from Google, Microsoft and Netflix claims that “[n]o ‘DRM’ is added to the HTML5 specification” by EME. This is like saying, “we’re not vampires, but we are going to invite them into your house”.
Proponents also seem to claim that EME is not itself a DRM scheme. But specification author Mark Watson admitted that “Certainly, our interest is in [use] cases that most people would call DRM” and that implementations would inherently require secrets outside the specification’s scope. It’s hard to maintain a pretense that EME is about anything but DRM.
The DRM proposals at the W3C exist for a simple reason: they are an attempt to appease Hollywood, which has been angry about the Internet for almost as long as the Web has existed, and has always demanded that it be given elaborate technical infrastructure to control how its audience’s computers function. The perception is that Hollywood will never allow movies onto the Web if it can’t encumber them with DRM restrictions. But the threat that Hollywood could take its toys and go home is illusory. Every film that Hollywood releases is already available for those who really want to pirate a copy. Huge volumes of music are sold by iTunes, Amazon, Magnatune and dozens of other sites without the need for DRM. Streaming services like Netflix and Spotify have succeeded because they are more convenient than piratical alternatives, not because DRM does anything to enhance their economics. The only logically coherent reason for Hollywood to demand DRM is that the movie studios want veto controls over how mainstream technologies are designed. Movie studios have used DRM to enforce arbitrary restrictions on products, including preventing fast-forwarding and imposing regional playback controls, and have created complicated and expensive “compliance” regimes for compliant technology companies that give small consortia of media and big tech companies a veto right on innovation.
Fernando Gutierrez 3/12/2016
Sections 1201-1203 of the 1998 Digital Millennium Copyright Act (DMCA) make it illegal to circumvent locks that restrict access to copyrighted works regardless of whether you are actually breaking copyright law. This means that companies can use digital locks to hide away content to which we should have legitimate access, and those locks have the force of law – breaking them is a felony with a maximum sentence of five years in prison and a $500,000 fine.
It is interesting/worrisome how software is creating new types of limited property. We don’t really own our Kindle or iTunes libraries. We can’t disassemble our gadgets. One could argue that software is creating new ad-hoc rights, but the truth is most people don’t agree, or don’t even know.
Archived comment by Tim Chambers (1 point)
Fernando Gutierrez 3/12/2016
There is already both research and practice in conducting SIGINT on the blockchain. [5] With Bitcoin and Blockchain technology in “vanilla” form, the ability to perform SIGINT is actually HIGHER than in traditional, more closed systems… AML and KYC laws are often impossible to implement while balancing the privacy of the users, because the Blockchain is potentially visible to the whole world and not under the control of selected entities. In fact, I believe that we must not only prevent the collection of the same kind of information as in the traditional financial system, but also discuss technologies to prevent privacy risks arising from analysis of the Blockchain. If we are to deploy Blockchain broadly, we will have to look at both AML and KYC laws and upgrade them, taking into account the new technical architecture and environment and balancing the privacy and security concerns.
The analysis of the blockchain is both an attack on privacy and on fungibility, which is a basic property of money. By analyzing the blockchain, companies like Elliptic or Coinalytics are marking funds that have had any relation with “illegal activities”, so their clients can take whatever measures compliance with AML and KYC requires. This makes individuals unsafe because, deprived of the same tools companies have, they can’t know whether their money is good or not. This can eventually lead to multiple classes of coins, because not every jurisdiction will make the same judgments about the legality of the activities funds have been involved with.
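The marking of funds described above amounts to a reachability search over the public transaction graph. Here is a minimal sketch of that idea (addresses and transactions are invented for illustration; real chain analysis works on transaction outputs and uses probabilistic heuristics, not bare address pairs):

```python
from collections import deque

def tainted_addresses(transactions, flagged):
    """transactions: list of (sender, receiver) address pairs.
    flagged: set of addresses initially linked to "illegal activity".
    Returns every address that received funds traceable to a flagged
    source - the marking that breaks fungibility, since coins held
    by these addresses are treated differently from "clean" ones."""
    outgoing = {}
    for sender, receiver in transactions:
        outgoing.setdefault(sender, []).append(receiver)
    # Breadth-first walk downstream from the flagged sources.
    tainted, queue = set(flagged), deque(flagged)
    while queue:
        for receiver in outgoing.get(queue.popleft(), []):
            if receiver not in tainted:
                tainted.add(receiver)
                queue.append(receiver)
    return tainted

txs = [("darkmarket", "alice"), ("alice", "bob"), ("carol", "dave")]
print(sorted(tainted_addresses(txs, {"darkmarket"})))
# ['alice', 'bob', 'darkmarket'] - dave's coins stay "clean"
```

Note that bob is marked despite being two hops removed and possibly having no idea where the funds originated, which is exactly the asymmetry the comment points to.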
Patrick Collins 3/13/2016
There is a perennial call to “make the network smart.” Someone always wants to optimize it, establish “quality of service” mechanisms – for example, to make voice calls more reliable. But whenever you optimize the network for one thing, you risk de-optimizing it for another. It turns out that just adding more bandwidth has been cheaper than making the network “smarter” (This argument - that you fix networks by making them faster, not smarter - is key to understanding net neutrality).
I don’t think this is really the issue at hand with net neutrality/QoS/etc. There are no technical hurdles to a good QoS implementation as far as I know, and I believe it’s used successfully in internal infrastructure at many companies. It’s more of a moral/philosophical argument that ISPs shouldn’t be allowed to extort their customers.
Richard Bennett 3/13/2016
It turns out that just adding more bandwidth has been cheaper than making the network “smarter” (This argument - that you fix networks by making them faster, not smarter - is key to understanding net neutrality).
The claim that adding bandwidth cures all ills in “the network” is an anachronism left over from the time when “the network” consisted solely of wired data links that could be arbitrarily upgraded at little cost. While it has never been the correct solution to all forms of short-term congestion, it’s laughably out of touch with the reality of the wireless edge that currently dominates the Internet.
Richard Bennett 3/13/2016
There is a perennial call to “make the network smart.” Someone always wants to optimize it, establish “quality of service” mechanisms – for example, to make voice calls more reliable. But whenever you optimize the network for one thing, you risk de-optimizing it for another.
Quality of Service is not a question of “optimizing” the network for one and only one service; it’s a matter of allowing the network to provide treatment for each class of application that is appropriate to the needs of the class. Its primary function is mediating the resource contention that arises between pairs of application classes that impose disparate patterns of load on the network, when those loads are not necessary to end-user Quality of Experience. This whole section has nothing to do with either Bitcoin or copyright enforcement and adds nothing to the main argument. The paper would be more coherent and credible if the (essentially religious) misrepresentation of Quality of Service were removed.
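The class-based treatment described above can be sketched as a scheduler that services queues by application class rather than strictly first-come-first-served. This is a deliberately simplified strict-priority sketch (class names and priorities are invented; real routers use weighted schemes such as weighted fair queuing to avoid starving lower classes):

```python
import heapq
from itertools import count

# Each application class gets treatment appropriate to its needs:
# low delay for voice, ordinary service for bulk transfers.
PRIORITY = {"voice": 0, "video": 1, "bulk": 2}

class ClassBasedScheduler:
    def __init__(self):
        self._heap = []
        self._seq = count()  # preserves FIFO order within a class

    def enqueue(self, app_class: str, packet: str):
        heapq.heappush(self._heap, (PRIORITY[app_class], next(self._seq), packet))

    def dequeue(self) -> str:
        return heapq.heappop(self._heap)[2]

sched = ClassBasedScheduler()
sched.enqueue("bulk", "iso-chunk-1")
sched.enqueue("voice", "rtp-frame-1")
sched.enqueue("bulk", "iso-chunk-2")
# The delay-sensitive voice frame jumps ahead of the bulk transfer,
# mediating contention between disparate load patterns; the bulk
# chunks still go out, in order, right after it.
print(sched.dequeue())  # rtp-frame-1
print(sched.dequeue())  # iso-chunk-1
```

The point of the sketch is that nothing here “optimizes for one service”: every class is served, each according to its sensitivity to delay.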