Why anti-money laundering laws and poorly-designed copyright laws are similar and should be revised

Intentionally or unintentionally, poorly crafted or outdated laws and technical standards threaten to undermine security, privacy

Authors

Director, MIT Media Lab

Discussions

Discussion by Brian Behlendorf
Let me double down on this. I think it's important not just to not ignore them (and by "them" I mean not just regulators, but the policy makers and the general public who elect them); it's important to understand what motivates them, and to help them understand what motivates us. Through that we can probably find larger regions of common interest, and thus design products, protocols, and standards that have a greater chance at widespread adoption and impact.

That said, we have an unfair advantage as builders of technology, as the policy questions largely follow implementation rather than precede it. Installed base plays a huge role in norm setting, even in the judicial courts where some of these issues will inevitably be decided. DeCSS, for instance, played a huge role in demonstrating the folly of DVD region locking and DRM at that time. But we could do more. For example, a tool that breaks e-book DRM to allow the visually disabled to "read" an e-book through text-to-speech (a right guaranteed by the Chafee Amendment and the Treaty of Marrakesh) would be valuable to that community. It would also be illegal under current anti-circumvention regulations. But if it existed, it would demonstrate the folly and societal burden of hard-locked DRM more effectively than any hypothetical.

Running code beats hypothetical argument. This is Bitcoin's superpower, Mozilla's superpower, even Apple's superpower. If there's any prescription for this paper, it should be to developers: build the plumbing for the kind of future you want; you likely have much more power to influence policy than you realize.
Discussion by Rasty Turek
I don't think the current use of DRM is well understood. Studios are not pressing streaming companies to use DRM to protect their content from being ripped off; by the day a title is available for streaming, it has long been available on many p2p sharing networks.

Studios require DRM because DRM functions as a not-easily-counterfeited counter: every time a user starts watching something, the counter increments by one. These statistics are then shared with the studios, and streaming companies are charged based on those numbers. That's why DRM is not going away anytime soon.
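Turek's "not-easily counterfeited counter" can be sketched as a license server that signs its playback tally. This is a minimal hypothetical illustration, assuming an HMAC shared secret between studio and streaming service; the class and method names are invented for the sketch and are not how any real DRM works.

```python
import hashlib
import hmac
import json


class PlaybackCounter:
    """Toy model of DRM-as-metering: each license request bumps a
    per-title counter, and the tally is HMAC-signed so the studio
    can detect tampering with the reported numbers."""

    def __init__(self, signing_key: bytes):
        self._key = signing_key
        self._counts: dict[str, int] = {}

    def issue_license(self, title_id: str) -> None:
        # In a real DRM this is where the CDM would receive its
        # decryption key; here we only record the playback event.
        self._counts[title_id] = self._counts.get(title_id, 0) + 1

    def signed_report(self) -> tuple[str, str]:
        # Canonical JSON plus an HMAC tag over it.
        payload = json.dumps(self._counts, sort_keys=True)
        tag = hmac.new(self._key, payload.encode(), hashlib.sha256).hexdigest()
        return payload, tag


def verify_report(key: bytes, payload: str, tag: str) -> bool:
    """Studio-side check that a report was produced with the shared key."""
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A forged or altered tally fails `verify_report`, which is the property that makes the counter hard to counterfeit without the shared key.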
Discussion by Mark Watson
It's worth noting that the Encrypted Media Extensions specification and its implementations have evolved significantly during the several years we have been working on them in the W3C. DRMs under EME are now rather commoditized: they have common features and use common, standard, encrypted files. They can be sandboxed, as Chrome and Mozilla have done, such that the DRM has no network access and is permitted to persist data or otherwise access the machine only as allowed by the (open source) sandbox. There are strict rules for privacy-sensitive identifiers and user consent. Users can completely disable the DRM, clear its storage, and reset any identifiers. Sites using EME will be required to deploy HTTPS.

These changes in how DRM is integrated with the web (because it was, as has been mentioned, very much there before all of this) likely would not have happened without the W3C's involvement.

I think it's fair to say that few in the content industry share the view, expressed here, that the business risk of removing DRM is low, which makes the likelihood of a "quiet death" any time soon very small.
Discussion by Patrick Collins and Cory Doctorow
I don't think this is true. The proposal is to allow encrypted data to be sent to the browser without using a plugin like Flash. It's nothing new; it's just providing better support for something that is already being done everywhere.
Hey, Patrick. Here's a pretty thoroughgoing look at the difference between what was (Silverlight, Flash, etc.) and what will be with EME: https://www.eff.org/deeplinks/2013/03/defend-open-web-keep-drm-out-w3c-standards

In the past two decades, there has been an ongoing struggle between two views of how Internet technology should work. One philosophy has been that the Web needs to be a universal ecosystem that is based on open standards and fully implementable on equal terms by anyone, anywhere, without permission or negotiation. This is the technological tradition that gave us HTML and HTTP in the first place, and epoch-defining innovations like wikis, search engines, blogs, webmail, applications written in JavaScript, repurposable online maps, and a hundred million specific websites that this paragraph is too short to list.

The other view has been represented by corporations that have tried to seize control of the Web with their own proprietary extensions. It has been represented by technologies like Adobe's Flash, Microsoft's Silverlight, and pushes by Apple, phone companies, and others toward highly restrictive new platforms. These technologies are intended to be available from a single source or to require permission for new implementations. Whenever these technologies have become popular, they have inflicted damage on the open ecosystems around them. Websites that depend on Flash or Silverlight typically can't be linked to properly, can't be indexed, can't be translated by machine, can't be accessed by users with disabilities, don't work on all devices, and pose security and privacy risks to their users.
Platforms and devices that restrict their users inevitably prevent important innovations and hamper marketplace competition.

The EME proposal suffers from many of these problems because it explicitly abdicates responsibility on compatibility issues and lets websites require specific proprietary third-party software, or even special hardware and particular operating systems (all referred to under the generic name "content decryption modules", or CDMs, and none of them specified by EME). EME's authors keep saying that what CDMs are, and do, and where they come from is totally outside the scope of EME, and that EME itself can't be thought of as DRM because not all CDMs are DRM systems. Yet if the client can't prove it's running the particular proprietary thing the site demands, and hence doesn't have an approved CDM, it can't render the site's content.

Perversely, this is exactly the reverse of the reason that the World Wide Web Consortium exists in the first place. W3C is there to create comprehensible, publicly implementable standards that will guarantee interoperability, not to facilitate an explosion of new mutually incompatible software and of sites and services that can only be accessed by particular devices or applications. But EME is a proposal to bring exactly that dysfunctional dynamic into HTML5, even risking a return to the "bad old days, before the Web" of deliberately limited interoperability.

Because it's clear that the open standards community is extremely suspicious of DRM and its interoperability consequences, the proposal from Google, Microsoft and Netflix claims that "[n]o 'DRM' is added to the HTML5 specification" by EME. This is like saying, "we're not vampires, but we are going to invite them into your house".

Proponents also seem to claim that EME is not itself a DRM scheme.
But specification author Mark Watson admitted that "Certainly, our interest is in [use] cases that most people would call DRM" and that implementations would inherently require secrets outside the specification's scope. It's hard to maintain the pretense that EME is about anything but DRM.

The DRM proposals at the W3C exist for a simple reason: they are an attempt to appease Hollywood, which has been angry about the Internet for almost as long as the Web has existed, and has always demanded that it be given elaborate technical infrastructure to control how its audience's computers function. The perception is that Hollywood will never allow movies onto the Web if it can't encumber them with DRM restrictions. But the threat that Hollywood could take its toys and go home is illusory. Every film that Hollywood releases is already available for those who really want to pirate a copy. Huge volumes of music are sold by iTunes, Amazon, Magnatune, and dozens of other sites without the need for DRM. Streaming services like Netflix and Spotify have succeeded because they are more convenient than piratical alternatives, not because DRM does anything to enhance their economics. The only logically coherent reason for Hollywood to demand DRM is that the movie studios want veto controls over how mainstream technologies are designed. Movie studios have used DRM to enforce arbitrary restrictions on products, including preventing fast-forwarding and imposing regional playback controls, and have created complicated and expensive "compliance" regimes for compliant technology companies that give small consortia of media and big tech companies a veto right on innovation.
Discussion by Patrick Collins
I don't think this is really the issue at hand with net neutrality/QoS/etc. There are no technical hurdles to a good QoS implementation as far as I know, and I believe it's used successfully in internal infrastructure at many companies. It's more of a moral/philosophical argument that ISPs shouldn't be allowed to extort their customers.
Discussion by Richard Bennett
Quality of Service is not a question of "optimizing" the network for one and only one service; it's a matter of allowing the network to give each class of application treatment that is appropriate to the needs of the class. Its primary function is mediating the resource contention that arises between pairs of application classes that impose disparate patterns of load on the network, when those loads are not necessary to the end user's Quality of Experience. This whole section has nothing to do with either Bitcoin or copyright enforcement and adds nothing to the main argument. The paper would be more coherent and credible if the (essentially religious) misrepresentation of Quality of Service were removed.
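The per-class treatment Bennett describes can be sketched as a strict-priority link scheduler: when a latency-sensitive class and a bulk class contend for the same link, the latency-sensitive class is served first. This is a toy illustration (the class names and API are invented for the sketch), not a real queueing discipline as deployed in routers.

```python
from collections import deque


class ClassBasedScheduler:
    """Toy strict-priority scheduler for one outgoing link:
    interactive traffic (e.g. VoIP) is always served before bulk
    transfers whenever both classes have packets queued."""

    PRIORITY = ["interactive", "bulk"]  # highest priority first

    def __init__(self):
        self.queues = {cls: deque() for cls in self.PRIORITY}

    def enqueue(self, packet: str, traffic_class: str) -> None:
        self.queues[traffic_class].append(packet)

    def dequeue(self):
        # Serve the highest-priority non-empty queue; bulk traffic
        # only gets the link when no interactive packet is waiting.
        for cls in self.PRIORITY:
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None  # link idle
```

Real schedulers use weighted variants rather than strict priority, precisely to keep bulk classes from starving, but the sketch shows the core idea: disparate load patterns get class-appropriate treatment instead of a single first-come-first-served queue.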
Discussion by Richard Bennett
The claim that adding bandwidth cures all ills in "the network" is an anachronism left over from the time when "the network" consisted solely of wired data links that could be arbitrarily upgraded at little cost. While it has never been the correct solution to all forms of short-term congestion, it's laughably out of touch with the reality of the wireless edge that currently dominates the Internet.
Discussion by Fernando Gutierrez
It is interesting, and worrisome, how software is creating new types of limited property. We don't really own our Kindle or iTunes libraries. We can't disassemble our gadgets. One could argue that software has created new ad-hoc rights, but the truth is most people don't agree to them, or even know about them.
Discussion by Fernando Gutierrez
Analysis of the blockchain is an attack on both privacy and fungibility, the latter being a basic property of money. By analyzing the blockchain, companies like Elliptic or Coinalytics mark funds that have had any relation with "illegal activities", so that their clients can take whatever measures AML and KYC compliance require. This makes individuals unsafe because, deprived of the same tools these companies have, they can't know whether their money is good or not. It can eventually lead to multiple classes of coins, because not every jurisdiction will make the same judgments about the legality of the activities funds have been involved with.
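The marking Gutierrez describes can be sketched as simple taint propagation over a transaction graph: any address that ever receives funds traceable to a flagged source inherits the flag. The function and data below are hypothetical illustrations of the idea, not how Elliptic or Coinalytics actually score funds (real systems weigh amounts, hops, and heuristics rather than propagating a binary flag).

```python
def propagate_taint(transactions, tainted_sources):
    """Mark every address reachable via transfers from a set of
    addresses flagged for 'illegal activity'.

    transactions    -- list of (sender, receiver) address pairs
    tainted_sources -- initial set of flagged addresses
    Returns the full set of tainted addresses.
    """
    tainted = set(tainted_sources)
    changed = True
    while changed:  # iterate until no new address gets marked
        changed = False
        for sender, receiver in transactions:
            if sender in tainted and receiver not in tainted:
                tainted.add(receiver)
                changed = True
    return tainted
```

The sketch makes the fungibility problem concrete: one hop of contact with a flagged address is enough to mark your coins, and without access to the flag set you cannot tell in advance whether the money you accept is "good".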