European Parliament overwhelmingly approved the first draft of the Digital Services Act to regulate big tech data collection and advertising
On January 20, 2022, the European Union took a major first step toward passing laws to transform how technology companies do business in the EU. Several steps remain before the laws become final, but in the initial vote, lawmakers from the 27-nation bloc overwhelmingly approved tighter controls.
The proposed Digital Services Act would, among other things, require major technology companies to aggressively police content and further limit advertising. For example, the law would require companies to remove content considered illegal in the country where it is viewed. This would include such things as Holocaust denials in Germany and racist postings in France. It would also allow Europeans to more easily opt out of targeted advertising and prohibit advertising targeted at children.
To quote the colorful warning statement of Christel Schaldemose, the center-left lawmaker from Denmark who led negotiations on the bill:
With the [Digital Services Act] we are going to take a stand against the Wild West the digital world has turned into, set the rules in the interests of consumers and users, not just of Big Tech companies and finally make the things that are illegal offline illegal online too.
This is a warning shot across the bow for high technology companies everywhere.
The debate between the European Parliament and the Council of the European Union on the final language is expected to take months. The law may serve as a model for the U.S., where Congress is also considering legislation. Greater control over digital practices worldwide seems inevitable, and tech companies would be wise to modify and amplify their compliance efforts accordingly; that will make tweaking a little easier down the road once legislation is final. It is not too hard to read the writing on the wall.
This is another example of the “Brussels effect”, well known in old-economy industries like chemicals and cars. Regulation spreads through market forces. Companies adopt the EU’s rules as the price of participating in the huge EU market, and then impose them across their global businesses to minimise the cost of running separate compliance regimes. The rules are sometimes codified by foreign governments or through international organisations, but not necessarily.
We’ll need to see how it develops. The GDPR was a bumpy ride and much of its original power was gutted during the negotiation stages. The European Commission aims for the DSA to enter into force by 2023 but even the MEPs involved with the Commission and Council negotiations realize this is nigh impossible.
And like any legislation, as you’ve noted, a lot could change between now and the bill ultimately becoming law. It’s no wonder, then, that the tech industry has gone into overdrive bending lawmakers’ ears: lobbyists have so far reported 613 meetings on the Digital Services Act with members of European Parliament, with the top companies represented being Google (23), Meta (16), Amazon (15) and Microsoft (12).
As with most regulations, there are things in the DSA that I could take or leave. Giving people more control over how they are tracked online is a good thing, assuming it’s done in a way that normal people find easy to use (right now the draft is pretty obtuse). Making platforms liable for what they distribute is more palatable to me when it comes to marketplaces (incentivizing the removal of counterfeit goods, stolen antiquities, weapons and drugs), and less so when it comes to social networks (where it is typically hostile to free expression).
But there are issues.
1. The DSA doesn’t take into consideration a lot of existing EU legislation (e.g., the ePrivacy Directive and the General Data Protection Regulation), so there are conflicts. On the liability regime and the use of personal data for direct marketing (targeted advertising), for example, platforms will be “ok” under one act but liable under another, so that needs to get fixed.
2. Platforms can still use automated tools, but they are not allowed to scan and monitor every piece of content shared online; there must be human oversight. Yet the requirements as written give platforms the impossible task of identifying illegal content in real time, at speeds no human moderator could manage – with stiff penalties for guessing wrong. Inevitably, this means more automated filtering – something the platforms often boast about in public, even as their top engineers privately send memos to their bosses saying that these systems don’t work at all. So if I were a large platform, I’d want to over-block – removing content according to the fast-paced, blunt determinations of an algorithm – and then just run the wrongfully silenced through the appeal process, a review that, like the algorithm, has to date been opaque and arbitrary. That review will also be slow: speech will be removed in an instant, but only reinstated after days, or weeks – or maybe 2.5 years.
3. At least the largest platforms will be able to try to comply with the DSA, because they have the resources. It’s far worse for small services run by startups, co-operatives, nonprofits and other organizations that want to support, not exploit, their users. These businesses (“micro-enterprises” in EU jargon) will not be able to operate in Europe at all if they can’t raise the cash to pay for legal representatives and filtering tools.
So what do you have? The DSA sets up rules that allow a few American tech giants to control huge swaths of Europeans’ online speech, because they are the only ones with the means to do so. Within these American-run walled gardens, algorithms will monitor speech and delete it without warning, and without regard to whether the speakers are bullies engaged in harassment – or survivors of bullying describing how they were harassed.
The DSA isn’t as clear-cut as pundits would have you believe. The cut and thrust of these proposals is about letting people use the internet – practically vital in 2022 – without having to concede that the inevitable trade-off is big tech firms monitoring your behavior, building profiles, and using them to monetise you. The only real way of avoiding that at the moment is to opt out of the 21st century entirely and use a flip phone and snail mail to communicate.
I really need to see more detail on what is being proposed on the content side. For instance, nowhere is “illegal content” defined, though it will presumably be spelled out if all this becomes law. More slippery, of course, is what is meant by “misinformation”, which is hard to pin down conclusively and depends on who you ask. Hence, in the Parliament’s internal discussions and negotiations before last week’s vote, those two terms were subject to rancorous debate.
All in all, this represents the latest broadside from the EU against the general influence of Big Tech. Should it become law in some fashion, it will carry the weight of all EU member states behind it and will no doubt have a significant impact on how the major tech platforms operate, especially with regard to privacy.