Wikimedia: Can the Digital Society Still Be Saved?

/** At the Wikimedia Salon I was on the panel for the letter "V" as in "Vertrauen" (trust). I have also written a great deal recently about Wikipedia and its crisis, as well as about the history and future of digitization. All of this feeds into this longer reflection on the future of the digital society, in which I also try to find some hope. I hope it worked. **/

We talk about the digital society as if we knew what that is. Or as if the digital society were simply the successor to the analog society: the same society, just with online banking, Amazon instead of the shopping mall, and a Facebook group instead of the regulars' table.
I consider it an advantage that in Germany we discuss "Digitalisierung" (digitization) as a societal process, too. Unfortunately, this quickly runs into the trap of believing it is enough to take existing structures and simply rethink them digitally. Talk of a "digital society" seems to me to fall into exactly this trap, because it overlooks the fact that society – whether as an abstract entity or a concrete structure – is always also a product of its media conditions. Marshall McLuhan's teacher and mentor, Harold Innis, understood this long ago. In his 1950 book "Empire and Communications" he shows how even the ancient empires and high cultures were enabled and structured by the invention of writing. That the societies of oral culture, written culture, and book culture differ fundamentally has for years been the starting point of Dirk Baecker's reflections on the "next society", as he also calls the "digital society".

[Continue reading at Wikimedia >>]



The History of Digitization in Five Phases

There is no English word that quite matches the German "Digitalisierung". Depending on context, English speakers talk about "technology", "the internet", "artificial intelligence" or "innovation", each addressing different things and different debates. In Germany, by contrast, the term has taken hold above all in politics and serves as a bracket for all the structural adjustment processes – political, economic, cultural – that the advance of digital technology into our everyday lives entails.

It is an advantage of the German language that these very heterogeneous processes can be viewed as one big whole. But it also has drawbacks, since the sheer, almost boundless size of the phenomenon can feel intimidating, even crushing.

One thing is clear: digitization is overturning the structures of society. But to clarify how that happens – to get an overview – one first has to break the monolith of "digitization" apart again. Not into its myriad components (that would make things confusing again), but more systematically. For example, historically.

What I want to attempt here is to build a "narratological ramp". I divide the history of digitization into four phases that run consecutively from the 1980s to today. The idea is to generate enough momentum while narrating these four phases to shoot some way into the future over the ramp – into a fifth phase – that is, to venture a plausible speculation.

The phases are as follows:

  1. The Early Network Utopias (1985 – 1995)
  2. Remediation (1995 – 2005)
  3. Loss of Control (2005 – 2015)
  4. The New Game (2015 – 2025)
  5. Restructuring (2025 – 2035)

So we have the ramp; now we need the acceleration. The engine is the "historical dialectic" underlying the phases. The assumption is that every movement already carries its countermovement within itself. Each new phase is thus the synthesis of the movement (thesis) and countermovement (antithesis) of the previous phase.

Phase One: The Early Network Utopias (1985 – 1995)

Of course, the first phase I want to begin with can itself be traced back to such a historical dialectic, even if I will only sketch this pre-phase briefly: in terms of digitization, the 1970s were shaped by … well, actually they were not yet shaped by digitization at all. But computers did already exist. They were still the size of oversized refrigerators and stood mainly in universities, military facilities, and large corporations. IBM held something like a monopoly on computing, and its employees walked around in suits and ties, since they dealt only with business and government customers. In those days, only people in specialized professions came into contact with computers; most people knew them only from stories.

The first phase was a countermovement to this state of affairs insofar as the personal computer (PC) revolution that broke through in the early-to-mid 1980s explicitly understood itself as an attack on the dominance of the gray men with their mainframes. Indeed, the legend of Steve Jobs, Steve Wozniak, and the founding of Apple tells exactly this story: the hero's journey of two underdogs who put the fear of God into big, bad IBM. It was a time of departure, of the democratization of computing. With the Apple Macintosh, according to the famous ad, "1984 won't be like '1984'".1 Personal computing turned the computer from an uncanny, inaccessible war technology into an emancipation tool for the modern citizen. That, at least, was the self-image of the movement at the time.

In the 1980s, early online services such as Usenet, AOL, and CompuServe also began connecting PCs to one another. In early network communities like "The WELL", the early adopters met and developed bold theses about the networked future of society.2 By the mid-90s, at the end of this era, the internet itself finally arrived in many households, carried by the newly invented World Wide Web.

This moment of departure is represented not only by the hacker scene that crystallized around the emergence of the PC, but also by the many other societal discourses that gratefully took up the "network" as a new structural metaphor. Gilles Deleuze and Félix Guattari showed that culture can be thought of decentrally and non-hierarchically by analogy with the network-like root system of the rhizome.3 At the Centre de Sociologie de l'Innovation in Paris, Bruno Latour and others worked on a method of description for science studies that could represent webs of interaction in which the human being is just one acting instance among many. This "actor-network theory" made it possible to transcend macro- and microperspectives within the network schema and thus to map and examine complex interrelations.4 Finally, Manuel Castells captured the society changing under the influence of networked communication by diagnosing the "network society": one that flattens hierarchies and makes the boundaries of firms and institutions operationally permeable.5

Out of the milieu around The WELL grew not only the influential Wired magazine but also the Electronic Frontier Foundation, whose co-founder John Perry Barlow in 1996 told the assembled heads of state at Davos that their "giants of flesh and steel" had no say in "cyberspace".6

It was the time when people thought this "new home of Mind" was a place with its own law, a utopian space in which worldly identity no longer mattered. In the anonymity of the net, academic degrees, origin, skin color, religion, gender, and sexuality would no longer count; word would be weighed against word.7 Decentrality, freedom from hierarchy, openness/connectivity, and total freedom of communication were the ideological cornerstones on which a new and better society was to be built.

Of course, not everything was as rosy as the net utopians imagined. The countermovement formed among those who saw in the internet not a post-identitarian utopian space but above all a new market. And so – slowly at first, then ever more clearly toward the end – the "New Economy" grew up in the shadow of the net discourses. Commercialization forced cyberspace to be tied back to the physical world. To do e-commerce, you need (civic) identities after all, and everything attached to them. In the course of commercialization, the net adapted more and more to the requirements of the real world, ushering in the next paradigm.

Phase Two: Remediation (1995 – 2005)

Remediation means the representation of one medium within another medium. That the medium is the message was something Marshall McLuhan had already established with the first electronic media, radio and television.8 And the internet, too, would initially set about imitating the existing media – and here and there already rendering them obsolete.

While the New Economy was still in its infancy, correspondence was the first thing to digitize. In 1995 people still received far more physical mail than e-mail, but that reversed very quickly. The web grew and grew and became ever more confusing. Yahoo! therefore claimed the role of the new medium's digital catalog, and around 2000 personal websites turned into blogs – a kind of personal newspaper on the internet.

When, around the turn of the millennium, the dreams of the New Economy were shattered for the time being and the many thousands of startups that had set out to make the new promised land economically arable were swallowed up, the services that survived were above all those in direct competition with the analog world. Perhaps the fact that they had a counterpart in the real world legitimized their efforts.

But after the crash, remediation continued: with YouTube and iTunes, television and the record collection went digital, with Skype telephony, and with Amazon even retail.

Toward the end of the remediation phase, the countermovement became visible. New media emerged that actually deserve the name: media that did not try to replace their analog counterparts, but whose very structure only became possible through the internet. The rise of search engines – Google above all – social bookmarking services like del.icio.us, and photo platforms with tagging and sharing features like Flickr offered a completely new way of working with digital objects, sharing them, passing them on, and communicating about them. And of course this is where the social networks emerged – Friendster, Myspace, and finally Facebook – and conquered users' time online. "Web 2.0" was the buzzword that in 2005 declared the internet's remediation phase over and proclaimed a new, social net. Digitization, in a sense, came into its own – and thereby also lost its harmlessness.9

Phase Three: Loss of Control (2005 – 2015)

Strictly speaking, the paradigm of the loss of control10 was ushered in much earlier than 2005, but when Napster saw the light of day in 1999, it was not yet clear how prophetically it would anticipate the course of the net.11 What the music industry went through with file sharing soon awaited the film industry, then the nation states, and finally all of us. Yet the loss of control over flows of data and information only really gathered speed from the mid-2000s on. One catalyst, of course, was social media, as Web 2.0 soon came to be called. Suddenly people began uploading all kinds of data to the internet, even the most private. When, from 2007 on, the smartphone – a pocket computer equipped with all manner of sensors and a permanent internet connection – tied us to the net, the "Internet of Things" networked our homes and cities, and all this data ended up in "the cloud" – that is, on some computers somewhere on the internet – nothing stood in the way of ubiquitous loss of control.

The loss-of-control phase is the era of the Wikileaks revelations, which left banks, governments, parties, and other apparatuses of power standing naked. It is the era of big data, in which unsuspected information is distilled from vast existing masses of data. It is, finally, the era of Edward Snowden, who stripped the intelligence agencies bare – only to show that we had all long since been naked. After Snowden came the Shadow Brokers, a hacker group that exposed the NSA's most secret hacking tools, while Wikileaks did the same with the CIA's. And it wasn't just the intelligence agencies: Pentagon Papers, cable leaks, Stratfor leaks, Panama Papers, Swiss Leaks, Luxembourg Leaks, Syria Files, Offshore Leaks, Football Leaks … leaking became a national sport, and that's without even counting the countless hacks.

It became clear that no one would be spared – that all of us, people, companies, governments, and institutions, had lost control.

At the same time, this phase already shows loss-of-control phenomena of the second order: the Occupy Wall Street protests, the Arab Spring, protests in Spain and Tel Aviv. The world seemed out of joint, and digitization played no small part in it. Just as data slips out of control, digital tools enabled a new form of spontaneous organization of people and information, which erupted worldwide in "smart mobs" that put governments under pressure and often even brought them down.

But the countermovement also became apparent. New structures of control settled over the internet. The Napster shock that opened the loss-of-control phase was eventually contained by new, controllable distribution structures such as those of iTunes and later Spotify. Google, meanwhile, brought order to the chaos of the web and grew into a global corporation. Facebook – please don't laugh – brought privacy to the internet as a "privacy setting".12 The likeable little Web 2.0 services became powerful platforms that, with an ordering hand, create islands of control in the sea of the loss of control. The rise of the platforms as new apparatuses of control – but also as uncanny apparatuses of power – ushers in the next phase of digitization.

Phase Four: The New Game (2015 – 2025)

I have named the current phase after my book "Das Neue Spiel" ("The New Game")13, because even while writing it in 2014 I had the feeling that something new was beginning that was no longer determined (only) by the loss-of-control paradigm. The reason is that certain individuals, companies, and institutions had already made structural adaptations to the new situation – and the power of the platforms is only the most striking manifestation of this.

The success of the platform paradigm rests on the one hand on "control as a product" and on the other on the network effect, which makes networks more useful the more people participate in them.14 GAFA (Google, Apple, Facebook, and Amazon) are without doubt the dominant players of our time, but with Airbnb, Uber, Foodora, Deliveroo, and co., the platform principle has long since broken out of the confines of the purely online world and is reshaping the world as a whole. All this has little left in common with the decentralized, anti-hierarchical net utopias of the first phase.
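The network effect cited here can be made concrete with Metcalfe's well-known heuristic – an illustrative approximation not invoked by the text itself: the number of possible connections in a network, and thus very roughly its potential utility, grows quadratically with the number of participants.

```python
def possible_links(n: int) -> int:
    """Number of distinct pairwise connections among n participants."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the possible connections,
# which is one way to explain why the platform that pulls ahead
# tends to keep pulling ahead.
for n in (10, 20, 40):
    print(n, possible_links(n))
```

Quadratic growth in utility against roughly linear growth in costs is a common shorthand for why platform markets tip toward a single winner.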

But it is not only about the platforms. This phase is fundamentally shaped by the fact that individual people and institutions have seen through the dynamics of the loss of control and developed new strategies for achieving their goals in this world.15 If you have no control over the data streams, for example, you can no longer prevent information from reaching the public; censorship can only be carried out with enormous effort. But you can discredit the institutions of information distribution, such as the mass media, by constantly spreading false reports and calling real news "fake news" until no one knows anymore what is true and what is false. No one has understood this better than Vladimir Putin, who has put a whole new form of informational warfare to work through massive disinformation campaigns. In a time without privacy, it is not the irreproachable politician who prevails, but the one whose reputation is already so ruined that scandals can no longer touch him. Donald Trump is "antifragile" toward the public: the more scandals and criticism he attracts, the stronger he becomes.16

The strategies in the New Game differ from those of the old one, and whoever applies them can reap unexpected gains. The loss of control is thus no longer quite a loss of control – at least not for everyone. But the more people yank at the new levers, the greater the chaos they cause.

And here I already see the countermovement to the prevailing paradigm. As the year 2016 showed, digitization has still other effects on society. Both the presidential election in the USA and the Brexit referendum in the United Kingdom point to developments that closely resemble the "second-order loss of control" we already saw at work in the Occupy movement and the Arab Spring, but with a much more clearly discernible structure. The turmoil has stabilized. While in the loss-of-control phase the smart mobs whirled up world history only to scatter quickly in all directions, in the New Game clearly recognizable lines of demarcation are breaking open that run across all previous political spectra. Just as Trump is no typical Republican, the Brexit question cannot be settled along established party lines. And while the AfD fishes for votes from all the German parties, those parties struggle to position themselves clearly on the refugee question. These new lines of demarcation, moreover, appear insurmountable and irreconcilable. Across the established parties, political tribes have formed, as it were, that no longer see each other as representatives of different milieus' interests – and thus as political opponents – but as enemies of their own identity.

This "digital tribalism" is also the driver behind fake news and online harassment campaigns; it is fed by a reawakened psychological predisposition of human beings that can unfold unhindered on the internet.17 Tribalism is, incidentally, also the point of leverage for Russian hackers and other attempts at manipulation. Digital tribalism also lets us study the powerlessness of the platforms that only recently seemed all-powerful, and which now face it like helpless sorcerer's apprentices. Tribalism is the countertrend to the New Game and, as a "second-order loss of control", cannot be contained with the control strategies developed so far. It will usher in the new paradigm of the next phase of digitization.

Phase Five: Restructuring (2025 – 2035)

At this point we leave the ramp and shoot along a ballistic trajectory into speculation. We do not know how the struggle of platformization vs. tribalization will end, what twists it will still take, and which institutions will be damaged along the way. But I assume that tribalization cannot be contained for now and will thus substantially shape the following paradigm – that there will be a second-order loss of control that will shake society even more thoroughly than the first-order one did. This scenario also suggests itself because history has produced analogous phenomena in comparable situations. Put simply: a new medium appears and shifts the boundary of what can practically be communicated, which causes social unrest until new institutions and new ways of thinking and behaving are established that make a new mode of living together possible.18 It is a restructuring of society. Such a restructuring is what I predict for the period from 2025 to 2035.

The printing press illustrates this well. Like the internet today, the printing press fundamentally changed society. Today we look back and mostly judge that change positively: the printing press brought general literacy, a democratization and multiplication of knowledge. If we associate one cultural-historical phenomenon with the printing press, it is the Enlightenment. That is not wrong, but it omits the fact that between the invention of printing and the Enlightenment lay roughly 250 years of chaos, war, and destruction. Somewhat simplified, one can draw a linear chain of effects from the printing press through the Reformation and the peasant wars straight to the Thirty Years' War – a chain that, from a distance, looks like the restructuring of society just sketched, and that only in its aftermath made the Enlightenment possible.19

If we take the printing press as an analogy and embed the preceding reflections on digitization in this process, the following picture emerges: the old media system, our ideas of the public sphere and societal discourse, representative democracy, and much more were conceived at a time when only a small number of people could distribute a small amount of information across a small distance. This system now collides with overwhelming quantities of globally uncontrolled data streams and with an unprecedented capacity to organize people and information. It is only consistent that power structures are thereby radically called into question without it being clear yet what will replace them. That is the point where we are now.

The chaos triggered by the invention of the printing press above all called the rule of the Catholic Church into question. With the Reformation, several rival concepts suddenly stood beside it, and in the bloody conflict this provoked, a new paradigm of living together was created. The sovereign, bureaucratic state had already developed in France under Louis XIV and became the model of European statehood after the Peace of Westphalia. But peace became possible because this new instrument of rule could be secularized – that is, it did not have to be tied one-sidedly to a particular form of faith. The sovereign, secular, bureaucratic state allowed a new attempt at peaceful coexistence and ultimately became the condition of possibility for Enlightenment and democracy.

A new institution that is powerful enough to steer the new medium's various losses of control back into peaceful channels, yet at the same time has a legitimacy similar to that of the nation state, could likewise stand at the end of our restructuring phase. What this construct will look like, I can only guess. But my tip would be to watch the development of the Chinese state model closely, which tries to reconcile state sovereignty with the new platform paradigm.20 The EU, too, could provide interesting impulses here, should it ever wake from its nation-state torpor. Perhaps we also need to think much smaller again and look at the civic grassroots organizations in Athens and Barcelona or in the Kurdish-controlled areas of Iraq and Syria.21 In any case, I am sure that somewhere out there the foundation stones of the great restructuring are already being laid – because ever since William Gibson I have known: the future is already here, it is just unevenly distributed.

  1. The iconic Apple commercial, aired during the 1984 Super Bowl, is without question part of the pop-cultural heritage of digitization. https://www.youtube.com/watch?v=_VvW_uWSbX0
  2. Cf. Fred Turner: From Counterculture to Cyberculture.
  3. Félix Guattari, Gilles Deleuze: A Thousand Plateaus – Capitalism and Schizophrenia.
  4. Bruno Latour: Reassembling the Social: An Introduction to Actor-Network-Theory, Oxford.
  5. Manuel Castells: The Information Age: Economy, Society and Culture.
  6. John Perry Barlow: A Declaration of the Independence of Cyberspace, https://www.eff.org/cyberspace-independence
  7. Donna Haraway: A Cyborg Manifesto, https://web.archive.org/web/20120214194015/http://www.stanford.edu/dept/HPS/Haraway/CyborgManifesto.html.
  8. Marshall McLuhan: Understanding Media: The Extensions of Man.
  9. Tim O’Reilly: What Is Web 2.0 – Design Patterns and Business Models for the Next Generation of Software, https://www.oreilly.com/pub/a/web2/archive/what-is-web-20.html.
  10. I introduced the term "Kontrollverlust" (loss of control) into the digitization debate in 2010. I still consider it a valid level of description for our present, but as a paradigm it was actually hegemonic only until about 2015. Cf. Michael Seemann: Glossar – Kontrollverlust, http://www.ctrl-verlust.net/glossar/kontrollverlust/.
  11. Tom Barnes: 16 Years Ago Today, Napster Changed Music as We Knew It, https://mic.com/articles/119734/16-years-ago-today-napster-changed-music-as-we-knew-it#.zhSidyIr4.
  12. Despite all the naysaying, it must be noted that before Facebook, privacy did not exist on the internet. There was only the global public or (unencrypted) one-to-one communication. A pragmatic and arbitrarily granular limitation of one's audience was one of Facebook's innovations and part of its recipe for success. Cf. Michael Seemann: Plattformprivacy, http://www.ctrl-verlust.net/plattformprivacy.
  13. Michael Seemann: Das Neue Spiel – Strategien für die Welt nach dem digitalen Kontrollverlust.
  14. Cf. Wikipedia: Netzwerkeffekt, https://de.wikipedia.org/wiki/Netzwerkeffekt.
  15. I presented a ten-point list of strategies in my book. Cf. Das Neue Spiel, p. 153ff.
  16. On the concept of "antifragility" see Nassim Nicholas Taleb: Antifragile: Things That Gain from Disorder, and for the application of his theory to the loss of control, Das Neue Spiel, p. 162.
  17. Cf. Michael Seemann: Digitaler Tribalismus und Fake News, http://www.ctrl-verlust.net/digitaler-tribalismus-und-fake-news/. For a general analysis of political tribalism – though from a strongly US perspective – see Amy Chua: Political Tribes: Group Instinct and the Fate of Nations.
  18. Dirk Baecker speaks in this context of new media striking society as a "catastrophe". Cf. Dirk Baecker: Studien zur nächsten Gesellschaft.
  19. Clay Shirky draws this comparison in his TED Talk and predicted as early as 2005 that the internet would bring about 50 years of chaos before things get better. See Clay Shirky: Institutions versus Collaboration, https://www.ted.com/talks/clay_shirky_on_institutions_versus_collaboration?language=en.
  20. One need not be deterred by the totalitarian appetite for control of the Chinese construct. The modern republic, as noted, was itself made possible only by the absolutist appetite for control of a Sun King, whose bureaucratic state apparatus could be taken over by the citizens 150 years later.
  21. See for example Joanna Theodorou: What Grassroots Groups Can Teach Us About Smart Aid, https://www.newsdeeply.com/refugees/community/2018/02/21/what-grassroots-groups-can-teach-us-about-smart-aid or
    Owen Jones: Kurdish Afrin is democratic and LGBT-friendly. Turkey is crushing it with Britain’s help, https://www.theguardian.com/commentisfree/2018/mar/16/turkey-democracy-kurdish-afrin-britain-syria-arming or
    Barcelona, the capital of a new state, https://ajuntament.barcelona.cat/barcelonallibres/sites/default/files/publicacions_fitxers/llibreblancang-2150.pdf.


Five Unsettling Questions for Digital Capitalism

In fact, I have been thinking about this topic for years. My book largely left out the essential economic interconnections around the loss of control and platform capitalism. That already struck me as a shortcoming back then, but … I wasn't ready yet.

The theory behind this talk developed mainly out of my engagement with Marx, value, and price; some of it I have already blogged about here. For reference:

1. It all began with Marx's theory of value, which I reject – and here I explain why.
2. But since I also found the counter-model of classical economics (marginal utility theory) unconvincing, I kept at it and thought about how value creation is accomplished via platforms in times of the loss of control. From this arose the question of the obsolescence of the property order in the digital realm. See: Die Gewalt der Plattform und der Preis des Postkapitalismus.
3. A third piece of the puzzle came with the Uber study mentioned in the talk and the insight that price discrimination actually amounts to a radical departure from the market principle: Das Ende der Konsumentenrente oder Wert und Preis III.

Further impulses came above all from the book „Capitalism without Capital“ by Jonathan Haskel and Stian Westlake, my critical engagement with Robert J. Gordon's „The Rise and Fall of American Growth“, and Stefan Heidenreich's „Geld“. (And of course all the other books mentioned in the talk.)

I will certainly write up the talk properly at some point. Perhaps a taker will be found?



The Central Fate of the Blockchain (In Case There is a Future at All)

/********
Recently an essay of mine was published in the German issue of Technology Review (TR 10/2018), in which I examine the history of the internet in order to predict the fate of blockchain technology, especially regarding its promise of decentralization. This is a translated and also extended version of the German text.
********/

„The internet interprets censorship as damage and routes around it.“ The sentence became the battle cry of early internet activists in the 1990s. It was coined by John Gilmore, co-founder of the digital civil rights organization Electronic Frontier Foundation (EFF), in a 1993 interview with Time Magazine.1 It summed up the internet’s promise of technological freedom in a nutshell: „If you put the scissors in one place on the internet, the flow of information will simply bypass that place and still arrive unhindered at the recipient.“ This resistance to censorship has always been an essential part of the internet’s promise of freedom and is based on its „decentralization“.

Looking back, one can argue about whether the internet ever delivered on this promise. Today, Google, Amazon, and Facebook laugh at the dreams of a hierarchy-free internet. The internet has certainly shifted the balance of power here and there, but it has also concentrated and monopolized power in ways unimaginable at the time. And while some people in China still manage to smuggle unauthorized bits into the country, the government’s censorship efforts through the „Chinese firewall“ can certainly be regarded as successful.

But the same promise of freedom through decentralization is now part of the blockchain discourse. Like the internet back then, the blockchain is now supposed to make censorship efforts and state influence impossible. Like the internet then, the blockchain today is supposed to dismantle hierarchies, strengthen the periphery, give a voice to the weak and give us all our freedom; unregulated, wild and on a level playing field. With the blockchain it should be possible to operate online services „serverless“ – i.e. without a centralized computer infrastructure – because the data is distributed across all the computers of the participants. This would make it possible – blockchain enthusiasts believe – to place the digital infrastructure and its control in the hands of the users. With the decentralizing power of the blockchain, the original promise of the internet would finally be within reach. But it is precisely the history of the internet that offers some objections to these myths.

The Birth of the Internet from the Hardware that was Available

However, the origin of the internet’s decentralization had nothing to do with any idea of freedom in the first place; it resulted from plain necessity. Paul Baran, who is regarded as one of the first pioneers of today’s internet, was a member of the RAND Corporation, a think tank close to the Pentagon. In his collection of essays „On Distributed Communications“2 from the early 1960s, he mentions two major reasons why a decentralized computer network should be built. The first was a military necessity: at the height of the Cold War, the Pentagon was interested in transferring data from the numerous radar stations for airspace surveillance (the SAGE project) quickly and securely to the command center. The network was supposed to function even when individual nodes were destroyed. The second reason was economic in nature. At a time when computers were as rare as wonders of the world and almost as expensive, a decentralized computer network offered the opportunity to better utilize the extremely expensive computer hardware that already existed.

Baran’s central idea for solving these problems was packet switching: the concept of splitting data into individual packets and sending them individually from station to station until they arrive at their destination. Instead of building one large supercomputer to control all connections, many less powerful computers could share the work of transmitting the data.
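The packet idea can be sketched in a few lines (a toy illustration of the principle, not a real network protocol): the sender splits a message into numbered packets, the packets may arrive in any order after taking different routes, and the receiver reassembles them.

```python
import random

def packetize(message: bytes, size: int = 8):
    """Split a message into numbered packets (toy packet switching)."""
    return [(seq, message[pos:pos + size])
            for seq, pos in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """The receiver sorts the packets by sequence number and joins them."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"packets travel independently from node to node"
packets = packetize(msg)
random.shuffle(packets)   # simulate packets arriving via different routes
assert reassemble(packets) == msg
```

No single station needs an overview of the whole transmission; order is restored only at the receiving end.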

When the ARPANET, the first precursor of the internet, was put into operation in 1969, the decentralized approach actually allowed the processing load and the costs of communication to be distributed among many computers and institutions, which in turn were distributed over the entire territory of the USA. In addition, the network was easy to expand.

Ten years later, when Robert Kahn and Vint Cerf laid the cornerstone of today’s internet with the development of the TCP/IP protocol suite, they adopted packet switching and also introduced the end-to-end paradigm. The intelligence of the transmission – the splitting and sending of the data packets as well as the control over incoming packets – should lie solely at the ends of the transmission, i.e. with the sender and the receiver. All intermediate stations should only be „stupid“ routers that simply throw the data packets to the next node without getting an overview of what is actually happening within the network. This design element, later glorified as particularly liberating because it left the control of communication to the users’ computers, also had a concrete technical purpose. In the course of the 70s, a zoo of different network protocols had evolved. Cerf and Kahn therefore developed a standard that could act as an intermediary between different protocols. As long as the ends spoke TCP/IP, the transmission could run over all possible network standards in between. This way, TCP/IP could actually connect heterogeneous networks with each other. The result was a network between networks: an INTER-NET.

Free as in Free Scale

In 1982, just before the ARPANET was connected to the internet, it had about 100 nodes, and the number of connections per node already varied greatly. As more and more universities and institutions connected to the network, individual nodes became heavily frequented hubs and some data lines became the early backbones. Over time, the network looked less and less like a fishing net, where all nodes have more or less the same number of connections; instead, the ratio of nodes to edges (connections) approached a power-law distribution. Roughly speaking, the top 20% of the nodes had 80% of the connections, and the remaining 20% of the connections were distributed in a „long tail“ across 80% of the nodes. The ARPANET came to resemble a network of roads, with its highways, main roads and smaller secondary roads – or a tree branching out into ever finer branches. In network topology, this is called a „scale-free network“: a network in which the node-edge ratio always remains the same – as with a Mandelbrot set – no matter from which zoom level one views the network.

Scale-freedom very often occurs in organically grown networks, because for a new participant in a network it makes sense to connect to the largest node possible. Behind this lies a hidden economy: fewer „hops“ are needed to reach one node from any other. Clumps, it turns out, are shortcuts. It became apparent that even a distributed approach to data transmission brings its own centralization tendencies. Finally, in 1986, NSFNET (National Science Foundation Network)3, the first backbone – a kind of main artery – between individual research institutions in the USA, was built into the still young internet and formally introduced a hierarchy of data lines.
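This clumping can be reproduced with a minimal preferential-attachment simulation (a hypothetical sketch of the mechanism, not a model of the actual ARPANET): every newcomer connects to an existing node with probability proportional to that node’s degree, and a small minority of nodes ends up holding most of the connections.

```python
import random
from collections import Counter

def grow_scale_free(n: int, seed: int = 42) -> Counter:
    """Grow a network by preferential attachment; return node degrees.
    Picking a random entry from the flattened edge list chooses an
    existing node with probability proportional to its degree."""
    random.seed(seed)
    endpoints = [0, 1]                       # one edge: node 0 -- node 1
    for new_node in range(2, n):
        target = random.choice(endpoints)    # degree-proportional pick
        endpoints += [new_node, target]
    return Counter(endpoints)                # appearances == degree

degrees = grow_scale_free(2000)
top_fifth = sorted(degrees.values(), reverse=True)[:len(degrees) // 5]
share = sum(top_fifth) / sum(degrees.values())
# the best-connected fifth of the nodes holds roughly half or more of
# all connections -- a power-law-like, "80/20"-shaped distribution
```

The exact share fluctuates with the random seed, but the shape of the distribution – a few heavy hubs, a long tail of barely connected nodes – emerges every time.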

Scale-free networks are both centralized and decentralized, because instead of one center there are many large and small centers. It helps to imagine decentralization as a spectrum. At one end, we have a network with a single central hub that regulates everything and is therefore incredibly efficient, because every connection from one point to another requires exactly one hop. At the other end of the spectrum would be the mesh network, where all nodes have the same number of connections, but a communication between two nodes may have to hop through hundreds of nodes in between. The scale-free network is thus a kind of compromise between decentralization and efficiency.

Such concentrations and clusters – by large internet providers such as AT&T and Telekom and international consortia such as Level 3 – also exist on the internet, and Google has already put its private second internet next to the public one. But even today, hundreds of thousands of small and large internet providers worldwide still serve billions of people on the basis of the common protocols, thus keeping the promise of decentralization at least at this level.

However, the first reason Paul Baran cites for decentralization – stability against failures due to military or other attacks – is only conditionally valid, precisely because of the internet’s scale-freedom. This, at least, is the result of theoretical studies conducted by network researchers such as Albert-László Barabási and others in Nature.4 According to the study, the network would remain stable even if up to 80 percent of the nodes collapsed at random. But if an „informed attacker“ were to target the central nodes deliberately, Barabási wrote, it would be relatively easy to switch off the entire internet. A prediction that has become considerably more explosive since the major DDoS attacks of 2016, which paralyzed Twitter, PayPal, Netflix and Spotify for several hours.5 Although the number of such attacks has fallen in the meantime, security experts are by no means giving the all-clear.
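This asymmetry between random failure and targeted attack can be felt in a toy simulation (a rough sketch of the effect Barabási describes, not a reproduction of his study): grow a small scale-free network, then compare the largest surviving component after removing random nodes versus removing the best-connected hubs.

```python
import random
from collections import defaultdict, deque

def build_network(n: int, seed: int = 7):
    """Scale-free network via preferential attachment, two links per newcomer."""
    random.seed(seed)
    edges = [(0, 1), (1, 2), (2, 0)]
    endpoints = [0, 1, 1, 2, 2, 0]
    for new in range(3, n):
        targets = [random.choice(endpoints) for _ in range(2)]
        for target in targets:
            edges.append((new, target))
            endpoints += [new, target]
    return edges

def giant_component(n, edges, removed):
    """Size of the largest connected component after removing some nodes (BFS)."""
    adj = defaultdict(list)
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in range(n):
        if start in removed or start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            v = queue.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        best = max(best, size)
    return best

n = 1000
edges = build_network(n)
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

random_failures = set(random.sample(range(n), 100))                  # 10% at random
targeted_attack = set(sorted(degree, key=degree.get, reverse=True)[:100])  # top 10% hubs

# removing 10% of nodes at random barely dents the network, while
# removing the 10% best-connected hubs fragments it far more severely
assert giant_component(n, edges, random_failures) > giant_component(n, edges, targeted_attack)
```

The hubs that make the network efficient are exactly what makes it vulnerable to an informed attacker.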

The Hidden Costs of Decentralization

So the internet has actually become much more centralized – but not at all levels equally. While a largely decentralized approach still prevails on the lower layers, the most concerning concentration has taken place above them. To visualize this, one has to imagine the internet as a stack in which the protocols layer on top of each other. At the lowest level there are protocols providing WiFi, Ethernet or DSL – and, back in the day, the ARPANET, which has since been switched off. These are the protocols that TCP and IP were able to connect with each other by sitting on top of them as a general standard. On top of TCP/IP lies the so-called application layer. This is where our everyday internet usage actually happens: e-mail, the WWW, but also the apps on our smartphones are part of this layer. And while decentralized approaches such as e-mail and the World Wide Web initially flourished on the application layer, it is precisely this layer that is dominated today by Google, Facebook, Amazon and other monopolistic, centralized platforms.

This concentration in the application layer is inevitable, because innovation can hardly take place on the underlying layers. Decentrally implemented protocols such as TCP/IP have the major disadvantage of being resistant to any form of further development due to their „path dependency“. Once a path has been taken, it can no longer be changed significantly, and any further development must build on the previous design decisions. You can see this effect in the transition of internet addresses from IP version 4 to IP version 6, which has been underway for 20 years now and still isn’t finished. Once a distributed approach has been poured into a protocol, it develops an unruly structural conservatism. You can’t just flip a switch somewhere to update the system. No, billions of people have to flip billions of switches. And in case of doubt they say: why should we? Why change a running system? As a result, actual innovation has been pushed upwards. Instead of equipping the network protocol with new features, the services were developed on top of it. That was certainly the idea, but it opened up a whole new space that, although based on decentralized structures, made new centralized services possible – and, in a sense, inevitable.

But why is the application layer dominated by large corporations today, when in the 1990s decentralized approaches like the WWW, e-mail and other protocols were initially predominant?

An answer to this is provided by „economies of scale“. In industrial society, this meant that enormous cost reductions occurred when a product was manufactured 100,000 times instead of 1,000 times. The same applies to the digital world: Amazon needs considerably fewer employees and consumes less power to operate one data center with 100,000 servers than 100 hosting providers need to keep their 1,000 servers each running. Add to this the fact that server-based services such as Facebook and Google can roll out innovations easily, while protocol-based platforms are always stuck in their current state due to their path dependency, and the dominance of centralized services is virtually self-evident.

Related to the scale effect is the network effect – a scale effect on the demand side – known in the networking world since the 90s as Metcalfe’s Law6. Robert Metcalfe, one of the engineers of the Ethernet standard, formulated that the value of a network increases proportionally to the square of its participants. The more participants a network has, the greater the benefit for each individual. A growing network thus becomes more and more attractive and, through this positive feedback loop, develops a pull effect on potential participants. In the end, everyone is on Facebook because everyone is on Facebook. But likewise, everyone has e-mail because everyone has e-mail, and Facebook and e-mail are based on TCP/IP because almost all internet services are based on TCP/IP. In other words: network effects work for decentralized platforms just as for centralized ones.
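The arithmetic behind Metcalfe’s formulation is simple: with n participants there are n·(n−1)/2 possible pairwise connections, so the potential value grows roughly with the square of the network’s size – this is the usual textbook reading of the law, sketched here:

```python
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n users -- the
    quantity Metcalfe's law takes as a proxy for a network's value."""
    return n * (n - 1) // 2

# 10 users   ->     45 possible connections
# 100 users  ->  4,950
# 1000 users -> 499,500
# doubling the user base roughly quadruples the potential value,
# which is the positive feedback loop behind the pull effect
ratio = metcalfe_value(2000) / metcalfe_value(1000)
```

This quadratic growth is why a network twice the size is far more than twice as attractive to join.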

However, this effect works against many decentralized platforms via a detour. In the early 2000s, Google showed how a central search index could make a decentralized network – the millions of websites on the WWW – actually useful. This was shortly before Facebook showed that it was possible to do without decentralized elements altogether by simply letting users create the content directly on the platform. Both Google and Facebook demonstrate that central data storage has a special advantage: it can be searched. And it is often searchability that makes the network effects really come to the fore. What good is it that your friends communicate on the same standard as you, if you can’t find them?

While the internet protocol works fine without central searchability – a router only has to know its routing table to find the next router – non-searchability, combined with the existence of a disproportionately large competitor, is the main obstacle to the growth of alternative, decentralized social networks. That’s why Diaspora, Status.net, Mastodon and all the other alternatives to Facebook and Twitter never really took off.

The lack of searchability is indeed one of the problems that blockchain technology has addressed with some success. Because all participants of the network can view and search all interactions in the network, network effects can unfold unhindered despite a lack of central data storage.

But this generates costs elsewhere. Not only are millions of parallel data stores needed instead of a single one for each process; these millions of data records also have to be reconciled to a common state again and again. This reconciliation problem is essential, because otherwise every participant could spend his or her Bitcoin or Ether several times – the so-called „double spending“. Solving this problem for the Bitcoin network alone devours the annual energy budget of Austria.7 And even if less energy-hungry consensus procedures are already being applied in other cryptocurrencies, any solution, no matter how sophisticated, will always be thousands of times more complex than a simple „write“ into a conventional, central database.
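How much more expensive this agreement is than a plain database write can be felt in a toy proof-of-work loop (a deliberately simplified sketch of Bitcoin’s mining puzzle, not its real implementation): where a central database performs a single write, a proof-of-work network burns through hash attempts until one of them happens to meet the difficulty target.

```python
import hashlib

def mine(block_data: bytes, difficulty: int):
    """Toy proof of work: find a nonce whose SHA-256 hash (together
    with the block data) starts with `difficulty` zero hex digits.
    Returns the nonce and the number of hash attempts it cost."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, nonce + 1
        nonce += 1

# a central database would need exactly one write for this record;
# here, each additional zero digit multiplies the expected number of
# hash attempts by 16 -- work whose only purpose is agreement
nonce, attempts = mine(b"alice pays bob 1 coin", difficulty=4)
```

Real difficulty targets are vastly higher, and every miner in the network races through this loop in parallel, which is where the energy budget goes.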

Meanwhile, the scale effects of clumping undermine the blockchain promise. Bitcoin Gold – a Bitcoin variant – has already experienced a 51% attack.8 This is an attack in which an attacker brings 51% of the computing power of the network under his or her control in order to write to the blockchain on his or her own authority – for example, stealing money by double spending. Back when Bitcoin started, this was a purely theoretical possibility. Today, when mining has become professionalized and computing power is concentrated among a few players, it has become a real possibility that some miners could join forces or rent additional computing capacity to carry out such an attack.

The structural conservatism of path dependency also makes blockchains difficult to develop further. A recent attempt to change Bitcoin in order to increase the block size from currently 1 megabyte to 1.8 megabytes failed.9 It would have dramatically increased the speed of transactions, which in the meantime had slowed to several days. But for a hard cut (fork) you have to have the majority of the community (at least 51% of the computing power) on board, and they often have their own agenda: protecting their possessions. Just as in analog capitalism, the established forces profit from the status quo and therefore oppose change.

For Bitcoin, Ethereum and many other cryptocurrencies, external services that enrich the protocols with extra functionality are already in development. Wallet services, for example, have taken to storing the huge blockchain data on central servers. The coin exchanges, where you can buy and trade Bitcoin, Ether and co., are popular – and therefore central points of attack, for hackers as well as for law enforcement. Ethereum applications (dApps) are distributed by design but are often managed through centralized websites. In other words: what has happened to all decentralized approaches is already happening here – new services move to higher layers and establish new centralities there.

The Historical Dialectic of Decentrality

It is far from obvious whether or when blockchain-based technologies will really have the disruptive impact on our centralized internet that they are said to have.10 Currently, most blockchains are still solutions looking for a problem. Their only unique selling point – decentralization – comes with a torrent of hidden costs, which have already proved prohibitive for similar approaches in the past.

However, important insights can be drawn from the history of the successful and less successful decentralized architectures of the internet. Decentralized approaches seem to work when infrastructure is geographically distributed anyway, as is the case with regional and national internet service providers. They work when the decentralized infrastructure doesn’t need to be significantly developed further, because innovation can move to higher layers. They also flourish when you manage to make them searchable, as Google did for the WWW and The Pirate Bay did for Bittorrent. They work when you can reduce the extra costs of decentralization enormously, or justify them with a huge extra benefit, as in the early days of the internet. It also helps immensely if what you build in a decentralized manner does not already exist as a centralized service. This is the only explanation for how a standard as inadequate as e-mail could still prevail – and last so long.

So let’s imagine that enough of these criteria have been met for a new, decentralized, protocol-based infrastructure based on blockchains to raise its head. Are we finally free then?

I doubt it. A historical dialectic can be observed in the history of decentralized architectures: due to their inherently structure-conservative path dependency, innovation shifts to higher layers, where new players can play out the strengths that centrality gives them.

Let’s imagine the following scenario. The Ethereum network produces its first truly meaningful and widespread killer app – the VisiCalc11 of the blockchain age. Let’s call it Woolit. Woolit is a dApp – a decentralized app – for buying, exchanging, managing and storing the 100 most important cryptocurrencies. It’s not just a wallet; it is connected to its own coin exchange, which makes dealing with all kinds of cryptocurrencies super easy.

Now this dApp needs a website – for advertising, for administering your account and for operating the coin exchange. The Woolit website is conventionally stored on a web server. The interface no longer writes to a database but to the Ethereum blockchain, which makes no visible difference in the user experience. The company also publishes apps for iPhone and Android, which can likewise be downloaded from the website. The blockchains of the respective cryptocoins are also stored on the central server, for the sake of simplicity and efficiency.

However, the popularity of the app only really goes through the roof when it introduces a feature that processes transactions among its users in milliseconds and makes them completely free of charge. This works via an internal escrow system that executes the transactions in parallel on the server-side database and guarantees the value transfer until the actual blockchain transfer is completed. The escrow system can suddenly also be used to limit fraud, as transactions can be recalled automatically: the Woolit wallet instructs the fraudulent party to return the money. If such an intervention does not suit you, you can give up your Woolit and switch to another wallet. The Lightning Network12, which has been under construction for several years and is supposed to provide similar functionality via a decentralized architecture, is still not finished at this point and therefore has nothing to counter Woolit’s fast and technically pragmatic solution. Woolit is now as simple, convenient and secure as any other payment app on the market and, for the first time, makes handling all major cryptocurrencies a mass market.

Woolit is such a great success that it first drives most other wallet systems off the market, then gradually many coin exchanges. The Woolit exchange begins to pull away from its competitors, offering features and conditions that the others cannot keep up with. Woolit starts taking fees from other exchanges when they want to transfer money to Woolit customer IDs. Retail stores now all have a Woolit logo on their checkout systems, indicating that customers can conveniently pay with the Woolit app. Soon Woolit makes its customers a special offer: if they transfer money and pay for things only within Woolit, their fees are waived every twelfth month. Most Woolit customers join in.

One day Woolit receives an official request from the American State Department: it is asked to freeze and block all accounts of the Swedish carpet trading company Carpet.io, because the company is guilty of doing business in Iran contrary to the sanctions. Of course Woolit complies, since it is based in the US. Woolit can’t delete or freeze accounts on any blockchain, but it can block access to them via its Woolit interface. Of course Carpet.io can now use another wallet – there are still a few open source projects on GitHub – but these are slow and usually don’t support all the features or all the coins that Carpet.io holds. In addition, Carpet.io has lost access to the Woolit exchange and has to go through other exchanges with worse prices and features. Most importantly, it has lost access to most other coin owners, because most of them are Woolit customers and – in order to save the fees – exchange coins exclusively there. That’s faster, safer and more convenient anyway. Carpet.io gives up and files for bankruptcy.

Today Woolit has 50,000 employees, data centers worldwide, and is the third most valuable company in the world. It also has the most mining capacity and could easily launch a 51% attack on the majority of the cryptocurrencies it hosts. But that would only crash the value of these currencies – and why would it hurt itself with such nonsense? Woolit’s customers understand this too, and therefore trust the company. As do most governments, with which Woolit has had a trustful relationship ever since it, together with some authorities, dried up most of the organized crime: money laundering has become difficult since Woolit came to dominate the crypto market. Who needs decentralization anyway?

  1. Philip Elmer-Dewitt: First Nation in Cyberspace, http://kirste.userpage.fu-berlin.de/outerspace/internet-article.html
  2. Paul Baran: On Distributed Communications, https://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM3420.pdf
  3. Wikipedia: https://en.wikipedia.org/wiki/National_Science_Foundation_Network
  4. Albert László Barabási, Reka Albert, Hawoong Jeong: Error and attack tolerance of complex networks, https://www.researchgate.net/publication/1821778_Error_and_attack_tolerance_of_complex_networks
  5. Nickey Woolf: DoS attack that disrupted internet was largest of its kind in history, experts say, https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-mirai-botnet
  6. Wikipedia: https://en.wikipedia.org/wiki/Metcalfe%27s_law
  7. This obviously is changing constantly. https://digiconomist.net/bitcoin-energy-consumption
  8. Osato Avan-Nomayo: 51 Percent Attack: Hackers Steal $18 Million in Bitcoin Gold (BTG) Tokens, https://bitcoinist.com/51-percent-attack-hackers-steals-18-million-bitcoin-gold-btg-tokens/
  9. Kyle Torpey: The Failure of SegWit2x Shows Bitcoin is Digital Gold, Not Just a Better PayPal, https://www.forbes.com/sites/ktorpey/2017/11/09/failure-segwit2x-shows-bitcoin-digital-gold-not-paypal/
  10. Michael Seemann: Blockchain for Dummies, http://www.ctrl-verlust.net/blockchain-for-dummies/
  11. VisiCalc by Dan Bricklin and Bob Frankston, http://history-computer.com/ModernComputer/Software/Visicalc.html
  12. Wikipedia: https://en.wikipedia.org/wiki/Lightning_Network


Cambridge Analytica, the Kontrollverlust and the Post-Privacy Approach to Data-Regulation

There has been a heated debate about Facebook and privacy since the revelations about Cambridge Analytica surfaced. The reaction is a cry for more privacy regulation. The European approach of the General Data Protection Regulation (GDPR), which will come into effect in late May this year, is seen by many as a role model for a much-needed privacy regulation in the US.

But they are wrong. I feel that there are a lot of misconceptions about the effectiveness of data protection in general. This is not surprising since there are few similar rules in the US and so the debate is based more on projections than on actual experiences.

I want to add the perspective of someone who has lived long enough under a strict privacy regime in Germany to know the pitfalls of this approach. From this angle I want to reflect on the Cambridge Analytica case and ask how effective EU-style privacy regulation would have been at preventing it. Jürgen Geuter has already published a very readable and detailed critique of the GDPR, but my angle will be more conceptual and theory-driven.

I will apply the theory of ‘Kontrollverlust’ to this case to come to a deeper understanding of the underlying problems of data control. You can read a much more detailed examination of the theory in my book ‘Digital Tailspin – Ten Rules for the Internet after Snowden’ from 2014.

In short: the notion of Kontrollverlust is basically the idea that we have already lost control over our data, and that every strategy should acknowledge this in the first place. There are three distinct drivers that fuel this loss of control, and they are all closely entangled with the advancement of digital technology.

The first driver of Kontrollverlust reads:

„Every last corner of the world is being equipped with sensors. Surveillance cameras, mobile phones, sensors in vehicles, smart meters, and the upcoming ‘Internet of Things’ – tiny computers are sitting in all these objects, datafying the world around us. We can no longer control which information is recorded about us and where.“

This certainly holds true, and you can watch an instance of this ever-unraveling enlightenment in the outrage about the related issue of how the Facebook Android app has been gathering all your cellphone data. But it is the remaining two drivers of Kontrollverlust that are at the heart of the Facebook scandal.

1. The „Data Breach“

The second driver of Kontrollverlust is:

„A computer will make copies of all the data it operates with, and so the internet is basically a huge assemblage of copying machines. In the digital world, practically everything we come into contact with is a copy. This huge copying apparatus is growing more powerful every year, and will keep on replicating more and more data everywhere. We can no longer control where our data travels.“

Regardless of whether you want to call the events around Cambridge Analytica a „data breach“ or not, we can agree on the fact that data has fallen into the wrong hands. Dr Alexandr Kogan, the scientist who first gathered the data with his Facebook app, illegally sold it to Cambridge Analytica. While this was certainly a breach of his agreement with Facebook, I’m not entirely sure whether it was also a breach of the law at that time. I’ve come to understand that the British data protection agency is already investigating the case, so I guess we will find out at some point.

However, what becomes obvious is that – regardless of which kind of privacy regulation had been in effect – it wouldn’t have prevented this from happening. The criminal intent with which all parties were acting suggests that they would have done it one way or the other.

Furthermore, Christopher Wylie – the main whistleblower in this case – revealed that an ever-growing circle of people also got their hands on this data, including himself and even black-market sites on the internet.

The second driver of Kontrollverlust suggests that we already live in a world where copying even huge amounts of data has become so convenient and easy that it is almost impossible to control the flow of information. Regardless of the privacy regulation in place, we should consider our data to be out there, available to anybody with an interest in knowing it.

Sure, you may trust big corporations to try to prevent this from happening, since their reputation is on the line, and with the GDPR there may also be huge fines to be paid. But even if they try very hard, there will always be a hack, a leak, or simply the need for third parties to access the data – and thus the necessity to trust them as well. „Breach“ won’t be an event anymore, but the default setting of the internet.

This certainly doesn’t mean that corporations should cease trying to protect your data because it’s hopeless anyway, and it is also not an argument against holding these companies accountable by means of the GDPR – please do! Let’s prevent every data breach we can. But nevertheless, you shouldn’t consider your data safe, regardless of what the law or the corporations tell you.

2. The Profiling

But much more essential in this case is what I call the third driver of Kontrollverlust:

„Some say that these huge quantities of data spinning around have become far too vast for anyone to evaluate any more. That is not true. Thanks to Big Data and machine learning algorithms, even the most mundane data can be turned into useful information. In this way, conclusions can be drawn from data that we never would have guessed it contained. We can no longer anticipate how our data is interpreted.“

There is also a debate about how realistic the allegations concerning the methods of Cambridge Analytica are and how effective this kind of approach can be (I consider myself on the rather sceptical side of this debate). But for this article and for the sake of argument, let’s assume that CA has indeed been able to deliver its magical big-data psycho-weapon and that it was pivotal in both the Brexit referendum and the Trump election.

Summing it up, the method works as follows: by letting people do psychological tests via Mechanical Turk and also gaining access to their Facebook profiles, researchers are able to correlate their Facebook likes with the psychological traits from the test. CA was allegedly using the OCEAN model (the Big Five personality traits). The results would presumably read something like this: if you like x, y and z, you are 75% likely to be open to new experiences and 67% likely to be agreeable.
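The statistical core of this step can be sketched on synthetic data (entirely made-up records for illustration; this shows the correlation technique, not CA’s actual model): pair an anonymous binary „like“ signal with an openness score and measure how predictive the like is.

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic, fully anonymized records: no names, no IDs -- just the
# signal (liked page X or not) and the trait score from the test.
random.seed(0)
likes, openness = [], []
for _ in range(5000):
    trait = random.gauss(0.0, 1.0)                 # openness score
    p_like = 1 / (1 + math.exp(-2.0 * trait))      # open people like X more often
    likes.append(1 if random.random() < p_like else 0)
    openness.append(trait)

r = pearson(likes, openness)
# r comes out clearly positive: the like alone says something about
# the trait, and no identifying information was needed to learn that
```

Note what this implies: the correlation survives complete anonymization, which is exactly the point made below about the limits of identity-based data protection.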

In the next step you produce advertising content that is psychologically optimized for some or all of the different traits in the model. For instance, they could have created a specific ad for people who are open but not neurotic, another for people who also score high on the extraversion scale, and so on.

In the last step you isolate the likes that correlate with the psychological traits and use them to steer your ad campaign. Facebook lets you target people by their likes, so you can use its infrastructure to match your psychologically optimized advertising content to the people who are probably most prone to it.

(Again: I’m deeply sceptical about the feasibility of such an approach, and I even doubt that it came into play at all. For some compelling arguments against this possibility read this and this and this article. But I will continue to assume its effectiveness for the duration of this article.)

You think the GDPR would prevent such profiling from happening? Think again. Since Cambridge Analytica only needs the correlation between likes and traits, it could have completely anonymized the data and been fine with the GDPR. They could afford to lose every bit of identifiable information in the data and still extract the correlation at hand, without any loss of quality. Identity doesn’t matter for these procedures, and this is the Achilles‘ heel of the whole data protection approach: it only applies where the individual is concerned. (We’ll discuss this in detail in a minute.) And since you already agreed to the Facebook TOS, which allows Facebook to use your data to target ads at you, the GDPR – relying heavily on „informed consent“ – wouldn’t prohibit targeting you based on this knowledge.

So let’s imagine a data protection law that addresses the dangers of such psychological profiling.

First we need to ask ourselves what we learned from the case with respect to data regulation. We learned that likes are a dangerous thing, because they can reveal our psychological make-up and, by doing that, also our vulnerabilities.

So, an effective privacy regulation should keep Facebook and other entities from gathering data about the things we like, right?

Wrong. Although different kinds of data certainly differ in how strongly they correlate with certain statements about a person, we need to acknowledge that likes are nothing special at all. They are more or less arbitrary signals about a person, and there are thousands of other signals you could match against OCEAN or similar profiling models: login times, the number of tweets per day, browser and screen size, the way someone reacts to other people – or all of the above. You could even take a body of text written by a person and match the word usage against any model, and chances are you would get usable results.

The third driver of the Kontrollverlust basically says that you cannot consider any information about you innocent, because a new statistical model, a new data source to correlate your data with, or a new kind of algorithmic approach can always appear and turn seemingly harmless data into a revelation machine. This is what Cambridge Analytica allegedly did, and it will continue to happen in the future, since all these data analysis methods will continue to evolve.

This means that there is no such thing as harmless information. Thus, every privacy regulation that takes this danger seriously would have to prevent every seemingly arbitrary bit of information about you from being accessible to anyone. Public information – including public speech – would have to be considered dangerous. And indeed the GDPR is trying to do just that. This has the potential to turn into a threat to the public sphere, to democracy and to the freedom of the individual.

Privacy and Freedom in the Age of Kontrollverlust

When you look back at the origins of (German) data protection laws, you will find that the people involved were concerned about the freedom of the individual being threatened by the government. Since state authority holds the monopoly on force – e.g. through police and prisons – it is understandable that there should be limits on its gathering knowledge about citizens and non-citizens. "Informational self-determination" was recognized by the German Federal Constitutional Court as a basic civil right back in 1983. The judges wanted to enable the individual to secure a sphere of personal privacy from the government's gaze. Data protection was really a protection of the individual against the government, and as such it has proven to be somewhat effective.

The irony is that data protection was supposed to increase individual freedom. But a society in which every bit of information is considered harmful wouldn't be free at all. This is also true on the individual level: living in constant fear of how your personal data may fall into someone's hands is the opposite of freedom.

I do know people – especially within the data protectionist scene – who promote this view and even live that lifestyle. They spend their time hiding from the public and using the internet in an antiseptic manner. They avoid most services, using only some fringe and encrypted ones, they never post anything private anywhere, and they constantly go after people who might reveal anything about them on the internet. They are not dissidents, but they choose to live like dissidents. They would happily sacrifice every inch of the public sphere to reach the point of total privacy.

But the truth is: with or without the GDPR, those of us who won't devote their lives to that kind of self-restrictive lifestyle have already lost control of their data, and even the ones who do will make mistakes at some point and reveal themselves. It is a very fragile strategy.

The attempt to regain control won't increase our liberties – it is already doing the opposite. This is one of the central insights that brought me to argue against the idea of privacy for privacy's sake, which is still the basis of every data protection law, including the GDPR.

The other insight is the conclusion that privacy regulation doesn't solve many of the problems we currently deal with, but makes it much harder to tackle them properly. This needs a separate explanation.

The Dividualistic Approach to Social Control

I'm not saying that we do not need regulation. I do think there are harmful ways to use profiling and targeting practices to manipulate significant chunks of the population, and we need regulation to address them. But data protection is not a sufficient remedy for the problem at hand, because it was conceived for a completely different purpose – remember: the nation state with its monopoly on force.

In 1990, Gilles Deleuze made the point that next to the disciplinary regimes we have known since the seventeenth century – the state and its institutions – a new approach to social control had been emerging, which he called the "societies of control". I won't go into the details here, but you can pretty much apply the concept to Facebook and other advertising infrastructures. The main difference between disciplinary regimes like, say, the nation state and regimes of control like, say, Facebook is the role of the individual.

The state always refers to the individual, mostly as a citizen who has to play by the rules. As soon as the citizen oversteps them, the state uses force to discipline him back into being a good citizen. This concept extends down through all the state's institutions: the school disciplines the student, the barracks the soldier, the prison the prisoner. The relation is always institution versus individual, and it is always a disciplinary relation.

The first difference is that Facebook doesn't have a monopoly on force. It doesn't even use force. It doesn't need to.

Because, second, it doesn't want to discipline anyone. (Although you could argue that enforcing community standards requires some form of disciplinary regime, that is not Facebook's primary objective.) The main objective Facebook is really striving for has …

… third, nothing to do with the individual at all. What it cares about is statistics. The goal is to drive the conversion rate of an advertising campaign from, say, 1.2% to 1.3%.

Getting this difference wrong is one of the major misconceptions of our time. We are used to thinking of ourselves as individuals. But that is increasingly not the way the world looks back at us. Instead of the individual (the un-dividable) it sees the dividual (the dividable): our economic, socio-demographic and biological characteristics, our interests, our behaviors and yes, at some point probably our OCEAN rating are what counts for these institutions of control. We may think of these characteristics as part of our individual selves, but they are anything but unique. And Facebook cares about them precisely because they are not unique, so it can put us into a target group and address that group instead of us personally.

For instance: Facebook doesn't care whether an ad really matches your interests or your taste as a person. It is not you Facebook cares about, but people like you. It's not you, but people like you being 0.1% more likely to click on the ad that makes all the difference – and thus all the millions – for Facebook.

People who are afraid of targeted advertising because they think of it as exceptionally manipulative, as well as people who laugh off targeted ads as a poor approach because the ad they saw the other day didn't match their interests – both get this new mode of social control wrong. They get it wrong because they can't help thinking individually instead of dividually.

And this is why the data protection approach of giving you individual rights doesn't provide the means to regulate a dividualistic regime of social control. It's simply a mismatch of tools.

Post-Privacy Policy Proposal

Although the argument provided here may seem quite complicated, the solution doesn't need to be. In terms of policy, I propose a much more straightforward approach to regulation: we need to identify the dangers and harmful practices of targeted advertising, and we need to find rules that address them specifically.

  1. For starters, we need more transparency for political advertising. We need to know which political ads are out there, who is paying for them, how much money has been paid, and how these ads are being targeted. This information has to be accessible to everyone.
  2. Another angle would be to regulate targeting based on psychological traits. Psychologically targeted ads aren't necessarily harmful, but it is not difficult to imagine harmful applications, like finding psychologically vulnerable people and exploiting their vulnerabilities to sell them things they neither need nor can afford. There are already examples of this. It won't be easy to prohibit such practices, but in the long run it will be a more effective approach than trying to hide these vulnerabilities from potential perpetrators.
  3. There is also a need to break the power of monopolistic data regimes like Facebook, Google and Amazon. But contrary to public opinion, their power is not a function of their ability to gather and process data, but of their unique position to do so. It is the fact that they monopolized the data and can exclude everybody else from using it that makes them invincible. Ironically, it was one of Mark Zuckerberg's few attempts to open up his data silo – giving developers access through the API – that caused the Cambridge Analytica trouble in the first place. Not just ironically but also unfortunately, because a crackdown on open APIs is already under way, and that is a bad thing. Open APIs are exactly what we need the data monopolists to implement. We need them to open up their silos to more and more people – scientists, developers, third-party service providers, etc. – in order to tackle their power by ending the exclusivity of their data usage.
  4. On a broader level, we need to set up society so that personal data being out there causes less harm. I know this is far-reaching, but here are some examples: instead of hiding genetic traits from your health insurance provider, we need a health care system that doesn't punish you for having them. Instead of trying to hide some of your characteristics from your employer, we need to make sure everybody has a basic income and isn't existentially threatened when information about them is revealed. We need many more policies like this to cushion society against the revealing nature of our digital media.

Conclusion

"Privacy" in the sense of "informational self-determination" is not only a lost cause to begin with; it doesn't even help regulate dividualistic forces like Facebook. Every effective policy should take the Kontrollverlust into account – that is, assume the data is already out there and being used in ways beyond our imagination. Instead of trying to capture and lock up that data, we need ways to lessen the harm it could possibly cause.

Or as Deleuze puts it in his text about societies of control: “There is no need to fear or hope, but only to look for new weapons.”


Filed under Algorithmenkritik Das Neue Spiel english Kontrollverlust Plattformpolitik Postprivacy Queryology
2 Comments

Proposal: Open Source as Platform Politics

"Why do you always have to be so negative!" I am sometimes accused. "You are always against every possible policy measure, everyone is always wrong – so what, then, would the right policy be?"

I admit that I have settled into a certain grumbling position, especially on (net) policy issues – at least the grumbling unpleasantly predominates.

I have often noted, for example, that platforms wield enormous power, but as soon as politicians propose measures to regulate platforms, I find those wrong too. So which is it?

Yes, I have shown in various texts (and in my book) that state regulation of platforms very often produces effects opposite to those intended. More precisely: it forces onto platforms political power and legitimacy that they do not even want, and in return makes states dependent on the regulatory power of the platforms.

But when I appeared as an expert before the Bundestag the year before last (written statement as PDF, video of the hearing), I also made a concrete proposal for how states could effectively contain the power of platforms – one that, in my view, received too little attention at the time. So I want to lay it out here in more detail:

The state must join forces with the open source movement in order to become a platform provider itself.

Now, I am not one of the typical open source apologists who struggle with Linux out of conviction and, if need be, do without devices because there are no open source drivers for them. On the contrary: I am a largely satisfied inhabitant of the Apple ecosystem and consider most open source software an imposition.

On the other hand, I also believe that open source, open standards and decentralized/distributed service approaches are the only things that can – if not threaten, then at least keep in check – the power of commercial platforms. Or could.

Unfortunately, so far they do so only to a very limited extent. This is partly because these approaches are all still too poor, too slow to innovate, too fiddly and so on. That is a problem that can be solved with money and manpower. The other problem is a chicken-and-egg problem: the systems are not attractive because too few people use them. The tiresome issue of network effects.

And here is the good news: the state is exactly the right actor to solve both problems.

Let's start with the first problem. Sure, if a company has the choice of putting money into a proprietary technology from which only it benefits, it will prefer that to investing in open source, where its competitors may well benefit too. Investment here is – at least in part – a zero-sum game, which explains the comparatively meager funding of open source.

But states are different. They could put money into open source without caring whether other states, companies or even private individuals profit from it. Depending on your political leanings, you could even see that as something positive. (I do, for example.)

Add to that the standard arguments: by investing in open source, states could gain greater control over their systems. They could adapt the code to their needs, audit the source code for security vulnerabilities, build up their own competence in maintenance and further development, and thus reduce their direct dependence on platform providers (such as Microsoft).

But things get really interesting when the state addresses the second problem of open approaches: the lack of network effects. What is often forgotten is that the state is also a huge consumer of software, and its use or non-use of systems carries enormous weight.

Concretely: the more public agencies install Linux, LibreOffice and the like, the more compatibility is established across offices and authorities. With as few as two cities running compatible systems, it would pay off to develop their own software solutions. Partner projects would spring up everywhere, because they could share the development costs while reaping the full benefit. Bit by bit, more and more would jump on the bandwagon, because new software would make the platform ever more attractive. We are dealing with the setting in motion of positive feedback loops that keep reinforcing and accelerating themselves.

And now imagine the entire German state, down to the last state and regional authority, betting on open source – tens of millions of installations. That would have a global impact on the open source world as such and beyond. Companies would increasingly switch to open source, because that would be the only way to win lucrative government contracts. Millions of public employees might start using open source systems privately as well, simply because they know them well. More and more people would participate, fix bugs, develop the software further, fork it, etc. The software would become ever more diverse, more usable and more secure.

Suddenly it would make economic sense to publish a federal Linux distribution, with specially developed software for public agencies, standardized and guaranteed to be compatible across the whole country. It would be economical to train large numbers of specialists: system administrators, developers, security experts. Projects would build on projects; dedicated infrastructures would be created – clouds, hardware, services, etc. A living ecosystem would emerge, linking possibility to possibility.

But why stop at Germany? Once these software packages are out in the world, European partners would quickly hit on the idea of deploying them too. They would not even have to bear the initial costs, which lowers the threshold for their entry even further. The network effects would take off internationally, and Germany would in turn profit from that. The more countries join in, the better and more diverse the software becomes, the larger the pool of experts, the more powerful the ecosystem.

One could also make a coordinated effort at the EU level to migrate the entire EU to the new open source strategy. Imagine how the network effects would kick in then. At that point, at the latest, projects would become possible that could actually endanger Facebook, Google and Amazon.

Linux and co. would very soon no longer be the nerd software we know today, but the highly usable, especially secure gold standard that every child can handle.

Conclusion

States and open source are something like natural partners: for states, investment in software is not a zero-sum game, so they can pocket the network effects of free software far more carelessly than companies can. The open source principle, in turn, ensures that states cannot smuggle tricks for covert surveillance or censorship into the systems. In the end, the state gains general interoperability, better software, the power of network effects and something like "cyber-sovereignty", because it can build up its own resources to defend its infrastructure.

This will not destroy the big commercial platforms, but it will set up a counterweight to them, and the general dependence of states on them will be enormously reduced.

Last but not least, states would gain new ways to intervene in the platform world in a steering capacity: by pushing or blocking standards through their own network effects.

The state would thus become a relevant player in the game of platforms by becoming a platform operator itself. It could not dictate the rules of the game, but it could significantly help shape them. That is far more than it is currently capable of.

I am convinced: the future of the state lies in open source.


Filed under Das Neue Spiel Plattformpolitik
13 Comments

Blockchain For Dummies

[German Version]

The 'blockchain' is currently being praised as a new miracle technology. The word appears six times in the German coalition agreement for the new government – and always in the context of new and promising digital technologies.

But what is behind all this?

Blockchain technology was born with its first popular application: Bitcoin. Bitcoin is based on the fact that all transactions made with the digital currency are recorded in a kind of ledger. However, this ledger is not kept in a central registry but on the computers of all Bitcoin users. Everyone has an identical copy, and whenever a transaction happens, it is recorded more or less simultaneously in all these copies. Only when most of the ledgers have written down the transaction is it considered completed. Each transaction is cryptographically linked to the preceding transactions, so that its validity is verifiable by all. If someone inserts a fake transaction in between, the calculations no longer add up and the system raises an alarm. What we get is a storage technology that no individual can control or manipulate.
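The cryptographic linking just described can be illustrated in a few lines of Python. This is a deliberately minimal sketch – it leaves out everything that makes a real blockchain work at scale (the peer-to-peer copies, consensus, proof of work) and only shows why a faked transaction "raises an alarm":

```python
# Minimal sketch of hash-linking: each block commits to the hash of the
# previous block, so tampering anywhere breaks every later link.
# (A toy illustration, not a real blockchain.)
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's canonical JSON form."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, transaction):
    """Append a new block that references the hash of the current last block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "tx": transaction})

def valid(chain):
    """Every block must reference the hash of the block before it."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append(chain, "Alice pays Bob 5")
append(chain, "Bob pays Carol 2")
append(chain, "Carol pays Dave 1")
print(valid(chain))                    # True: untampered ledger

chain[1]["tx"] = "Bob pays Carol 200"  # fake an old transaction...
print(valid(chain))                    # False: the later link no longer checks out
```

Because every participant holds a copy and can rerun this check, a forged entry in one copy is immediately detectable against all the others.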

Early on, even Bitcoin skeptics admitted that besides the digital currency itself, it is the blockchain technology behind it that holds the real potential for the future. Since then, many people have been wondering where else it could be applied.

The Internet has always been regarded as a particularly decentralized technology, but this no longer holds true for most services today: all of us use one search engine (Google), one social network (Facebook) and one messenger (WhatsApp). And all these services are based on centralized data storage and processing. Blockchain technology seems to offer a way out: all services that previously operated via central databases could now be organized with a distributed ledger.

The ideas go as far as depicting complex business processes within blockchains – for example, automated payouts to an account when a stock reaches a certain value. That's called a "smart contract".
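The stock-payout example can be written down as ordinary code, which is essentially what a "smart contract" is: a rule that executes automatically once its condition is met. The function below is an invented illustration of the logic only – real smart contracts run on a blockchain (e.g. as Ethereum contracts), not on a single machine:

```python
# Toy illustration of the "smart contract" idea: an automated payout
# that fires once a stock price crosses a threshold. All names and
# numbers are invented for this sketch.

def make_stock_trigger(threshold, payout, account):
    """Return a contract: pay `payout` into `account` once price >= threshold."""
    state = {"paid": False}

    def on_price(price, balances):
        if not state["paid"] and price >= threshold:
            balances[account] = balances.get(account, 0) + payout
            state["paid"] = True   # this toy contract executes at most once
        return balances

    return on_price

contract = make_stock_trigger(threshold=100.0, payout=50, account="alice")
balances = {}
for price in (92.0, 97.5, 101.3, 105.0):   # simulated price feed
    balances = contract(price, balances)
print(balances)   # {'alice': 50} -- paid once, at the first crossing
```

The blockchain twist is merely *where* such a rule runs: instead of one party's server, it is executed and verified by every node in the network, so no single party can refuse or alter the payout.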

The hype about the blockchain is almost as old as the hype around Bitcoin, so we have been talking about the inevitability of this technology for some six or seven years now. Hundreds, if not thousands, of start-ups have been founded since then, and just as many applications of the blockchain have been proclaimed.

As an interested observer, one has to ask: why is there still no popular application other than cryptocurrencies (which themselves are more or less speculative bubbles without any real-world application)? Why hasn't a blockchain-based search engine threatened Google, or a blockchain-based social network Facebook? Why do we see no blockchain-based ride-hailing app and no accommodation service, although these use cases have been touted so often? Why do all blockchain technologies remain in the project phase, with none of them finding a market?

The answer is that the blockchain is more an ideology than a technology. Ideology means that it is backed by an idea of how society works and how it should work.

One of the oldest problems in the social sciences is the question of how to establish trust between strangers. A society can only exist if this problem is adequately resolved. Our modern society's approach is to establish institutions as trusted third parties that secure interactions. Think of banks, the legal system, parties, the media, etc. All these institutions bundle trust and thus secure social interactions between strangers.

However, these institutions gain a certain social power through their central role. This power has always been a headache for a certain school of thought: the libertarians, or anarcho-capitalists. They believe that there should be no state interfering in people's affairs; the market – as the sum of all individuals trading with each other – should regulate everything on its own. Accordingly, they are also very critical of institutions such as central banks that issue currencies. The basic idea behind Bitcoin is to eliminate the central banks – and indeed banks in general – from the equation.

The blockchain is the libertarian utopia of a society without institutions. Instead of trusting institutions, we are to have confidence in cryptography. Instead of our bank, we are to trust the unbreakability of algorithms. Instead of trusting Uber or any other taxi app, we are to trust a protocol to find us a driver.

That's why Bitcoin and blockchain technology are so popular with the American right, which has a long libertarian tradition and rejects the state as such. And that's why they are also popular with the German right – for example with Alice Weidel of the AfD, who will now deliver the keynote at a big German Bitcoin conference and is founding her own blockchain start-up. Those who rail against the "lying media" and the "old parties" are critical of all other institutions as well, it seems.

So when you invest in the blockchain, you are betting against trust in institutions. And that is also the reason why this bet hasn't paid off a single time yet: the ideology of anarcho-capitalism is naive.

Technically speaking, a blockchain can do the same things any database has long been able to do – with a tendency toward less. The only distinguishing feature of the blockchain is that no one ever has to trust a central party. But this also generates costs. It takes millions of copies of the database instead of one. Instead of writing down a transaction once, it has to be written down millions of times. All this costs time, computing power and resources.

So if we do not share the libertarian basic assumption that people should mistrust institutions, the blockchain is just the most inefficient database in the world.


Filed under Algorithmenkritik english extern Plattformpolitik Weltkontrollverlust
3 Comments

What Is Platform Politics? Foundations of a New Form of Political Power

/**** First published in the ‘Zeitschrift für sozialistische Politik und Wirtschaft’ (SPW), pp. 44–49, December 2017. ****/

[PDF]
[German Original]

In early 2017, shortly after the inauguration of Donald Trump, the rumor began to spread that Facebook founder Mark Zuckerberg himself was planning to enter the presidential race in 2020.
Following Trump’s victory, everything may seem possible, but this speculation was based solely on the so-called “Listening Tour”, Zuckerberg’s trip through the US, where he wanted to meet Facebook users in person.1

This rumor is only a symptom of the general lack of understanding of our times. For Mark Zuckerberg has long been a politician. He has an enormous impact on the daily lives of two billion people. He makes decisions that affect how these people get together, how they interact, even how they see the world. So Zuckerberg is already perhaps the most powerful politician in the world. Any job in traditional politics, including the office of US president, would be a step down.

In this text, I will try to determine and analyze the ways in which platforms act politically, examining how they organize their power base and in which fields their policies are already changing the world. But first we should eliminate three fundamental misconceptions about platform politics.

Three Misconceptions About Platform Politics

1. Platforms are not (only) the objects of politics, but also powerful political subjects

When we talk about “platform politics” or platform regulation, we tend to think of platforms as the subjects of regulation and policymaking. That isn’t wrong as such, but it conceals the far more important point that today, the platforms themselves have become influential regulators and political actors.

Platforms provide the infrastructure for our digital coexistence – with an emphasis on “structure”. For this structure is neither arbitrary nor neutral: Defining the structure of communication is a political act in and of itself, one that enables certain interactions and reduces the likelihood of other kinds of communication. This is a profound intervention into our social lives, and therefore in itself political.

So it makes sense to think about platforms not merely as companies that provide Internet services, but as political entities or even institutions.2 Their impact on the political debate, on our society and coexistence, and therefore on all kinds of political decisions, is nothing short of the influence of traditional media. Platforms can be regarded as the Fifth Estate. But unlike the other four estates, platforms are not limited by the boundaries of the nation state; they act and think globally by design. And in contrast to other institutions, they don’t try to overemphasize their socio-political significance; after all, political responsibility is bad for business. Platforms rather tend to downplay their political power and refuse to take responsibility. They are political actors in spite of themselves.

2. Platforms exercise a different form of power

One reason why platforms are still not taken seriously as political actors is the general lack of understanding of their power dynamics.

When politicians come up against platforms, they like to throw the weight of their political legitimation around. They talk about the “primacy of politics”, as if to convince themselves and others of their agency. This primacy is derived from the fact that the politician came into office by way of a sovereign, collective decision. But platforms, too, generate a kind of legitimation through collective decision-making, even though this works slightly differently.

In his book Network Power, David Singh Grewal argues that the adoption of standards can be understood as a collective decision.3 Standards are nothing but the conditions of possible interactions, which is why every decision for or against a standard is, given its social relevance, inherently political. The mere fact that these decisions are not all taken simultaneously, as they would be in an election, but in staggered intervals (“aggregated”), does not diminish their social impact.

The power of these aggregated, collective decisions is nothing new. It relates to the languages we speak, the manners we cultivate or accept, and of course, to the choice of network service we choose to use. In the end, we join Facebook not because of its great product quality, but because all our friends are on Facebook.

In economics, this phenomenon is called the “network effect”, but Grewal is quite right to view it as a political power factor in its own right. Once a certain standard is widely established, the pressure on the individual becomes so great that there is little choice but to adopt that standard as well – the alternative often being social ostracism.

We accept the “network power” that these standards wield because, ultimately, they can never be enforced by individuals. At least that applies to open standards. No one can prevent me from learning Russian, or from making a server available on the Internet via open protocols like TCP/IP. Social pressure always comes from the community as a whole, so it can never be instrumentalized by individuals.

Network power, however, becomes “platform power” when the standards adopted contain key mechanisms of exclusion. Facebook could withhold access to my friends at any time, or place temporal or local restrictions on it. Access control over the standard is, on the one hand, at the heart of the platform’s business model; on the other hand, it is the basis of its political power.

To sum up: Platform Power = Network Power + Access Control.

3. Regulation of platforms increases their power

Of course, even conventional politics has come to realize that platforms wield this uncanny power, but since politicians don’t understand that power, they are simply making matters worse. They are under the misconception that dealing with Google, Facebook, Apple and Co. is much the same as dealing with the corporate power structures they might have encountered at Siemens or Deutsche Bank. And so they resort to the playbook of political regulation to counter these powers.

But platform providers are not just large enterprises; their power is based on more than just money and global expansion. Rather, the platform is facing down the nation state itself, as a systemic competitor – even if neither side is prepared to admit it yet.

This is why all efforts in conventional politics to regulate platforms must lead to a paradox. Even while politicians are shaking their fists at Google and Facebook, they are granting these platforms more sovereignty by the minute. Any new constraints devised by policymakers just serve to strengthen the political power and legitimacy of the platform. One example is the European Court of Justice ruling on the so-called “right to be forgotten”, which forces Google to redact search results following a very vague list of criteria.4 Another example is the notorious Network Enforcement Act, recently introduced by the German Federal Minister of Justice Heiko Maas, which obliges Facebook and other platforms to delete what is deemed “obvious unlawful content”.5 In both cases, the state has relinquished its powers of jurisdiction and law enforcement to the platform in question. At first sight, this makes perfect sense, because platforms are the logical point of contact for regulating the digital world, thanks to their platform power and deep, data-driven insights. At the same time, this is fatal, because the state further increases the power of the platforms in this way, making itself dependent on its very competitors.

The Three Spheres of Platform Politics

The political influence of platforms takes many forms. Without claiming to be exhaustive, I would like to examine three spheres more closely in which platforms are already very influential today and will gain even more influence in future: domestic net policy, foreign net policy, and security net policy.

Domestic Net Policy

The term “net politics” (Netzpolitik) has become widespread, in the German-speaking net in particular, since it originated here with the popular political blog of the same name.6 The Netzpolitik site addresses topics like data protection, net neutrality, censorship and many other Internet-related areas of politics. It is important to note that here, the net is always regarded as the object of net politics.

For now, the term “domestic net policy” is merely intended to highlight the concession that these internal/external or object/subject relationships no longer exist – political issues pertaining to the net increasingly arise from within the net. Which implies that these issues can only be solved from within. This is not only pertinent to those problems with hate speech, trolling and fake news we are currently discussing, but also to older issues such as identity theft or doxxing (publishing personal information with malicious intent).

Since these problems mostly arise on platforms, it is logical to expect the corresponding countermeasures to come from the platforms themselves. While this does indeed happen occasionally, overall these interventions are still seen as insufficient. In fact, platforms display a lot of reluctance towards regulation in general. They are hesitant to make use of the political power they already wield, for instance by establishing and enforcing stricter community rules.7
Still, the awareness of the problem seems to have sharpened. Facebook’s new mission statement in February 2017 already indicated as much8, and Twitter9 and Google10 have been giving similar indications. After the Nazi march that escalated in Charlottesville, many platform providers were pushed to action and subsequently banned right-wing accounts and websites from their services. Twitter and LinkedIn suspended a range of “white supremacist” accounts, while Facebook, Google and GoDaddy, a popular domain registrar, blocked domains and groups that were spreading hate. Most notably, the Nazi website Daily Stormer was blocked, and even kicked out of the content delivery network Cloudflare.11 It is still not clear, however, whether these measures are really suited to address the aforementioned problems. The results so far give little cause for hope.12

Foreign Net Policy

While Facebook’s handling of hate speech and fake news can be assigned to the domestic net policy sphere, Heiko Maas’ aforementioned Network Enforcement Act would be a matter of foreign net policy. “Foreign net policy” mostly (but not exclusively) references the way in which platforms and states encounter each other and negotiate their mutual interests. Of course, the standard case is a state attempting to regulate a platform, as we have seen above. The EU, for example, has several lawsuits pending against Facebook and Google, and the conflicts between the US government and platform providers are becoming increasingly apparent as well.13

Relations between platforms and states have not always been this bad. Notably, the US State Department under Hillary Clinton made use of various platforms for foreign policy purposes. In her influential 2010 speech on Internet freedom, Clinton described the platform providers as important partners in spreading democracy and human rights around the world.14

Jared Cohen played a particularly pivotal role here.15 Cohen had joined the State Department under Condoleezza Rice, but rose to prominence under Clinton. When a revolution threatened to break out in Iran in 2009, Cohen called Twitter and convinced them to postpone their scheduled maintenance downtime.16 Twitter played an important part in the coordination of the upheaval.

When the Arab Spring finally broke out in early 2011, Cohen was already working at Google, where he helped coordinate various inter-platform projects. Facebook, Twitter and Google in their own ways all tried to support the uprisings in the Arab World, and even cooperated with one another to do so. One example is the case of the service speak2tweet: Google provided a telephone number which people from Egypt could call to record a message. These messages were then published on Twitter, thus bypassing the Egyptian Internet shutdown.17

Since the Snowden revelations of 2013 at the latest, relations between Silicon Valley and Washington have cooled down significantly. Platforms have since been trying to protect and distance themselves from state interference. This is mostly achieved through the increasing use of encrypted connections, and through elevated technical and legal security.18 In the US, this development in general, and the move towards more cryptographically secure systems in particular, is viewed with a mounting sense of discomfort.

The conflict then escalated in spring 2016, over the iPhone that FBI investigators had found on the perpetrator of the San Bernardino attack. The phone was locked and encrypted, and the investigators ordered Apple to assist with the decryption. Apple refused – in order to unlock the phone, Apple would have had to build a vulnerability into its own security software. A dangerous endeavor, from Apple’s perspective, that would have reduced the security of all other Apple devices and, with it, consumer confidence. In the end, the FBI had to work with a third-party security company to unlock the iPhone.19

Beside these varied forms of cooperation and conflict between platforms and states, platform-platform relations should also be taken into account, of course.20 One politically tangible example is the fact that Facebook is increasingly losing users from the extreme right and right-wing spectrum to its competitor, VKontakte.21 VKontakte is the equivalent of Facebook in Russia, albeit with a completely different set of guidelines. For instance, while you might get into trouble for posting homophobic contents on Facebook, you might get into trouble on VKontakte for posting the Rainbow Flag.

A segregation of society along the boundaries of different platforms and their corresponding policies seems to be a plausible scenario, and may well provide a lot more material for foreign net policy in future.

Security Net Policy

For some time now, there has been growing debate on issues like “cyberwar” and “cyber security” in political circles. The expression simply references a new form of war, conducted with digital means. The United States and Israel were the vanguard here, and in 2010 managed to destroy centrifuges at a uranium enrichment facility in Iran using a custom-designed “cyber weapon”, the computer worm Stuxnet.22 The so-called Stuxnet shock marked the beginning of a global arms race in hacking capacity. Cyber-attacks have since become more and more commonplace, be it China’s attacks on Google23, North Korea’s attack on Sony Pictures24, or Russia’s attack on the US elections. The “cyber” terminology is frequently explained by the fact that the military has different areas of operation: ground (army), water (navy) and air (air force) – and now, “cyber” opens up a whole new area of operations, complete with the demand that specific capacities be strengthened accordingly.25

That said, the core misunderstanding here is the assumption that cyber-wars primarily take place between nation states. Even today, that is hardly the case. On the one hand, almost every “cyberattack” is an assault on a platform at the same time. The attack might pertain to the Microsoft operating system (as in the case of Stuxnet and many others), or to specific services (the attack on Google was directed at Gmail mailboxes, as was the Russian hack of John Podesta’s emails). Almost without exception, a software or service provided by a specific platform is involved.

Further, many attacks are directed at platforms as their primary target. Perhaps the most prominent case is the 2015 attack from China on the GitHub developer platform. GitHub is a popular website where software developers can store and synchronize versions of their code and share it with other users. Nearly all popular open source projects can be found there – including one called “The Great Fire”. China’s powerful Internet censorship architecture is usually referred to as the “Great Firewall”, and accordingly, “The Great Fire” is a special toolkit designed to circumvent that firewall. Of course, the Chinese government didn’t find this at all agreeable.
While it is not unheard of for China to simply shut off services it objects to by activating the Great Firewall, GitHub was a notable exception. Blocking local developers’ access to GitHub would have been tantamount to shutting down the Chinese software industry altogether, something not even China could afford. But with a censorship infrastructure that lets millions of requests per second come to nothing, the Chinese came up with another idea: redirecting the censored requests from within China to one single destination on the net instead. This is the core idea behind “The Great Cannon”.26

GitHub was hit by millions and millions of requests from all over China, pushing the website to its utmost limits. In IT security terms, this is called a DDoS attack, or “distributed denial of service”.27
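As a rough illustration of what defending against such floods involves (a minimal sketch under simplifying assumptions, not how GitHub or any particular platform actually implements it), the classic first line of defense is per-client rate limiting with a token bucket:

```python
import time
from collections import defaultdict

class TokenBucketLimiter:
    """Minimal per-client token bucket: each client may burst up to `burst`
    requests and is thereafter limited to `rate` requests per second."""

    def __init__(self, rate=5.0, burst=10.0):
        self.rate = rate
        self.burst = burst
        self.tokens = defaultdict(lambda: float(burst))  # current tokens per client
        self.last = {}                                   # last request time per client

    def allow(self, client, now=None):
        """Return True if the request is admitted, False if it should be dropped."""
        now = time.monotonic() if now is None else now
        elapsed = now - self.last.get(client, now)
        self.last[client] = now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens[client] = min(self.burst, self.tokens[client] + elapsed * self.rate)
        if self.tokens[client] >= 1.0:
            self.tokens[client] -= 1.0
            return True
        return False
```

Against a Great-Cannon-style flood from millions of distinct sources, however, per-client buckets barely help – each source can stay under its individual limit – which is why absorbing such attacks ultimately takes the raw capacity that only a large platform can field.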

Finally, platforms are not only the target of cyber attacks, but more and more frequently the last line of defense for other targets. In 2016, a massive DDoS attack came down on the blog of security researcher Brian Krebs. His analysis revealed that the attack had been carried out mainly by Internet routers and security cameras: malware known as “Mirai” had exploited weak default credentials in such devices to take over millions of them. It was the largest bot army the world had ever seen.

And so Krebs had no choice but to look for cover with Google’s proprietary server infrastructure, designed for precisely that purpose, and commonly known as “Project Shield”. A platform operated, incidentally, by Jigsaw, the Google spin-off think tank founded by Jared Cohen.28

So the inconvenient truth behind “cyber” is that it is not the state that stands at the center of events, but the platforms. The platforms provide the infrastructure that comes under attack, and they are increasingly becoming targets themselves. Most importantly, the platforms are the only players with sufficient technical capacity and human resources to fend off these kinds of attacks, or to prevent them in the first place.29 Either way – if the worst comes to the worst, the state might have no choice but to slip under the umbrella of a welcoming platform, just like Brian Krebs did. “Cyber sovereignty” on a state level remains a pipe dream at present.

Conclusion

Platforms are already holding a prominent position within the social order, which in itself is becoming more and more digitalized. Platforms regulate critical infrastructure for the whole of society, and provide protection and order on the Internet. Increasingly the platform is in direct competition with the state, which generates dependencies that could turn out to be a threat for nation states.

Whether the state will maintain its independence and sovereignty in the long term will depend on its ability to operate and maintain digital infrastructure of its own. In the long run, the state needs to become a platform provider itself.30

Platforms, on the other hand, would be well advised to look at the democratic institutions that states have evolved over time in order to address their own domestic net policy issues. Even a rudimentary rule of law instead of generic “Terms of Service”, even the most tentative embrace of transparency, checks and balances, and the possibility of appeal in all actions, would make the platforms’ fight against hate speech and fake news more credible and fair, and most certainly more successful.31

In short: platforms need to become more like nation states, and states need to become more like platforms.

In the meantime both sides, the state and the platform, don’t have much choice but to cultivate their mutually critical-cooperative relationships and collaborate in all three spheres – domestic net policy, foreign net policy, and security net policy. It should be noted that competition between the two might even be advantageous for the citizen (or user) in the long run. While the state is trying to protect me from the overbearing access of the platforms, platform providers are trying to protect me from the excessive data collection of the state.

  1. Alex Heath: Speculation is mounting that Mark Zuckerberg wants to serve in government, in Business Insider, http://www.businessinsider.de/speculation-mounting-that-mark-zuckerberg-wants-work-government-2017-1?r=US&IR=T, 01/05/2017.
  2. Michael Seemann: Das Neue Spiel – Strategien für die Welt nach dem digitalen Kontrollverlust, Freiburg 2014, p. 204 ff.
  3. David Singh Grewal: Network Power – The Social Dynamics of Globalization, p. 9.
  4. Michael Seemann: Das Neue Spiel – Strategien für die Welt nach dem digitalen Kontrollverlust, Freiburg 2014, p. 223.
  5. Markus Beckedahl: NetzDG: Fake-Law gegen Hate-Speech, in Netzpolitik https://netzpolitik.org/2017/netzdg-fake-law-gegen-hate-speech/, 06/30/2017.
  6. See: http://netzpolitik.org.
  7. On the one hand, this is due to the fact that these are still profit-oriented companies, and this kind of regulation doesn’t generate any more turnover, but a lot of additional costs instead. On the other hand, most company founders and employees in Silicon Valley hail from the startup culture largely dominated by libertarian thought – a context in which any intervention into current debates is interpreted as an assault against freedom of speech.
  8. Mark Zuckerberg: Building Global Community, https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634, 02/16/2017.
  9. Kerry Flynn: Twitter just took its biggest stance yet on hate speech, http://mashable.com/2017/10/17/twitter-hate-speech-abuse-new-rules-women-boycott/#vfo7gOJrokqD, 10/17/2017.
  10. Dan Seitz: Google Is Cracking Down On Fake News In Search Results, http://uproxx.com/technology/google-search-results-fake-news/, 03/13/2017.
  11. David Ingram, Joseph Menn: Internet firms shift stance, move to exile white supremacists, https://www.reuters.com/article/us-virginia-protests-tech/internet-firms-shift-stance-move-to-exile-white-supremacists-idUSKCN1AW2L5, 08/16/2017.
  12. Kerry Flynn: Facebook’s ‘Trust Indicators’ is apparently a gift to select media partners, http://mashable.com/2017/11/16/facebook-trust-indicators-fake-news-problem/, 11/16/2017.
  13. Julia Fioretti: EU increases pressure on Facebook, Google and Twitter over user terms, https://www.reuters.com/article/us-socialmedia-eu-consumers/eu-increases-pressure-on-facebook-google-and-twitter-over-user-terms-idUSKBN1A92D4, 07/24/2017.
  14. Hillary Clinton: Statement: Hillary Clinton on internet freedom, https://www.ft.com/content/f0c3bf8c-06bd-11df-b426-00144feabdc0, 01/21/2010.
  15. Wikipedia: Jared Cohen, https://en.wikipedia.org/wiki/Jared_Cohen.
  16. Ewen MacAskill: US confirms it asked Twitter to stay open to help Iran protesters, https://www.theguardian.com/world/2009/jun/17/obama-iran-twitter, 06/17/2009.
  17. Charles Arthur: Google and Twitter launch service enabling Egyptians to tweet by phone, https://www.theguardian.com/technology/2011/feb/01/google-twitter-egypt, 02/01/2011.
  18. Since 2013 Google and Twitter, for instance, have been contesting a number of secret court orders on the highest levels of jurisdiction. See for example: Sam Byford: Google challenges US government’s private data demand in court, https://www.theverge.com/2013/4/5/4185732/google-fights-national-security-letter, 04/05/2013.
  19. Wikipedia: FBI-Apple encryption dispute, https://en.wikipedia.org/wiki/FBI–Apple_encryption_dispute.
  20. Not to mention the constant conflicts regarding interfaces, standards and market shares, even though these also have political weight, of course.
  21. Katie Zawadski: American Alt-Right Leaves Facebook for Russian Site VKontakte, https://www.thedailybeast.com/american-alt-right-leaves-facebook-for-russian-site-vkontakte, 03/11/2017.
  22. Wikipedia: Stuxnet, https://en.wikipedia.org/wiki/Stuxnet.
  23. The attacks went down in history as “Operation Aurora”. Wikipedia: Operation Aurora: https://en.wikipedia.org/wiki/Operation_Aurora.
  24. Axel Kannenberg: USA: Nordkorea steckt hinter Hackerangriff auf Sony Pictures, https://www.heise.de/newsticker/meldung/USA-Nordkorea-steckt-hinter-Hackerangriff-auf-Sony-Pictures-2504888.html, 12/19/2014.
  25. Handelsblatt: Zu Land, zu Wasser, in der Luft – und im Internet, http://www.handelsblatt.com/politik/deutschland/bundeswehr-erhaelt-cyber-truppe-zu-land-zu-wasser-in-der-luft-und-im-internet/13505076.html, 04/24/2016.
  26. Lead investigator of this incident was the Citizen Lab, which also published a detailed report on their findings. Citizen Lab: China’s Great Cannon: https://citizenlab.ca/2015/04/chinas-great-cannon/, 04/10/2015.
  27. Wikipedia: DDoS: https://en.wikipedia.org/wiki/Denial-of-service_attack#Distributed_attack.
  28. Brian Krebs: How Google Took on Mirai, KrebsOnSecurity, https://krebsonsecurity.com/2017/02/how-google-took-on-mirai-krebsonsecurity/, 02/03/2017.
  29. The main problem of states in this area is actually finding suitable staff. IT security experts are what might be called a rarity in human resources, and so the industry tempts them with exorbitant fees and career options. The state, and the military in particular, can hardly keep up with either of these.
  30. How that might work out was the topic of an opinion I presented in the context of an expert hearing in the German Bundestag. See Michael Seemann: Stellungnahme: Fragenkatalog für das Fachgespräch zum Thema „Interoperabilität und Neutralität von Plattformen“ des Ausschusses Digitale Agenda am 14.12.2016, https://www.bundestag.de/blob/484608/b1dc578c0fdd28b4e53815cda384335b/stellungnahme-seemann-data.pdf, 12/12/2016.
  31. This suggestion of mine was first presented in my re:publica talk in 2016. See Michael Seemann: Netzinnenpolitik – Grundzüge einer Politik der Plattformgesellschaft, https://www.youtube.com/watch?v=eQ-a13ZL33g, 03/11/2016.


What is Platform Politics? Outlines of a New Form of Political Power

/***** I wrote an article about platform politics for the Zeitschrift für sozialistische Politik und Wirtschaft (SPW). It can also be downloaded here as a PDF. *****/

At the beginning of this year, shortly after Donald Trump’s inauguration, a rumor spread that Facebook founder Mark Zuckerberg was planning to enter the race as a presidential candidate in 2020. Now, it is understandable that after Trump’s victory one might consider anything possible, but the speculation rested solely on the so-called “listening tour”, Zuckerberg’s trip across the USA on which he wanted to meet “Facebook users” in person.1

The rumor is just one symptom of the general incomprehension of our times. For Zuckerberg has long been a politician. He has enormous influence on the daily lives of two billion people. He makes decisions that shape how these people associate, how they treat one another, even how they see the world. Zuckerberg is perhaps the most powerful politician in the world. Any job in traditional politics – including that of US President – would be a demotion.

In this text, I want to try to identify and analyze how platforms make politics, how they organize their power base, and in which fields their politics is already changing the world today. But first, we need to clear up three misunderstandings about platform politics.

Three Misunderstandings about Platform Politics

1. Platforms are not (only) objects of politics, but also powerful political subjects.

When we speak of “platform politics” or platform regulation, we usually think of platforms as the object of regulation and of politics. That is not wrong as such, but it obscures the far more essential point that platforms today are themselves powerful regulators and political actors.

Platforms are the infrastructures of our digital coexistence – with the emphasis on “structure”. For this structure is neither arbitrary nor neutral. To dictate the structure of communication is already a political act: it enables certain interactions and reduces the likelihood of other kinds of communication; it is already a deep intervention into our coexistence, and thus political.

It therefore makes sense to think of platforms not merely as companies offering services on the Internet, but to regard them as political entities, even institutions.2 Their influence on political debate, on our coexistence and thus on political decisions is in no way inferior to that of the traditional media. Platforms can be understood as the fifth estate. Unlike the other four estates, however, they are not organized nationally, but think and act globally per se. And unlike the other institutions, they do not try to play up their socio-political role – on the contrary: bearing political responsibility is, after all, bad for business. Platforms rather tend to downplay their own political power and to refuse responsibility. They are political actors against their will.

2. Platforms exercise a different form of power.

One of the reasons why platforms are still not taken seriously as political actors is the general lack of understanding of the mechanics of their power.

When politicians confront platforms, they like to throw the weight of their political legitimacy into the scales. They speak of the “primacy of politics”, as if to reassure themselves and others of their capacity to act. This primacy is derived from the fact that the politician came into office through a sovereign, collective decision. But platforms, too, generate a form of legitimacy through collective decisions – ones that simply work a little differently.

In his book Network Power, David Singh Grewal shows how the adoption of standards can be understood as a collective decision.3 Standards are conditions of possibility for interaction, which is why every decision for or against a standard always carries social consequences. And the fact that these decisions are not made simultaneously, as in an election, but staggered over time (“aggregated”), does not diminish their social power.

The power of these aggregated, collective decisions is nothing new. It applies to the language we speak, to the manners we cultivate or accept, and of course also to the decision of which network services we want to be reachable on. After all, we are not on Facebook because of its great product quality, but because all our friends are on Facebook.

In economics, this phenomenon is called the “network effect”, but Grewal is quite right to understand it as a political power factor. Once a standard has reached a certain degree of diffusion, the forces acting on the individual become so great that there is little choice but to adopt the standard as well. The alternative is often social exclusion. We accept this “network power” that standards exercise, because it can never be brought to bear by individuals. At least that applies to open standards. No one can prevent me from learning Russian, or from putting a server on the Internet using open protocols like TCP/IP. The social pressure always emanates from the community as a whole and cannot be directed by individuals.

Network power becomes “platform power”, however, when the adopted standard contains central mechanisms of exclusion. Facebook can take away my access to my friends at any time, or restrict it temporarily or locally. Control over access to the standard is, on the one hand, the key to the platforms’ business model and, on the other, the basis of their political power.

In short: platform power = network power + control of access.

3. Platform regulation increases the power of platforms

Of course, conventional politics has by now realized that platforms wield an uncanny power, but because politicians do not understand this power, they make matters even worse. They believe that with Google, Facebook, Apple and Co. they are simply dealing with the typical corporate power they know from Siemens or Deutsche Bank. Accordingly, they reach for the recipe book of political regulation to counter this power.

But platform providers are not simply big companies, and their power rests on far more than money and global reach. Platforms rather confront states as systemic competitors – even if neither side wants to admit it.

Efforts of conventional politics to regulate platforms therefore end in a paradox. While politicians shake their fists at Google and Facebook, they keep handing the platforms new sovereign competences. Every requirement that policymakers devise strengthens the political agency and legitimacy of platforms. One example is the ECJ ruling on the right to be forgotten, which obliges Google to purge search results about individuals according to a very vague catalog of criteria.4 Another example is the more recent Network Enforcement Act of Justice Minister Heiko Maas, which obliges Facebook and other platforms to delete “obviously unlawful content”.5 In both cases, the state cedes competences of jurisdiction and law enforcement to the platforms. On the one hand, this makes sense, because their platform power and deep, data-rich insights make platforms the logical points of contact for regulating the digital sphere. On the other hand, it is fatal, because the state thereby strengthens the regime of the platforms. It makes itself dependent on its own systemic competitor.

Three Spheres of Platform Politics

The political influence of platforms is manifold. Without any claim to completeness, I would like to distinguish three policy spheres in which platforms are already very influential today and will gain even more influence in the future: domestic net policy, foreign net policy, and security net policy.

Domestic Net Policy

The term “Netzpolitik” (net politics) has become established above all in Germany – starting with the blog of the same name for political questions around the net.6 It gathers questions of data protection, net neutrality, freedom from censorship and other net-related policy areas. What is essential is that the net is always seen as the object of net politics.

The term “domestic net policy” is first of all meant to signal the admission that this inside/outside and object/subject relationship no longer exists, and that the political questions of the net increasingly grow out of the inside of the net itself. And that they can only be solved inside the net. This includes the much-cited problems of hate speech, trolling and fake news, but also older problems such as identity theft or doxxing (the publication of personal information against the will of those affected).

Since these problems mostly arise on platforms, it is only logical to expect the corresponding countermeasures from them as well. This does happen, but still to a degree generally felt to be insufficient. In fact, platforms display a certain aversion to regulation. They shy away from actually deploying the political power they de facto hold, for example to establish and enforce stricter rules.7

Nevertheless, a certain awareness of the problem seems to have taken hold. Facebook’s new mission statement from February already pointed in this direction8, and there were similar signals from Twitter9 and Google10. After the Nazi march in Charlottesville escalated, many platform providers took action and banned right-wing extremist accounts and websites from their services. Twitter and LinkedIn suspended numerous accounts of “white supremacists”; Facebook, Google and GoDaddy (a popular domain registrar) blocked domains and groups spreading hate. Above all, the Nazi portal Daily Stormer was affected, and was even thrown out by the content delivery network Cloudflare.11

It remains open, however, whether such measures are suited to getting the problems cited under control. The results so far give little cause for hope.12

Netzaußenpolitik

Während Facebooks Umgang mit Hate Speech und Fake News der Netzinnenpolitik zugeordnet werden kann, wäre das bereits erwähnte Netzwerkurchsetzungsgesetz von Heiko Maß Gegenstand der Netzaußenpolitik. In der Netzaußenpolitik treffen vor allem (aber nicht nur) Plattformen und Staaten aufeinander und handeln ihre gegenseitigen Interessen miteinander aus. Der Standardfall ist dabei natürlich der Versuch der Regulierung von Plattformen durch Staaten, wie wir es bereits besprochen haben. So hat die EU gleich mehrere Verfahren gegen Facebook und Google am laufen, aber auch die Konflikte zwischen US-Regierung und den Plattformen treten immer deutlicher zu Tage.13

Doch die Beziehungen zwischen Plattformen und Staaten waren in der Vergangenheit nicht immer so schlecht. Insbesondere das amerikanische Außenministerium unter Hillary Clinton nutzte Plattformen für außenpolitische Zwecke. In ihrer bedeutenden Rede von 2010 zu Internet und Freiheit bezeichnete sie die Plattformberteiber als wichtige Partner, wenn es darum geht, Demokratie und Menschenrechte in die Welt zu tragen.14

Eine besondere Scharnierfunktion erfüllte dabei Jared Cohen.15 Er kam noch unter Condoleezza Rice ins Außenministerium, stieg aber vor allem unter Clinton auf. Als 2009 im Iran eine Revolution auszubrechen drohte, rief er bei Twitter an und überzeugte das Unternehmen, seine geplante Wartungsdowntime aufzuschieben.16 Twitter spielte eine wichtige Rolle bei der Koordination der Aufstände.

Als dann Anfang 2011 der Arabische Frühling ausbrach, arbeitete Cohen bereits bei Google und half, verschiedene Projekte zwischen den Plattformen zu koordinieren. Facebook, Twitter und Google versuchten jeweils auf ihre Weise, die Aufstände zu unterstützen, und teilweise kooperierten sie dazu auch, wie zum Beispiel beim Dienst speak2tweet: Google stellte eine Telefonnummer bereit, unter der Menschen aus Ägypten aufs Band sprechen konnten. Das Gesagte wurde dann per Twitter veröffentlicht und so die Internetabschaltung umgangen.17

Spätestens seit den Snowden-Enthüllungen von 2013 ist das Verhältnis zwischen Silicon Valley und Washington deutlich abgekühlt. Plattformen versuchen seitdem, sich stärker gegen Eingriffe des Staates abzugrenzen und zu schützen. Dies geschieht vor allem durch die allgemeine Verbreitung von verschlüsselten Verbindungen und die Erhöhung der technischen wie rechtlichen Sicherheit.18 Die USA sehen diese Entwicklung ihrerseits mit wachsendem Unbehagen, insbesondere den Trend zu immer besser kryptografisch abgesicherten Systemen.

Im Frühjahr 2016 eskalierte der Streit schließlich anhand des iPhones, das FBI-Ermittler bei dem Attentäter von San Bernardino gefunden hatten. Es war gesperrt und verschlüsselt und die Ermittler verlangten von Apple Kooperation bei der Entschlüsselung. Apple verweigerte. Für das Aufschließen hätte Apple in die Sicherheitssoftware eine Schwachstelle einbauen müssen. Aus Apples Sicht ein gefährliches Unterfangen, das die Sicherheit aller anderen Geräte und damit der Nutzer gemindert hätte. Am Ende musste das FBI mit einer externen Sicherheitsfirma zusammenarbeiten, um das iPhone zu entsperren.19

Neben den Kooperationen und Konflikten der Plattformen mit Staaten gibt es natürlich noch die Beziehungen der Plattformen untereinander.20 Politisch greifbar ist die Entwicklung, dass Facebook zunehmend Nutzer und Nutzerinnen aus dem rechten und rechtsradikalen Milieu an VKontakte verliert.21 VKontakte ist Facebooks russisches Pendant, hat aber ganz andere Richtlinien. Während man zum Beispiel auf Facebook Probleme bekommt, wenn man homophobe Postings veröffentlicht, bekommt man auf VKontakte Probleme, wenn man die Regenbogenfahne veröffentlicht.

Eine Spaltung der Gesellschaft entlang unterschiedlicher Plattformen und ihrer Policies erscheint durchaus als ein plausibles Szenario und bietet zukünftig viel Stoff für netzaußenpolitische Konflikte.

Netzsicherheitspolitik

Seit einiger Zeit wird in politischen Kreisen vermehrt von „Cyberwar“ und „Cybersicherheit“ gesprochen. Gemeint ist eine neue Form des Krieges mit digitalen Mitteln. Die USA und Israel hatten 2010 vorgelegt und mittels einer hochgerüsteten „Cyberwaffe“ – einem speziell entwickelten Computerwurm – Urananreicherungsanlagen im Iran sabotiert.22 Der Stuxnet-Schock war der Beginn eines allgemeinen Wettrüstens in Sachen Hacking-Kapazitäten weltweit. Cyberangriffe sind seitdem Alltag geworden, seien es die Angriffe von China auf Google23, von Nordkorea auf Sony Pictures24 oder von Russland auf die amerikanische Wahl. „Cyber“ wird gerne damit erklärt, dass das Militär verschiedene Einsatzgebiete kennt – Boden (Armee), Wasser (Marine), Luft (Luftwaffe) – und nun mit „Cyber“ eben ein neues Einsatzgebiet hinzukomme, für das man entsprechende Kapazitäten aufbauen muss.25

Doch das wesentliche Missverständnis bei dem Thema ist die Annahme, dass Cyberwars vornehmlich zwischen Staaten stattfinden. Das ist bereits heute nicht der Fall. Zum einen ist fast jeder „Cyber-Angriff“ zumindest auch ein Angriff auf eine Plattform, sei es auf das Betriebssystem von Microsoft (wie bei Stuxnet und vielen anderen Fällen) oder auf Services (der Angriff aus China auf Google richtete sich auf Gmail-Postfächer, der russische Hack des Postfachs von John Podesta betraf ebenfalls Gmail). Fast immer ist eine Software oder ein Dienst eines Plattformanbieters betroffen.

Zum anderen zielen schon heute viele Angriffe konkret auf Plattformanbieter als Primärziel. Am prominentesten ist vielleicht der Angriff Chinas auf die Entwicklerplattform GitHub von 2015. GitHub ist eine Plattform, auf der Softwareentwickler ihren Code versionieren und gleichzeitig mit anderen teilen können. Beinahe alle populären Open-Source-Projekte sind dort zu finden, unter anderem auch eines mit dem Titel „The Great Fire“. Als „Great Firewall“ wird für gewöhnlich die mächtige Internetzensurarchitektur Chinas bezeichnet, und so war „The Great Fire“ ein eigens bereitgestelltes Toolkit zur Umgehung eben dieser. Das gefiel der chinesischen Regierung natürlich nicht.

Nun ist es für China nichts Ungewöhnliches, unliebsame Dienste einfach mittels der Great Firewall auszusperren, doch GitHub bildet hier eine Ausnahme. Den einheimischen Entwicklern den Zugang zu GitHub zu versperren, käme einer Komplettaufgabe der chinesischen Softwarebranche gleich. Etwas, das sich nicht einmal China leisten kann. Da aber China in seiner Zensurinfrastruktur Millionen von Anfragen pro Sekunde ins Leere laufen lassen muss, kam man auf die Idee, die geblockten Anfragen stattdessen auf ein Ziel im Internet umzuleiten. Das ist die Idee hinter „The Great Cannon“.26

Was auf GitHub einprasselte, waren Millionen und Abermillionen Anfragen aus ganz China, die den Service bis weit an die Belastungsgrenze führten. In IT-Sicherheitskreisen nennt man das einen DDoS-Angriff – einen „Distributed Denial of Service“-Angriff.27

Plattformen sind aber nicht nur Ziel von Angriffen, sondern immer öfter auch die letzte Verteidigungslinie für Angegriffene. 2016 traf ein DDoS-Angriff das Blog des Sicherheitsforschers Brian Krebs. Bei einer Analyse des Angriffs stellte sich heraus, dass der Angriff vornehmlich von Internetroutern und Sicherheitskameras ausgegangen war. Der Grund: Die Schadsoftware „Mirai“ hatte Sicherheitslücken (vor allem unveränderte Standardpasswörter) in solchen Geräten ausgenutzt und es Angreifern erlaubt, viele Millionen von ihnen virtuell in Besitz zu nehmen. Es war die größte Bot-Armee, die die Welt je gesehen hatte.

Krebs wusste sich nicht anders zu helfen, als unter Googles eigens für solche Fälle eingerichtete Serverinfrastruktur namens „Project Shield“ zu schlüpfen. Diese wird – nebenbei bemerkt – von Jigsaw betrieben, Jared Cohens aus Google ausgegründetem Thinktank.28

Die unbequeme Wahrheit hinter „Cyber“ ist, dass nicht Staaten im Mittelpunkt des Geschehens stehen, sondern Plattformen. Sie sind es, die die Infrastruktur bereitstellen, die angegriffen wird. Sie sind sehr häufig auch selbst Ziele der Attacken. Vor allem aber sind sie derzeit die Einzigen, die über die technischen Kapazitäten und menschlichen Ressourcen verfügen, Angriffe abzuwehren, ihnen vorzubeugen und am Ende den Tag zu retten.29 So oder so: Den Staaten wird im Falle des Falles nichts anderes übrig bleiben, als wie Brian Krebs unter den Schutzschirm der Plattformen zu schlüpfen. Staatliche „Cybersouveränität“ bleibt derzeit ein unrealistischer Traum.

Fazit

Plattformen besetzen bereits heute zentrale Stellen der gesellschaftlichen Ordnung, die selbst zunehmend zur digitalen wird. Sie regulieren für die Gesellschaft kritische Infrastruktur und bieten Schutz und Ordnung im Internet. Sie stehen damit in Konkurrenz zu Staaten und generieren Abhängigkeiten, die für die Staaten bedrohlich werden könnten.

Ob Staaten in dieser Hinsicht auf lange Frist ihre Unabhängigkeit und Souveränität aufrecht erhalten werden, wird davon abhängen, ob es ihnen gelingt, eine eigene digitale Infrastruktur ins Werk zu setzen. Der Staat muss auf lange Sicht selbst zum Plattformanbieter werden.30

Plattformen dagegen täten gut daran, sich bei den Staaten die gewachsenen, demokratischen Institutionen abzuschauen, um mit ihren netzinnenpolitischen Problemen fertig zu werden. Ein wenigstens rudimentäres Recht statt Terms of Service, ein Ansatz von Gewaltenteilung, Transparenz und Einspruchsmöglichkeiten bei allen Verfahren würden den Kampf gegen Hate Speech und Fake News glaubhafter, gerechter und mit Sicherheit auch erfolgreicher machen.31

Kurz: Plattformen müssen mehr werden wie Staaten und Staaten mehr wie Plattformen.

Beiden, d. h. Staaten und Plattformen, bleibt derweil nicht viel anderes übrig, als ein kritisch-kooperatives Verhältnis zu pflegen und in allen drei Feldern – Netzinnenpolitik, Netzaußenpolitik und Netzsicherheitspolitik – zu kooperieren. Es bleibt noch darauf hinzuweisen, dass das Konkurrenzverhältnis zwischen beiden im Zweifel für den Bürger bzw. User sogar gewinnbringend sein kann. Während der Staat mich vor dem überbordenden Zugriff der Plattformen zu beschützen sucht, versucht der Plattformanbieter, mich vor dem Datenzugriff des Staates zu schützen.

  1. Alex Heath: Speculation is mounting that Mark Zuckerberg wants to serve in government, http://www.businessinsider.de/speculation-mounting-that-mark-zuckerberg-wants-work-government-2017-1?r=US&IR=T, 05.01.2017.
  2. Vgl. Michael Seemann: Das Neue Spiel – Strategien für die Welt nach dem digitalen Kontrollverlust, 2014, S. 204 ff.
  3. Grewal, David Singh: Network Power – The Social Dynamics of Globalization, S. 9.
  4. Michael Seemann: Das Neue Spiel – Strategien für die Welt nach dem digitalen Kontrollverlust, 2014, S. 223.
  5. Markus Beckedahl: NetzDG: Fake-Law gegen Hate-Speech, https://netzpolitik.org/2017/netzdg-fake-law-gegen-hate-speech/, 30.06.2017.
  6. Siehe netzpolitik.org.
  7. Das ist einerseits damit zu erklären, dass es sich immer noch um gewinnorientierte Firmen handelt und diese Art von Regulierung keinen Umsatz bringt, aber einen ganzen Rattenschwanz an Kosten verursacht. Andererseits kommen die Firmengründer und Angestellten aus der weitgehend von libertären Denkweisen dominierten Startup-Kultur des Silicon Valleys, wo jegliche Eingriffe in Debatten als Eingriffe gegen die Redefreiheit interpretiert werden.
  8. Mark Zuckerberg: Building Global Community, https://www.facebook.com/notes/mark-zuckerberg/building-global-community/10154544292806634, 16.02.2017.
  9. Kerry Flynn: Twitter just took its biggest stance yet on hate speech, http://mashable.com/2017/10/17/twitter-hate-speech-abuse-new-rules-women-boycott/#vfo7gOJrokqD, 17.10.2017.
  10. Dan Seitz: Google Is Cracking Down On Fake News In Search Results, http://uproxx.com/technology/google-search-results-fake-news/, 13.03.2017.
  11. David Ingram, Joseph Menn: Internet firms shift stance, move to exile white supremacists, https://www.reuters.com/article/us-virginia-protests-tech/internet-firms-shift-stance-move-to-exile-white-supremacists-idUSKCN1AW2L5, 16.08.2017.
  12. Kerry Flynn: Facebook’s ‚Trust Indicators‘ is apparently a gift to select media partners, http://mashable.com/2017/11/16/facebook-trust-indicators-fake-news-problem/, 16.11.2017.
  13. Julia Fioretti: EU increases pressure on Facebook, Google and Twitter over user terms, https://www.reuters.com/article/us-socialmedia-eu-consumers/eu-increases-pressure-on-facebook-google-and-twitter-over-user-terms-idUSKBN1A92D4, 24.07.2017.
  14. Hillary Clinton: Statement: Hillary Clinton on internet freedom, https://www.ft.com/content/f0c3bf8c-06bd-11df-b426-00144feabdc0, 21.01.2010.
  15. Wikipedia: Jared Cohen, https://en.wikipedia.org/wiki/Jared_Cohen
  16. Ewen MacAskill: US confirms it asked Twitter to stay open to help Iran protesters, https://www.theguardian.com/world/2009/jun/17/obama-iran-twitter, 17.06.2009.
  17. Charles Arthur: Google and Twitter launch service enabling Egyptians to tweet by phone, https://www.theguardian.com/technology/2011/feb/01/google-twitter-egypt, 01.02.2011.
  18. Google und Twitter zum Beispiel bekämpften seit 2013 etliche Geheimgerichtsbeschlüsse bis zur letzten Instanz. Siehe zum Beispiel: Sam Byford: Google challenges US government’s private data demand in court, https://www.theverge.com/2013/4/5/4185732/google-fights-national-security-letter, 05.04.2013.
  19. Wikipedia: FBI-Apple encryption dispute, https://en.wikipedia.org/wiki/FBI–Apple_encryption_dispute.
  20. Auf die ständigen Konflikte um Schnittstellen, Standards und Marktanteile will ich hier allerdings gar nicht eingehen, obwohl auch diese natürlich politisches Gewicht haben.
  21. Katie Zawadski: American Alt-Right Leaves Facebook for Russian Site VKontakte, https://www.thedailybeast.com/american-alt-right-leaves-facebook-for-russian-site-vkontakte, 11.03.2017.
  22. Wikipedia: Stuxnet, https://en.wikipedia.org/wiki/Stuxnet.
  23. Die Angriffe gingen unter dem Namen „Operation Aurora“ in die Geschichte ein. Wikipedia: Operation Aurora: https://en.wikipedia.org/wiki/Operation_Aurora.
  24. Axel Kannenberg: USA: Nordkorea steckt hinter Hackerangriff auf Sony Pictures, https://www.heise.de/newsticker/meldung/USA-Nordkorea-steckt-hinter-Hackerangriff-auf-Sony-Pictures-2504888.html, 19.12.2014.
  25. Handelsblatt: Zu Land, zu Wasser, in der Luft – und im Internet, http://www.handelsblatt.com/politik/deutschland/bundeswehr-erhaelt-cyber-truppe-zu-land-zu-wasser-in-der-luft-und-im-internet/13505076.html, 24.04.2016.
  26. Federführend bei der Untersuchung des Vorfalls war das Citizen Lab, das auch einen ausführlichen Bericht darüber geschrieben hat. Citizen Lab: China’s Great Cannon: https://citizenlab.ca/2015/04/chinas-great-cannon/, 10.04.2015.
  27. Wikipedia: DDoS: https://en.wikipedia.org/wiki/Denial-of-service_attack#Distributed_attack
  28. Brian Krebs: How Google Took on Mirai, KrebsOnSecurity, https://krebsonsecurity.com/2017/02/how-google-took-on-mirai-krebsonsecurity/, 03.02.2017.
  29. Das Hauptproblem von Staaten in diesem Bereich ist tatsächlich das Personal. IT-Sicherheitsexperten sind rar und werden in der Industrie entsprechend mit astronomischen Honoraren und Karrierechancen bedacht – bei beidem können Staaten, vor allem das Militär, schlecht mithalten.
  30. Wie das gehen könnte, habe ich im Rahmen einer Expertenanhörung des Deutschen Bundestags ausgeführt. Siehe Michael Seemann: Stellungnahme: Fragenkatalog für das Fachgespräch zum Thema „Interoperabilität und Neutralität von Plattformen“ des Ausschusses Digitale Agenda am 14.12.2016, https://www.bundestag.de/blob/484608/b1dc578c0fdd28b4e53815cda384335b/stellungnahme-seemann-data.pdf, 12.12.2016.
  31. Das habe ich 2016 in einem Vortrag auf der re:publica vorgeschlagen. Siehe Michael Seemann: Netzinnenpolitik – Grundzüge einer Politik der Plattformgesellschaft, https://www.youtube.com/watch?v=eQ-a13ZL33g, 11.03.2016.


Digital Tribalism – The Real Story About Fake News

Text by: Michael Seemann / Data Visualization by: Michael Kreil


[Download as PDF]
[(original) German Version]

The Internet has always been my dream of freedom. By this I mean not only the freedom of communication and information, but also the hope for a new freedom of social relations. Despite all the social mobility of modern society, social relations are still somewhat constricting today. From kindergarten to school, from the club to the workplace, we are constantly fed through organizational forms that categorize, sort and thereby de-individualize us. From grassroots groups to citizenship, the whole of society is organized like a group game, and we are rarely allowed to choose our fellow players. It’s always like: „Find your place, settle down, be a part of us.“

The Internet seemed to me to be a way out. If every human being can relate directly to every other, as my naive-utopian thought went, then there would no longer be any need for communalized structures. Individuals could finally counteract as peers and organize themselves. Communities would emerge as a result of individual relationships, rather than the other way around. Ideally, there would no longer be any structures at all beyond the customized, self-determined network of relations of the individual.1

The election of Donald Trump was only the latest incident to tear me rudely from my hopes. The Alt-Right movement – a union of right-wing radical hipsterdom and the nihilistic excesses of nerd culture – boasts that it „shitposted“ Trump into the White House. This refers to the massive support of the official election campaign by an internet-driven grassroots meme campaign. And even though you can argue that the influence of this movement on the election was not as great as the trolls would have you believe, the campaign clearly demonstrated the immense power of digital agitation.
But it wasn’t the discursive power of internet-driven campaigns that frightened me so badly this time. This had been common knowledge since the Arab Spring and Occupy Wall Street. It was the complete detachment from facts and reality unfolding within the Alt-Right which, driven by the many lies of Trump himself and his official campaign, has given rise to an uncanny parallel world. The conspiracy theorists and crackpots have left their online niches to rule the world.

In my search for an explanation for this phenomenon, I repeatedly came across the connection between identity and truth. People who believe that Hillary and Bill Clinton had a number of people murdered and that the Democratic Party was running a child sex trafficking ring in the basement of a pizza shop in Washington DC are not simply stupid or uneducated. They spread this message because it signals membership to their specific group. David Roberts coined the term „tribal epistemology“ for this phenomenon, and defines it as follows:

Information is evaluated based not on conformity to common standards of evidence or correspondence to a common understanding of the world, but on whether it supports the tribe’s values and goals and is vouchsafed by tribal leaders. “Good for our side” and “true” begin to blur into one.2

New social structures with similar tribal dynamics have also evolved in the German-speaking Internet. Members of these „digital tribes“ rarely know each other personally and often don’t even live in the same city or know each other’s real names. And yet they are closely connected online, communicating constantly with one another while splitting off from the rest of the public, both in terms of ideology and of network. They feel connected by a common theme, and by their rejection of the public debate which they consider to be false and „mainstream“.

It’s hardly surprising, then, that precisely these „digital tribes“ can be discovered as soon as you start researching the spread of fake news. Fake news is not, as is often assumed, the product of sinister manipulators trying to steer public opinion into a certain direction. Rather, it is the food for affirmation-hungry tribes. Demand creates supply, and not the other way around.

Since conducting the study at hand, I have become convinced that „digital tribalism“ is at the heart of the success of movements such as Pegida and the AfD party, as well as the Alt-Right and anti-liberal forces that have blossomed all over the world since 2016.

For the study at hand, we analysed hundreds of thousands of tweets over the course of many months, working through one research question after another, scouring heaps of literature, and developing and testing a whole range of theories. On the basis of Twitter data on fake news, we came across the phenomenon of digital tribalism, and took it from there.3 In this study, we show how fake news from the left and the right of the political spectrum is disseminated in practice, and how social structures on the Internet can reinforce the formation of hermetic groups. We also show what the concept of „filter bubbles“ can and cannot explain, and provide a whole new perspective on the right-wing Internet scene in Germany, which can help to understand how hate is developed, fortified, spread and organized on the web. However, we will not be able to answer all the questions that this phenomenon gives rise to, which is why this essay is also a call for further interdisciplinary research.

Blue Cloud, Red Knot

In early March 2017, there was some commotion on the German-language Twitter platform: the German authorities, it was alleged, had very quietly issued a travel warning for Sweden, but both the government and the media were hushing the matter up, since the warning had been prompted by an elevated terrorism threat. Many users agreed that the silence was politically motivated. The bigger picture is that Sweden, like Germany, had accepted a major contingent of Syrian refugees. Ever since, foreign media, and right-wing outlets in particular, have been claiming that the country is in the grip of a civil war. Reports about the terrorism alert being kept under wraps fed right into that belief.
For proof, many of these tweets did in fact refer to the section of the German Foreign Office website that includes the travel advisory for Sweden.4 Users who followed the link in early March did in fact find a level-3 (“elevated”) terrorism alert, which remains in place to this day. The website also notes the date of the most recent update: March 1, 2017. What it did not mention at the time was that the Swedish authorities had issued their revised terrorism alert a while back – and that it had been revised downwards rather than upwards, from level 4 (“high”) to level 3 (“elevated”).
After some time, the Foreign Office addressed the rumors via a clarification of facts on its website. Several media picked up on the story and the ensuing corrections. But the damage was done. The fake story had already reached thousands of people who came away feeling that their views had been corroborated: firstly, that conditions in Sweden resembled a civil war, and secondly, that the media and the political sphere were colluding to keep the public in the dark.
What happened in early March fits the pattern of what is known as fake news – reports that have virtually no basis in fact, but spread virally online due to their ostensibly explosive content.5 Especially in the right-wing sectors of the web, such reports are now “business as usual”.6
Data journalist Michael Kreil took a closer look at the case. He wanted to know how fake news spread, and whether corrections were an effective countermeasure. He collected the Twitter data of all accounts that had posted something on the issue, and flagged all tweets sharing the fake news as red and all those forwarding the correction as blue. He then compiled a graphic visualization of these accounts that illustrates the density of their respective networks. If two accounts follow each other and/or follow the same people, or are followed by the same people, they are represented by dots in greater proximity. In other words, the smaller the distance between two dots is, the more closely-knit the networking connections are between the accounts they refer to. The dot size corresponds to the account’s follower count.
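The proximity rule behind this layout (accounts that follow one another or share followers end up closer together) can be illustrated with a simple similarity measure over follower sets. This is a minimal, purely illustrative Python sketch: all accounts and follower sets are invented, and Jaccard similarity merely stands in for whatever metric the actual layout tool used.

```python
# Illustrative sketch (not the study's actual code): approximate "network
# proximity" by the Jaccard similarity of follower sets. Accounts sharing
# many followers score near 1 and would be drawn close together; accounts
# with disjoint audiences score 0 and drift apart. All data is invented.

def jaccard(a: set, b: set) -> float:
    """Shared followers relative to the union of both audiences."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

followers = {
    "blue1": {"u1", "u2", "u3", "u4"},   # correction disseminators
    "blue2": {"u2", "u3", "u4", "u5"},
    "red1":  {"u8", "u9"},               # fake news disseminators
    "red2":  {"u8", "u9", "u10"},
}

print(jaccard(followers["blue1"], followers["blue2"]))  # 0.6 (close)
print(jaccard(followers["red1"], followers["red2"]))    # ~0.67 (close)
print(jaccard(followers["blue1"], followers["red1"]))   # 0.0 (separate clusters)
```

The clear red/blue separation in the graphs corresponds to exactly this pattern: high similarity within each cluster, near-zero similarity across them.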

The result is striking: where we might expect to see one large network, two almost completely distinct networks appear. The disparity between the two is revealed both by the coloring and by the relative position of the accounts. On the left, we see a fairly diffuse blue cloud with frayed edges. Several large blue dots relatively close together in the center represent German mass media such as Spiegel Online, Zeit Online, or Tagesschau. The blue cloud encompasses all those accounts that reported or retweeted, which is to say, forwarded, the correction.
On the other side, we see a somewhat smaller and more compact red mass consisting of numerous closely-spaced dots. These are the accounts that disseminated the fake news story. They are not only closely interconnected, but also cut off from the network represented by the large blue cloud.
What is crucial here is the clear separation between the red and blue clusters. There is virtually no communication between the two. Every group keeps to itself, its members connecting only with those who already share the same viewpoint.

“That’s a filter bubble”, Michael Kreil said when he approached me, graph in hand, and asked whether I wanted to join him in investigating the phenomenon. Kreil is on the DataScience&Story team at Tagesspiegel, a daily newspaper published in Berlin, which puts him at the forefront of German data journalism. I accepted, even though I was skeptical of the filter bubble hypothesis.
At first glance, the hypothesis is plausible. A user is said to live in a filter bubble when his or her social media accounts no longer present any viewpoints that do not conform to his or her own. Eli Pariser coined the term in 2011, referring to the algorithms used by Google and Facebook to personalize our search results and news feeds by pre-sorting search results and news items.7 Designed to gauge our personal preferences, these algorithms present us with a filtered version of reality. Filter bubbles exist on Twitter as well, seeing as every user can create a customized small media network by following accounts that stand for a view of the world that interests them. Divergent opinions, conflicting worldviews, or simply different perspectives simply disappear from view.
This is why the filter bubble theory has frequently served as a convincing explanation in the debate about fake news. If we are only ever presented with points of view that confirm our existing opinions, rebuttals of those opinions might not even reach us any more. So filter bubbles turn into an echo chamber where all we hear is what we ourselves are shouting out into the world.

Verification

Before examining the filter bubble theory, however, we first tried to reproduce the results using a second example of fake news. This time, we found it in the mass media.
On February 5, 2017, the infamous German tabloid BILD reported on an alleged „sex mob“ of about 900 refugees who had allegedly harassed people on New Year’s Eve in Frankfurt’s Fressgass district. This was explosive news because similar riots had occurred in Cologne on New Year’s Eve the previous year. The BILD story quickly made the rounds, mainly because it gave the impression that the Frankfurt police force had kept the incident quiet for more than a month.
In fact, the BILD journalist had been told the story by a barkeeper who turned out to be a supporter of the right-wing AfD party, and BILD had printed it immediately without sufficient fact-checking. As it turned out, the police were unable to confirm any of this, and no other source could be found for the incident. Even so, other media outlets picked up on the story, though often with a certain caution. In the course of the scandal, it became clear that the barkeeper had made up the entire story, and BILD was forced to apologize publicly.

This time, we had to take a slightly different approach for the evaluation, because this particular debate on Twitter had been far more complex, and many reports couldn’t be clearly assigned to either of the two categories, „fake news“ or „correction“. We needed a third category in between. We collected all the articles on the topic in a spreadsheet and flagged them as either spreading the false report (red), or just passing it on in a distanced or indecisive manner (yellow). Phrases such as „According to reports in BILD…“, or indications that the police could not confirm the events, were sufficient for the label „indecisive“. Of course, we also collected articles disproving the fake news story (blue). We also assigned some of the tweets to a fourth category: meta. The mistake BILD made sparked a broader debate on how a controversial but well-established media company could become the trigger point of a major fake news campaign. These meta-debate articles were colored green.8
Despite these precautionary measures, it is obvious even at first glance that the results of our first analysis have been reproduced here. The cloud of corrections, superimposed with the meta-comments, is visible in blue and green, brightened up here and there by yellow specks of indecision. Most noticeably, the red cluster of fake news clearly stands out from the rest again, in terms of color and of connectivity. Our fake news bubble is obviously a stable, reproducible phenomenon.

The Filter Bubble Theory

So we were still dealing with the theory that we were seeing the manifestation of a filter bubble. To be honest, I was skeptical. The existence of a filter bubble is not what our examples prove: The filter bubble theory makes assertions about who sees which news, while our graph only visualizes who is disseminating which news. So to prove the existence of a filter bubble, we would have to find out who does or doesn’t read these news items.

This information, however, is also encoded in the Twitter data, and can simply be extracted. For any given Twitter account, we can see the other accounts it follows. In a second step, we can bring up all the tweets sent from those accounts. Once we have the tweets from all the accounts the original account follows, we can reconstruct the timeline of the latter. In other words, we can peer into the filter bubble of that particular account and recreate the worldview within it. In a third step, we can determine whether a particular piece of information penetrated that filter bubble or not.
In this manner, we were able to retrieve the timelines of all accounts that had spread the fake news story, and scan them for links to the correction. The result is surprising: Almost all the disseminators of the false terrorism alert story – 89.2 percent, to be exact – had also had the correction in their timelines. So we repeated that test for the Fressgass story, too. In this second example, 83.7 percent of fake news distributors had at least technically been reached by the counterstatement.
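The three steps just described (collect the accounts a user follows, merge their tweets into a reconstructed timeline, scan it for the correction link) reduce to a simple set-and-scan computation. Here is a schematic Python sketch; the account names, tweets and the correction URL are all invented placeholders, not the study's data.

```python
# Schematic sketch of the timeline-reconstruction step (all data invented).
# For each fake news disseminator we merge the tweets of every account they
# follow into a reconstructed timeline, then check whether the correction
# link appears anywhere in it.

CORRECTION = "https://example.org/correction"  # placeholder URL

follows = {                        # who each disseminator follows
    "disseminator_a": {"news1", "friend1"},
    "disseminator_b": {"friend2"},
}
tweets_by_account = {              # tweets sent by the followed accounts
    "news1":   ["fake story!", "Correction: " + CORRECTION],
    "friend1": ["cat pictures"],
    "friend2": ["fake story!"],
}

def reached_by_correction(user: str) -> bool:
    # step 2: reconstruct the user's timeline from everyone they follow
    timeline = [tweet for account in follows[user]
                for tweet in tweets_by_account.get(account, [])]
    # step 3: did the correction penetrate this "filter bubble"?
    return any(CORRECTION in tweet for tweet in timeline)

share = sum(reached_by_correction(u) for u in follows) / len(follows)
print(f"{share:.1%} of disseminators had the correction in their timeline")
```

For this toy data the share is 50.0 percent; the study arrived at its 89.2 and 83.7 percent figures by running the same kind of check over the real follower graphs.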

This finding contradicts the filter bubble theory. It was obviously not for technical reasons that these accounts continued to spread the fake news story and not the subsequent corrections. At the very least, the filter bubbles of these fake news disseminators had been far from airtight.
Without expecting much, we ran a counter-test: What about those who had forwarded the correction – what did their timelines reveal? Had they actually been exposed to the initial fake news story they were so heroically debunking? We downloaded their timelines and looked into their filter bubbles. Once again, we were surprised by what we found: a mere 43 percent of the correction disseminators could have known about the fake news on Sweden; in the case of the Fressgass story, the figure is even lower, at 33 percent. So these results do indicate a filter bubble – albeit in the other direction.

To sum up, the filter bubble effect insulating fake news disseminators against corrections is negligible, whereas the converse effect is much more noticeable.9 These findings might seem to turn the entire filter bubble discourse upside down. No, according to our examples, filter bubbles are not to blame for the unchecked proliferation of fake news. On the contrary, we have demonstrated that while a filter bubble can impede the spread of fake news, it did not shield its disseminators from being confronted with the correction.

Cognitive Dissonance

We are not dealing with a technological phenomenon. The reason why people whose accounts appear within the red area are spreading fake news is not a filter-induced lack of information or a technologically distorted view of the world. They receive the corrections, but do not forward them, so we must assume that their dissemination of fake news has nothing to do with whether a given piece of information is correct or not – and everything to do with whether it suits them.

Instead, this resembles a phenomenon Leon Festinger called „Cognitive Dissonance“ and investigated in the 1950s.10 Acting as his own guinea pig, Festinger joined a sect that claimed the end of the world was near, and repeatedly postponed the exact date. He wanted to know why the members of the sect were undeterred by the succession of false prophecies, and didn’t simply stop believing in their doomsday doctrine. His theory was that people tend to have a distorted perception of events depending on how much they clash with their existing worldview. When an event runs counter to our worldview, it generates the aforementioned cognitive dissonance: reality proves itself incompatible with our idea of it. Since the state of cognitive dissonance is so disagreeable, people try to avoid it intuitively by adopting a behavior psychologists also call confirmation bias – this means perceiving and taking seriously only such information that matches your worldview, while disregarding or squarely denying any other information.
In this sense, the theory of cognitive dissonance tells us that the red cloud likely represents a group of people whose specific worldview is confirmed by the fake news in question. To test this hypothesis, we extracted several hundred of the most recent tweets from our Twitter user pools, both from the fake news disseminators and from those who had forwarded the correction, and compared word frequencies between the user groups. If the theory was correct, we should be able to show that the fake news group was united by a shared, consistent worldview.
We subjected a total of 380,000 tweets to statistical analysis, examining the relative frequencies of words in both groups and compiling a word ranking. The expressions at the top of each list are the ones that appear more commonly in tweets from one group and more rarely in the other's.11
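The ranking procedure can be approximated with a short script. The following is our own minimal sketch of a relative word-frequency ranking, with invented toy tweets and an illustrative stopword list; the study's actual preprocessing is not specified beyond what the text states.

```python
from collections import Counter

def relative_ranking(tweets_a, tweets_b, stopwords=frozenset()):
    """Rank words by how much more frequent they are in group A than in B."""
    def rel_freqs(tweets):
        words = [w.lower() for t in tweets for w in t.split()]
        return {w: c / len(words) for w, c in Counter(words).items()}
    freq_a, freq_b = rel_freqs(tweets_a), rel_freqs(tweets_b)
    scores = {w: freq_a[w] - freq_b.get(w, 0.0)
              for w in freq_a if w not in stopwords}
    return sorted(scores, key=scores.get, reverse=True)

# Toy corpora standing in for the two pools of tweets.
fake_news_group = ["islam migranten merkel", "merkel islam politik"]
correction_group = ["news trump spiegel", "news tagesschau trump"]

top = relative_ranking(fake_news_group, correction_group,
                       stopwords={"the", "from"})
print(top[:3])  # ['islam', 'merkel', 'migranten']
```

Applied to the real pools, the words at the top of this list are exactly the group-typical terms reported below.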

After eliminating negligible words like “from”, “the”, etc., and prominent or frequently mentioned account names, the ranking for the fake news group shows a highly informative list of common words. In descending order, the sixteen most important terms were: “Islam”, “Deutschland”, “Migranten”, “Merkel”, “Maas”, “Politik”, “Freiheitskampf” (struggle for freedom), “Flüchtlinge” (refugees), “SPD” (the German Social Democratic Party), “NetzDG”12, “NPD” (the extreme-right National Party), “Antifa”, “Zensur” (censorship), “PEGIDA”13, “Syrer”, “Asylbewerber” (asylum seeker).


(The word size corresponds to the relative frequency of the terms.)

Certain thematic limitations become obvious at first glance. The narrative being told within this group is about migration, marked by a high frequency of terms like Islam, migrants, refugees, Pegida, Syrians. The terms “Merkel” and “Deutschland” are probably also associated with the “refugee crisis” narrative or with the issue of migration in general. A manual inspection of these tweets showed that the recurring topics are the dangers posed by Islam and crimes committed by individuals with a migrant background, and refugees in particular.
A second, less powerful narrative concerns the self-conception of the group: as victims of political persecution. The keywords “Maas”, “Freiheitskampf”, “SPD”, “NetzDG”, “Antifa”, and “Zensur” clearly refer to this narrative of victimization, in particular with regard to the introduction of the so-called “network enforcement act” (NetzDG for short), promoted by the German Federal Minister of Justice, Heiko Maas (SPD). On the right, this new law regulating speech on Facebook and Twitter is evidently regarded as a political attack on freedom of speech.

The extent of thematic overlap amongst the spreaders of fake news is massive, and becomes even more apparent when we compare these terms to the most common terms the correction tweeters used:

“NEWS”, “Nachrichten” (news), “Trump”, “newsDE”, “Infos”, “BreakingNews”, “SPIEGEL”, “DieNachrichten”, “online”, “noafd”, “May” (as in Theresa May), “Donald”, “Piraten”, “tagesschau”, “pulseofeurope”, “Reichsbürger”.

First of all, it is noticeable that the most important terms used by the correctors are not politically loaded. They mostly refer to media brands and news coverage in general: NEWS, news, newsDE, Infos, BreakingNews, SPIEGEL, DieNachrichten, tagesschau, Wochenblatt. Nine out of sixteen terms are simply general references to news media.
The remaining terms, however, do show a slight tendency to address the right-wing spectrum politically. Donald Trump is such a big topic that both his first and his last name appear in the Top 16. In addition, hashtags like „noafd“ or the pro-European network „pulseofeurope“ seem to be quite popular, while the concept of „Reichsbürger“ (people who feel that Germany isn’t a legitimate nation state and see themselves as citizens of the German Reich) is also discussed.

We can draw three conclusions from this word frequency analysis:

  1. The fake news group is more limited thematically and more homogeneous politically than the corrections group.
  2. The fake news group is primarily focused on the negative aspects of migration and the refugee crisis. They also feel politically persecuted.
  3. The corrections group has no unified political agenda, but a slightly heightened interest in right-wing movements.

All of this would seem to confirm the cognitive dissonance hypothesis. Our fake news example stories are reports of problems with refugees – precisely the red group’s core topic. Avoidance of cognitive dissonance could explain why a certain group might uncritically share fake news while not sharing the corresponding correction.

Digital Tribalism

When comparing the two groups in both examples, we already found three essential distinguishing features:

  1. The group of correctors is larger, more scattered and diffuse, and includes the mass media, while the fake news group is smaller, more concentrated, and more closely interwoven.
  2. The filter bubble of the corrector group is more impermeable to fake news than vice versa.
  3. The fake news group is more limited in topics than the users tweeting corrections.

In short, we are dealing with two completely different kinds of group. Whenever differences at the group level are so salient, we are well advised to look for an explanation that goes beyond individual psychology. Cognitive dissonance avoidance may well have a part in motivating the individual fake news disseminator, but seeing as we regard it as a conspicuous group-wide feature, the reasons will more likely be found in sociocultural factors. This again is a subject for further research.
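The density difference named in point 1 can be made precise: graph density is the share of possible links that actually exist within a group. The sketch below uses an invented mini-network, not the study's data, merely to show how a tight "knot" and a sparse "cloud" differ numerically.

```python
def density(nodes, edges):
    """Share of possible undirected links that actually exist in a group."""
    members = set(nodes)
    actual = sum(1 for a, b in edges if a in members and b in members)
    possible = len(members) * (len(members) - 1) / 2
    return actual / possible if possible else 0.0

# Invented example: small, tightly interwoven knot vs. large, sparse cloud.
knot = ["k1", "k2", "k3", "k4"]
knot_edges = [("k1", "k2"), ("k1", "k3"), ("k2", "k3"),
              ("k2", "k4"), ("k3", "k4")]
cloud = ["c1", "c2", "c3", "c4", "c5", "c6"]
cloud_edges = [("c1", "c2"), ("c3", "c4")]

print(round(density(knot, knot_edges), 2))    # 0.83 - 5 of 6 possible links
print(round(density(cloud, cloud_edges), 2))  # 0.13 - 2 of 15 possible links
```

A force-directed layout like the one in our visualizations pulls the dense group together into a knot and lets the sparse group spread out as a cloud, which is why the difference is visible at a glance.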
In fact, there has been a growing tendency in research to embed the psychology of morals (and hence, politics) within a sociocultural context. This trend has been especially pronounced since the publication of Jonathan Haidt’s The Righteous Mind: Why Good People are Divided by Politics and Religion in 2012. Drawing on a wealth of research, Haidt showed that, firstly, we make moral and political decisions based on intuition rather than reasoning, and that, secondly, those intuitions are informed by social and cultural influences. Humans are naturally equipped with a moral framework, which is used to construct coherent ethics within specific cultures and subcultures. Culture and psychology “make each other up”, as the anthropologist and psychologist Richard Shweder put it.14
One of our innate moral-psychological characteristics is our predisposition for tribalism. We have a tendency to determine our positions as individuals in relation to specific reference groups. Our moral toolbox is designed to help us function within a defined group. When we feel we belong to a group, we intuitively exhibit altruistic and cooperative behaviors. With groups of strangers, by comparison, we often show the opposite behavior. We are less trusting and empathetic, and even inclined to hostility.
Haidt explains these human characteristics by way of an excursion into evolutionary biology. From a rational perspective, one might expect purely egoistic individuals to have the greatest survival advantage – in that case, altruism would seem to be an impediment.15 However, at a certain point in human evolution, Haidt argues, the selection process shifted from the individual to the group level. Ever since humanity went down the route of closer cooperation, sometime between 75,000 and 50,000 BCE, it has been the tribes, rather than the individuals, in evolutionary competition with one another.16 From this moment on, the most egoistic individual no longer automatically prevailed – henceforth it was cooperation that provided a distinct evolutionary edge.17

Or perhaps we should say, it was tribalism: The basic tribal configuration not only includes altruism and the ability to cooperate, but also the desire for clear boundaries, group egotism, and a strong sense of belonging and identity. These qualities often give rise to behaviors that, as members of an individualistic society, we believe we have given up, seeing as they often result in war, suffering, and hostility.18

However, for some time there have also been attempts to establish tribalism as a positive vision for the future. Back in the early 1990s, Michel Maffesoli coined the concept of “Urban Tribalism”.19 The urban tribes he was referring to were not based on kinship, but more or less self-chosen; mainly the kind of micro-groups we would now call subcultures. From punk to activist circles, humans feel the need to be part of something greater, to align themselves with a community and its shared identity. In the long run, Maffesoli argued, this trend works against the mass society.
There are more radical visions as well. In his grand critique bearing the programmatic title Beyond Civilization, the writer Daniel Quinn appeals to his readers to abandon the amenities and institutions of civilization entirely and found new tribal communities.20 Activists organizing in so-called neo-tribes mainly reference Quinn on their search for ways out of the mass society, and believe the Internet might be the most important technology in enabling these new tribal cultures.21

In 2008, Seth Godin pointed out how well-suited the Internet was to the establishment of new tribes. Indeed, his book, boldly titled Tribes: We Need You to Lead Us, was intended as a how-to guide.22 His main piece of advice: the would-be leader of a tribe needs to be a “heretic”, someone willing to break with the mainstream, identify the weakest link in an established set of ideas, and attack it. With a little luck, his or her possibly quite daring proposition will attract followers, who may then connect and mobilize via the Internet. These tribes, Godin argues, are a potent source of identity; they unleash energies in their members that enable them to change the world. Many of Godin’s ideas are actually fairly good descriptions of contemporary online group dynamics, including the 2016 Trump campaign.
To sum up: Digital tribes have traded their emphasis on kinship for an even stronger focus on a shared topic. This is the tribe’s touchstone and shibboleth, a marker of its disapproval of the “mainstream”, by whose standards the tribe members are considered heretics. This sense of heresy produces strong internal solidarity and homogenization, and even more importantly, a strict boundary that separates the tribe from the outside world. This in turn triggers the tribalist foundations of our moral sentiments: everything that matters now is “them against us”, and the narratives reflect this basic conflict. AfD, Pegida, the Alt-Right etc. see themselves as rebels, defying a mainstream that in their view, has lost its way, has been corrupted or at best, blinded by ideology. They feel they are victims of political persecution and oppression. Narratives like the “lying mainstream press” and the “political establishment” serve as boundary markers between two or more antagonistic tribes.23 Our visualization of disconnected clusters within the network structure, as well as the topical limitations revealed by our word frequency analysis are empirical evidence of precisely this boundary.

Counter Check: Left-Wing Fake News

Our assumption so far is that the red cloud of fake news disseminators is a digital tribe, and that the differences setting it apart from the group of correction disseminators result from its specific tribal characteristics. These characteristics are: close-knit internal networks, in combination with self-segregation from the mainstream (the larger network); an intense focus on specific topics and issues; and, by consequence, a propensity to forward fake news that match their topical preferences. Our tribe is manifestly right-leaning: the focus on the refugee issue is telling. Negative stories about refugees go largely or entirely unquestioned, and are forwarded uncritically, even when very little factual reporting is available and the story is implausible. Some users even forward stories they know to be untrue.
However, none of this rules out the existence of very different digital tribes within the German-speaking Twitterverse. In this sense, our identification of this particular tribe based on its circulation of fake news is inherently biased. The fake news events function as a kind of flashlight that lets us illuminate one part of the overall network, but only a rather limited part. Right-wing fake news reports let us detect right-wing tribes. Assuming that our theory is correct so far, we should similarly be able to discover left-wing tribes by shining the light from the left, that is to say, by examining left-wing fake news reports, if they exist.
To cut a long story short: tracking down left-wing fake news reports was not easy – certainly more difficult than finding their right-wing counterparts. This is interesting to note, especially since the last U.S. election campaign presented us with a very similar situation.24 One example that caught our attention early on was the so-called Khaled case, even though it is not an exact match for our definition of fake news.25
On January 13, 2015, the Eritrean refugee Khaled Idris was found stabbed to death in Dresden. This was during the peak of the Pegida demonstrations, which had been accompanied by acts of violence on several other occasions before. With the official investigation still ongoing and the authorities careful not to speculate, many Twitter users and primarily leftist media implied that the homicide was connected to the Pegida demonstrations. Some slammed the work of the investigators as scandalously ineffective (“blind on their right eye”), and many prematurely condemned the killing as a racist hate crime. Eventually the police determined that Idris had been killed by a fellow resident in his asylum seekers’ hostel.

We applied the same procedure as in the first example, again coding the fake news reports (or, let us say, incorrect speculations) in red and the corrections in blue. The meta-debate, which played an oversized role in this example, is green.
At first glance, the resulting visualization looks quite different from our first graph. Still, some structural features seem oddly familiar – only in this case, they seem to have been inverted. For instance, the division into two clusters that appear side by side, and the relative proportions of those clusters are instantly recognizable. This time, though, the spatial arrangement and distribution of colors is different.
Once again, we can distinguish two groups, represented by a cloud and a knot respectively. This time, however, the cloud appears on the right and the knot on the left.26 As before, one group is smaller, more closely linked, and more homogeneous (which, as per our theory, suggests tribal characteristics), while the other, represented by a larger, more dispersed cloud, includes the mass media. Again, the smaller group is far more homogeneous in its coloring. But instead of red, it is now predominantly green and blue. The red dots now belong to the larger cloud, which is not homogeneously or even predominantly red here; the cloud mostly consists of corrections (blue) and meta-debate (green), and just like the knot, merely has occasional sprinkles of red.
These findings did not match our expectations. We thought it would be possible to detect a leftist tribe, but this rather unambiguous answer to our question took us by surprise. We could not detect a leftist tribe similarly driven by fake news; a tribal structure like before could be identified, but this time it was populated by accounts that had forwarded the correction. The cloud does include fake news disseminators as well, but there is no self-segregation, so the disseminators of the fake news are still interspersed with the correctors and the meta-debaters.

In order to increase the robustness of our analysis, we also investigated a second example:

On January 27, 2016, a volunteer of the Berlin refugee initiative „Moabit Hilft“ published a Facebook post telling the story of a refugee who, after standing in line for days in front of the LAGeSo (Landesamt für Gesundheit und Soziales, the agency assigning refugees to shelters in Berlin), collapsed and later died in hospital. The news spread like wildfire, not least because conditions at the LAGeSo had been known to be chaotic since mid-2015. Many social media accounts took up the case and treated it as a scandal. The media also picked up on events, some of them rather uncritically at first. Over time, however, when no one could confirm the story and the author of the original Facebook post had not responded to inquiries, reporting became more and more sporadic. The same day, the news was officially debunked by the police. In the following days, it turned out that the volunteer had invented the entire story, apparently making it up under the influence of alcohol.

We proceeded as we did in the Khaled case, with the difference that we used yellow as a fourth color to distinguish impartial reports from more uncritical fake news disseminators.
The example confirms our findings from the Khaled case. Once again, the spreaders of fake news are dispersed far and wide – there is no closely-connected group. Again, a segregated group that is particularly eager to spread the correction is recognizable. This supports our thesis that this is exactly the same group as the one that was so susceptible to right-wing fake news in the previous examples: our right-wing digital tribe.
To substantiate this conjecture, we „manually“ sifted through our pool of tweets and randomly tested individual tweets from the tribal group.
Let us put it this way: if we had introduced a color for malice and sarcasm, this cluster would stand out even more from the rest of the network. Even where the tweets include corrections or meta-debate, the gloating over the uncovering of this fake news story is unmistakable. Here are some examples from the LAGeSo case:




(Translation:
Tweet 1: What the press believes immediately: / – Stabbing of left party politician / – Dead refugee at #lageso / What the press denies: / – Case of the thirteen-year-old Lisa / #lyingpress
Tweet 2: How desperate does one have to be to invent a dead refugee to prop up this web of lies? #lageso #moabithilft
Tweet 3: Let’s see when people start calling the dead #lageso refugee an art form to raise awareness for the suffering. Then everything will be alright again.
Tweet 4: Do-gooders cried a river of tears. For nothing. The “dead refugee” is a hoax. #lageso #berlin)

These tweets also reveal a deep mistrust in the mass media, which is expressed in the term “Lügenpresse” (“lying press”). By exposing left-wing fake news, the tribe seems to reinforce one of its most deeply-rooted beliefs: that the media are suppressing the truth about the refugees, and that stories of racist violence are invented or exaggerated.

To sum up our findings: Based on leftist fake news about refugees, we were unable to detect a leftist equivalent of the right-wing tribe. On the contrary, it turns out that disseminators of leftist fake news are highly integrated into the mainstream.27 The latter (represented by the cloud) plays an ambivalent role when it comes to leftist fake news, as it includes the fake news tweets, but also the corrections and meta-debate. Everything comes together in this great speckled cloud. By contrast, the right-wing tribe is recognizable as such even in the context of leftist fake news. Again, it appears as a segregated cluster that is noticeably homogeneous in color, though this time on the side of the correction disseminators. (It’s worth mentioning that most of the tweets forwarding the correction were mockeries.) But that does not mean that leftist tribes do not exist at all – they might be visible when investigating topics other than migration and refugees. Nor does it mean that there isn’t a multiplicity of other tribes, perhaps focused on issues that are devoid of political significance or not associated with a political faction. The latter in fact is quite probable.28

Tribal Epistemology

Some findings of our study of leftist fake news are unexpected, but that only makes them even more compelling as confirmation of our hypothesis concerning the right-wing tribe. The latter tribe is real, and it is hardly surprising that it is involved in the dissemination of all those news reports concerning refugees, be it in the form of fake news or of subsequent corrections.29
Yet the theoretical framework describing tribalism that we have outlined so far is still too broad to grasp the specificity of the phenomenon considered here. People like to band together, people like to draw lines to exclude others – this is hardly a novel insight. In principle, we might suspect that tribalism is the foundation of every informal or formally organized community. Hackers, Pirates, “Weird Twitter”, the net community, Telekommunisten, “Siff-Twitter”, 4chan, preppers, furries, cosplayers, gamers, bronies, etc: all of them digital tribes? Maybe. But when we make that claim without carefully examining the communities in question, we risk broadening the concept to such an extent that it no longer adequately captures our observations.30
What sets our phenomenon apart is a specific relation to reality, one that is visibly reflected in the network structure: worldview and group affiliation seem to coincide. This is the defining characteristic of tribal epistemology.
In recent years, a growing body of research has further inspected the connection between group identity and the psychology of perception. Dan M. Kahan and his team at the Cultural Cognition Project have made important contributions in this field.31 In a range of experimental studies, they demonstrated how strongly political affiliation influences people’s assessment of arguments and facts.
In one study, Kahan presented four versions of a problem-solving task to a representative group of test subjects.32 Two of the four versions were about the results of a fictitious study of a medical skin cream; the subjects were asked to tell whether the cream was effective or not, based on the data provided. In the first version, the data indicated that the new cream helped reduce rash in patients; in the second version, it showed that the medication did more harm than good. In the two other versions of the task, the same numbers were used, but this time to refer not to the effectiveness of a skin cream, but to the effects of a political decision to ban the carrying of concealed handguns in public. In one version, the data indicated that banning handguns from public settings had tended to decrease crime rates; in the other, it suggested the opposite.
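The trap built into this task is that the correct answer requires comparing rates within each group rather than raw counts. The numbers below are illustrative stand-ins, not necessarily Kahan's actual figures, but they reproduce the structure of the problem:

```python
# Hypothetical 2x2 outcome table for the skin-cream version of the task.
cream = {"improved": 223, "worse": 75}      # patients who used the cream
no_cream = {"improved": 107, "worse": 21}   # patients who did not

# Naive reading: 223 > 107 improved, so the cream seems to "work".
# Correct reading: compare improvement rates within each group.
rate_cream = cream["improved"] / (cream["improved"] + cream["worse"])
rate_none = no_cream["improved"] / (no_cream["improved"] + no_cream["worse"])

print(round(rate_cream, 2))  # 0.75
print(round(rate_none, 2))   # 0.84 - the untreated group actually did better
```

Reading the table correctly therefore requires exactly the kind of quantitative reasoning that, as described below, highly numerate partisans failed to apply in the gun-control framing.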

Before being given the problem, all subjects were given a range of tests to measure their quantitative thinking skills (numeracy) as well as their political attitudes. It turned out that subjects’ numeracy was a good predictor of their ability to solve the task correctly when they were asked to interpret the data gauging the efficacy of a skin cream. When the same numbers were said to relate to gun control, political attitude trumped numeracy. So far, so unsurprising.
More strikingly, with a polarizing political issue at stake, numeracy actually had a negative effect. On the gun-control question, superior quantitative thinking skills in partisans were correlated with misinterpretation of the data. In Kahan’s interpretation, when it comes to ideologically contentious issues, individuals do not use their cognitive competences to reconcile their own attitudes with the world of fact, but instead maintain their position even when confronted with evidence that contradicts it.
The conclusion Kahan and his colleagues draw from their study is disturbing: culture trumps facts. People are intuitively more interested in staying true to their identity-based roles than in forming an accurate picture of the world, and so they employ their cognitive skills to that end. Kahan’s term for the phenomenon is “identity-protective cognition”: Individuals utilize their intellectual abilities to explain away anything that threatens their identities.33 It is a behavior that becomes conspicuous only when they engage with polarizing issues or positions that solicit strong—negative or positive—identification. Kahan was able to reproduce his findings with problems that touched on issues such as climate change and nuclear waste disposal.
Based on his research, Kahan has outlined his own theory of fake news. He is skeptical of the prevailing narrative that motivated parties such as campaign teams, the Russian government, or enthusiastic supporters of candidates plant fake news in order to manipulate the public. Rather, he argues that there is a “motivated public” with a high demand for news reports that corroborate its viewpoints – a demand greater than the supply.34 Fake news, as it were, is simply filling a market gap. Information is a resource in the production not so much of knowledge as of identity – and its utility for this purpose is independent of whether it is correct or incorrect.
The right-wing tribe in our study is one such “motivated public.” Its members’ assessment of a factual claim hinges on its usefulness as a signal of allegiance to the tribe and rejection of the mainstream, much more so than on truthfulness or even plausibility. Our findings therefore do not support the notion that new information might prompt the tribe’s members to change their minds or mitigate their radicalism. In this climate, corrections to the fake news story may have had an opportunity to be noticed, but they were not accepted and certainly not shared any further.

The Rise of Tribal Institutions in the USA

In his text on tribal epistemology, David Roberts makes another interesting observation, which can also be applied to Germany. While the mass media, science and others regard themselves as non-partisan, this claim is denied by the tribal right. On the contrary, the right assumes that these institutions are following a secret or not-so-secret liberal/left-wing agenda and colluding with the political establishment. In consequence, not only the opposing party, but the entire political, scientific and media system is rejected as corrupt.35
As I’ve said before, these conspiracy theories are false at face value, but relate to a certain truth.36 There is in fact a certain liberal basic consensus in politics, science and media that many citizens are sceptical of.37

However, instead of reforming the institutions or contesting their bias from within, the right has used new communication structures such as the Internet (though not exclusively) to establish an alternative media landscape and new institutions at low cost. This is exactly what has been happening in the USA since the 1990s, where a parallel right-wing media landscape with considerable reach and high mobilization potential has formed. The crucial point is that since this alternative media landscape does not recognize the traditional mainstream media’s claim of non-partisanship, it does not even attempt to live up to that ideal in the first place. Similar developments can be seen in Germany, but they are far less advanced.

In the USA, the secession of alternative tribal institutions began with local talk radio stations radicalizing further and further towards the right. Rush Limbaugh, Sean Hannity and Glenn Beck were the early provocateurs of this flourishing scene. From the mid-1990s, the new TV station Fox News hired some of these „angry white men“ and set out to carry right-wing radical rhetoric into the mainstream, becoming the media anchor point of the Tea Party movement in the process.38 But during the primary campaign of 2015/16, Fox News itself was overtaken by a new player on the right: the Breitbart News website, which had zeroed in on supporting Donald Trump early on.
Not surprisingly, this division in the media landscape can be traced in social media data. In a highly acclaimed Harvard University study, a team of researchers led by Yochai Benkler evaluated 1.25 million messages that had been shared by a total of 25,000 sources on Twitter and Facebook during the U.S. election campaign (from April 1, 2015 to election day).39

Explanation: The size of a node corresponds to how often the contents of a Twitter account were shared. The position of the nodes in relation to one another shows how often they were shared by the same people at the same time. The colors indicate whether they are mostly shared by people who routinely retweet Clinton (blue), Trump (red), or both/neither (green).

Even though the metrics work quite differently in both cases, their analysis is strikingly similar to ours. Networking density and polarization are as clearly reflected in the Harvard study as in ours, albeit somewhat differently. Almost all traditional media are on the left/blue side, whereas there is hardly any medium older than 20 years in red. The most important players are not even 10 years old.
The authors of the study conclude, much as we do, that this asymmetry cannot have a technical cause. If it were technical, new polarizing media would have evolved on both sides of the spectrum, and left-wing media would play just as important a role on the left as Breitbart News does on the right.
However, there are also clear differences from our results: the right side, despite its relative novelty and radicalism, is more or less on a par with the traditional media on the left (the left side of the graph, that is, not necessarily politically left). So it is less a polarization between the extreme left and the extreme right than one between the moderate left and the extreme right.
Since the left-hand side of the graph basically reflects what the public sphere has looked like up to now – the traditional mass media at the center, surrounded by a swarm of smaller media such as blogs and successful Twitter accounts – it may be wrong to speak of division and polarization at all. More accurately, this is a „split-off“, because something new is being created here in contrast to the old. The right-wing media and their public spheres have emerged beyond the established public sphere in order to oppose it. They have split off not because of the Internet or because of filter bubbles, but because they wanted to.
The authors of the study conclude, as we do, that this separation must have cultural and/or ideological reasons. They analyse the topics of the right and reveal a similar focus on migration, but also on the supposed corruption of Hillary Clinton and her support in the media. Here, too, fake news and politically misleading news are a huge part of the day-to-day news cycle, and are among the most widespread messages of these outlets.
We should be cautious about comparing these two analyses too closely, because the political and media landscapes in Germany and the US are very different. Nevertheless, we strongly suspect that we are dealing with structurally similar phenomena.
An overview of the similarities between the US right-wing split-off media group and our fake news tribe:

  1. Separation from the established public.
  2. Traditional mass media remain on the „other“ side.
  3. Focus on migration and at the same time, their own victimization.
  4. Increased affinity to fake news.
  5. Relative novelty of the news channels.

We can assume that our fake news Twitter accounts are part of a similar split-off group as the one observed in the Harvard study. One could speculate that the United States has merely gone through the same processes of secession earlier than we have, and that both studies show virtually the same phenomenon at different stages. The theory would be: A parallel society can emerge out of one tribe if it creates a media ecosystem and institutions of its own, and last but not least, a truth of its own.
What we have seen in the United States could be called the coup of a super-tribe that has grown into a parallel society. Our small fake news tribe is far from there. However, it cannot be ruled out that this tribe will continue to grow and prosper, and will eventually be just as disruptive as its American counterparts.
In order for that to happen, however, it would have to develop structures that go well beyond the individual network structure of associated Twitter accounts. This can already be observed in some cases. Blogs like “Politisch Inkorrekt”, “Achgut” or “Tichys Einblick” (Tichy’s Inside View), and of course, the numerous successful right-wing Facebook pages and groups surrounding the AfD party and Pegida in particular, can be seen as „tribal institutions“. These are still far from having the same range and are often not as radical as their American counterparts, but this may merely be a matter of time.

Conclusion

The dream of the freedom of networks was a naive one. Even if all our social constraints are removed, our presupposed patterns of socialisation become all the more evident. Humans have a hard-wired tendency to gather into groups they identify themselves with, and to separate themselves from others as a group. Yes, we enjoy more and greater freedoms in our socialisation than ever before, but this does not lead to more individualism – quite the contrary, in many cases it paradoxically leads to an even stronger affinity for groups. The tribal instinct can develop without constraint and becomes all the more dominant. And the longer I think about it, the more I wonder whether all the strangely limiting categories and artificial group structures of modern society are a peacemaking mechanism for taming the tribal side in us that is all too human. Or whether they were. The Internet is far from finished with deconstructing modern society.40

We learned a lot about the new right. It is not just one side of the political spectrum, but a rift, the dividing off of a new political space beyond the traditional continuum. The Internet is not to blame for this split-off, but it has made these kinds of developments possible and therefore more likely. Free from the constraints and the normalization dynamics of the traditional continuum, „politically incorrect“ parallel realities are formed that no longer feel the need to align themselves with social conventions or even factuality.

In The Righteous Mind, Jonathan Haidt writes that people need no more than one single argument to justify a belief or disbelief. When I am compelled (but do not want) to believe something, a single counterargument is enough to make me disregard a thousand arguments. 99 out of 100 climate scientists say that climate change is real? The one who says otherwise is enough for me to reject their view. Or I may want to hold on to a belief that runs counter to all available evidence: Even if all my objections to the official version of the events of 9/11 are refuted, I will always find an argument that lets me cling to my conviction that the attacks were an inside job.

This single argument is the reason why the right-wing tribe is immune to fake news corrections even if exposed to them. Its members always have one more argument to explain why they stand by their narrative, and their tribe. That is exactly what the phrase “lying mainstream press” was invented for. It does not actually imply a sweeping rejection of everything the mass media report, but justifies crediting only those reports that fit one’s own worldview and discounting the rest as inconsequential.41

If we follow the tribalist view of the media landscape (your media vs. our media), the traditional mass media, with their commitment to accuracy, balance, and neutrality, will always benefit the right wing about half the time, while right-wing media pay exclusively into their own accounts. Having faith in non-partisanship and acting accordingly turns out to be the traditional mass media’s decisive strategic disadvantage.

The right will undoubtedly respond that the tribalist tendencies and structures uncovered in this text equally apply to the left. But there is no evidence for this assertion. In any case, equating the groups observed is not an option, seeing as the data is unambiguous. This does not mean, however, that there is no tribalist affect on the left, or that none is currently forming in response to the right. On the left, there are belief systems with a similarly strong identity-forming effect, and if Dan Kahan’s theory of „identity-protective cognition“ is correct, similar effects should be observable on the left with the appropriate topics.

This essay is not a complete research paper, and can only be a starting point for additional research. Many questions remain unanswered, and some of the assumptions we have made have not been sufficiently proven yet. It’s clear, however, that we have discovered something that, on the one hand, has a high degree of significance for the current political situation in Germany. On the other hand, and this is my assessment, it also says something fundamental about politics and society in the age of digital networking.

All politics becomes identity politics, as far as polarizing issues are concerned. Identity politics can be described as highly motivating, uncompromising and rarely or never fact-based.

A few research questions remain unanswered:

  1. Can we really prove that this permanently resurfacing tribe on the right is structurally identical (and at least largely account-identical) with all the phenomena we observe? We’ve got some good leads, but hard evidence is still missing. We would have to measure the entire group and then show that the fake news accounts are a subset of that group.
  2. In our research, we compared two special groups: right-wing fake news distributors and those who spread the corrections to fake news. The differences are significant, but the comparison is problematic: the group of correctors was also selected, and is therefore not representative. Also, the groups differ in size. Therefore it would be more meaningful to compare the fake news group with a randomized control group of the same size.
  3. Facebook. Facebook is a far more influential player in the German political debate than Twitter. Can we find the same or similar patterns on Facebook? I would assume so, but it hasn’t been looked into yet.
  4. To justify the term „digital tribe“, further tribes should be identified and researched. This requires a digital ethnology. Are there other tribes in the German-speaking world with other agendas, ideologies or narratives displaying a similar level of separation and perhaps radicalization? Possible candidates are the digital Men’s Rights movement, “Sifftwitter” and various groups of conspiracy theorists. Also: How do digital tribes differ from others? Which types of tribes can reasonably be distinguished, etc.? Or is tribalism more a spectrum on which to locate different groups; are there different „degrees“ of tribalism, and which metrics would apply in that case?
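To make the first research question concrete: a very rough first measurement could simply quantify how much of the fake-news account set is contained in the larger, repeatedly resurfacing group. The following Python sketch (with invented placeholder account IDs, not our actual data) computes containment and Jaccard overlap between two sets of accounts; it illustrates the kind of measurement meant, not our analysis itself.

```python
def overlap_stats(fake_news_accounts, tribe_accounts):
    """Return containment and Jaccard similarity of two account sets.

    containment: share of fake-news accounts that also belong to the tribe
                 (1.0 would mean a perfect subset).
    jaccard:     symmetric overlap of both groups.
    """
    fake = set(fake_news_accounts)
    tribe = set(tribe_accounts)
    shared = fake & tribe
    containment = len(shared) / len(fake)
    jaccard = len(shared) / len(fake | tribe)
    return containment, jaccard

# Toy data: three of four fake-news accounts also belong to the tribe.
fake_news = ["acc_1", "acc_2", "acc_3", "acc_4"]
tribe = ["acc_1", "acc_2", "acc_3", "acc_9", "acc_10"]

containment, jaccard = overlap_stats(fake_news, tribe)
print(containment)  # 0.75
print(jaccard)      # 0.5
```

A high containment value alone would not prove identity of the groups, but a value near 1.0 across repeated samples would be the kind of hard evidence the question asks for.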

I know there are already a few advances in digital ethnology, but I feel that an interdisciplinary approach is needed when it comes to digital tribes. Data analysts, network researchers, psychologists, sociologists, ethnologists, programmers, and maybe even linguists, will have to work together to get to the bottom of this phenomenon.

Another, less scientific question is where all this will end. Is digital tribalism a sustainable trend and will it, as it has happened in the United States, continue to put pressure on the institutions of modern society, and maybe even destroy them in the long term?

So what can we do about it?

Can you immunize yourself against tribalist reflexes? Is it possible to warn against them, even to renounce them socially? Or do we have to live with these reflexes and find new ways of channeling them peacefully?

Haidt’s answer is twofold. He compares our moral decision-making to an elephant and its rider: The elephant intuitively does whatever it feels like at any moment, while the rider merely supplies retrospective rationalizations and justifications for where the elephant has taken him. A successful strategy of persuasion, then, does not consist in confronting the rider with prudent arguments, but in making sure that one retains the elephant’s goodwill (for instance by establishing relationships of personal trust that will then allow a non-confrontational mode of argumentation, which can help individuals break free of the identity trap).
The other half of Haidt’s answer is actually about the power of institutions. The best institutions, he writes, stage a competition between different people and their worldviews, so as to counterbalance the limitations of any one individual’s judgment. Science, for instance, is organized in such a way as to encourage scientists to question one another’s insights. The fact that a member of the scientific community will always encounter someone who will review his or her perspective on the world acts as containment against the risk that he or she will become overly attached to any personal misperceptions.
Our discovery of digital tribalism, however, points to the disintegration of precisely these checks and balances, and the increased strength of unchecked tribal thinking. I am inclined to think that the erosion of this strategy is precisely the problem we are struggling with today.

Paul Graham also made a suggestion on how to deal with this phenomenon on a personal level, but without reference to research. In a short essay entitled „Keep Your Identity Small“ from 2009, he advises exactly that: not to identify too much with topics, debates, or factions within these debates.42 The fewer labels you attach to yourself, the less you allow identity politics to tarnish your judgement.

However, this advice is difficult to generalise because there are also legitimate kinds of identity politics. Identity politics is always justified and even necessary when it has to be enforced against the groupthink of the majority. Often, the majority does not consider the demands of minorities on its own. Examples are homosexual and transsexual rights, or the problems caused by institutional racism. In these cases, those affected simply do not have the choice of not identifying with the cause. They literally are the cause.

A societal solution to the problem would presumably need to rebuild faith in supra-tribal and nonpartisan institutions, and it is doubtful whether the traditional institutions are still capable of inspiring such faith. People’s trust in them was always fragile at heart and would probably have fallen apart long ago given the right circumstances. In Germany, this opportunity arose with the arrival of the Internet, which dramatically lowered the costs of establishing alternative media structures. New institutions aiming to reunify society within a shared discursive framework will need to take into account the unprecedented agility of communication made possible by digital technology, and even devise ways of harnessing it.

Of course, another response to the tribalism on the right would be to become tribalist in turn. A divisive “us against them” attitude has always been part of the important anti-fascist struggle, and an ultimately necessary component of the Antifa political folklore. However, I suspect that an exclusive focus on the right-wing digital tribe might inadvertently encourage the other side—which is to say, large segments of mainstream society—to tribalize itself, drawing cohesion and identification from the repudiation of the “enemy.” In doing so, society would be doing the right-wing tribe a huge favor, turning its conspiracy theory (“They’re all in cahoots with each other”) into a self-fulfilling prophecy.

The problem of digital tribalism is unlikely to go away anytime soon. It will continue to have a transformative effect on our debates and, by consequence, our political scenes.

Footnotes:

 

  1. Or at least, as Felix Stalder – somewhat less naively – has suggested with his notion of „networked individualism“, according to which „… people in western societies (…) define their identity less and less via the family, the workplace or other stable communities, but increasingly via their personal social networks, i. e. through the collective formations in which they are active as individuals and in which they are perceived as singular persons.“ Stalder, Felix: Kultur der Digitalität, p. 144.
  2. Roberts, David: Donald Trump and the rise of tribal epistemology, https://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology (2017).
  3. We have chosen Twitter for our analysis because it is easy to process automatically due to its comparatively open API. We are aware that Facebook is more relevant, especially in Germany and especially in right-wing circles. However, we assume that the same phenomena are present there, so that these findings from Twitter could also be applied to Facebook.
  4. Website of the German Foreign Office, travel and safety advisory for Sweden: Auswärtiges Amt: Schweden: Reise- und Sicherheitshinweise, http://www.auswaertiges-amt.de/sid_39E6971E3FA86BB25CA25DE698189AFB/DE/Laenderinformationen/00-SiHi/Nodes/SchwedenSicherheit_node.html (2017).
  5. For a closer examination of the term “fake news”, consider (in German): Seemann, Michael: Das Regime der demokratischen Wahrheit II – Die Deregulierung des Wahrheitsmarktes. http://www.ctrl-verlust.net/das-regime-der-demokratischen-wahrheit-teil-ii-die-deregulierung-des-wahrheitsmarktes/ (2017).
  6. The German project ‘Hoaxmap’ has been tracking a lot of rumors concerning refugees for a while: http://hoaxmap.org/.
  7. See Pariser, Eli: The Filter Bubble – How the New Personalized Web Is Changing What We Read and How We Think (2011).
  8. However, the data basis of this example is problematic in several respects.
    – The original article in BILD newspaper has been deleted, as have the @BILD_de tweet and all of its retweets. It’s no longer possible to reconstruct the articles and references that have been deleted in the meantime.
    – Some of the articles have been changed over time. For example, Spiegel Online’s article was much more sensational at first, and we would probably have categorized it as clearly fake news back then. At some point, when it turned out that there were more and more inconsistencies, it must have been amended and toned down. We don’t know which other articles this has happened to.
  9. Eli Pariser, who coined the term “filter bubble”, recently said in an interview regarding the situation in the USA: „The filter bubble explains a lot about how liberals didn’t see Trump coming, but not very much about how he won the election.“ https://backchannel.com/eli-pariser-predicted-the-future-now-he-cant-escape-it-f230e8299906 (2017). The problem, according to Pariser, is that the left loses sight of the right, not the other way round.
  10. See Festinger, Leon: A Theory of Cognitive Dissonance, (1957).
  11. A special vocabulary (or slang) probably also distinguishes a digital tribe. Relative word frequencies have also been used to identify homogeneous groups in social networks. Cf. study: Bryden, John / Sebastian Funk / AA Jansen, Vincent: Word usage mirrors community structure in the online social network Twitter, https://epjdatascience.springeropen.com/articles/10.1140/epjds15 (2012).
  12. Short for Netzwerkdurchsetzungsgesetz, a controversial law designed to prevent online hate speech and fake news that went into effect on October 1, 2017.
  13. Patriotic Europeans Against the Islamisation of the West – an extreme-right wing movement founded in Dresden, Germany in 2014.
  14. Richard Shweder, quoted in Jonathan Haidt, The Righteous Mind: Why Good People Are Divided by Politics and Religion (New York: Pantheon Books, 2012), 115.
  15. The thesis of the Homo Oeconomicus in economics, for instance, has always been founded on that assumption.
  16. In this context, John Miller and Simon Dedeo’s research is also compelling. They developed an evolutionary game theory computer simulation in which agents have memory so that they can integrate other agents’ behavior in the past into their own decision-making process. Knowledge is inherited over generations. It turned out that the agents began to see recurring patterns in the behavior of other agents, and shifted their confidence to the homogeneity of these patterns. These evolved „tribes“ with homogeneous behavioural patterns quickly cooperated to become the most successful populations. However, if small behavioural changes were established over many generations through mutations, a kind of genocide occurred. Entire populations were wiped out, and a period of instability and mistrust followed. Tribalism seems to be an evolutionary strategy. Cf. Dedeo, Simon: Is Tribalism a Natural Malfunction? http://nautil.us/issue/52/the-hive/is-tribalism-a-natural-malfunction (2017).
  17. The thesis of group selection (or „multi-level selection“) was already brought into play by Darwin himself, but it is still controversial among biologists. Jonathan Haidt makes good arguments as to why it makes a lot of sense, especially from a psychological point of view. Cf. Haidt, Jonathan: The Righteous Mind – Why Good People are Divided by Politics and Religion, 210 ff.
  18. Cf. Greene, Joshua: Moral Tribes – Emotion, Reason, and the Gap Between Us and Them (2013), p. 63.
  19. Maffesoli, Michel: The Time of the Tribes – The Decline of Individualism in Mass Society (1993).
  20. Quinn, Daniel: Beyond Civilization – Humanity’s Next Great Adventure (1999).
  21. See NEOTRIBES http://www.neotribes.co/en.
  22. Godin, Seth: Tribes – We Need You to Lead Us (2008).
  23. For example,“Lügenpresse“ appears 226 times in our analysis of the fake news spreaders’ tweets, vs. 149 for the correctors. „Altparteien“ even appears 235 times vs. 24.
  24. “We’ve tried to do similar things to liberals. It just has never worked, it never takes off. You’ll get debunked within the first two comments and then the whole thing just kind of fizzles out.” That’s what Jestin Coler says to NPR. He is a professional fake-news entrepreneur who made a lot of money during the US election campaign by spreading untruths. His target audience are Trump supporters; liberals are not as easy to ensnare, he claims. Cf. Sydell, Laura: We Tracked Down A Fake-News Creator In The Suburbs. Here’s What We Learned. http://www.npr.org/sections/alltechconsidered/2016/11/23/503146770/npr-finds-the-head-of-a-covert-fake-news-operation-in-the-suburbs (2016).
  25. The difference from fake news lies mainly in the fact that these were not allegations made against better knowledge, but rather assumptions that were mostly recognizable as such.
  26. The orientation of the graph is based on where the distribution of fake news is centered.
  27. Even though it remains unclear whether it may be possible to provide a digital tribe with leftist fake news on other topics. For example, there is widespread use of false quotations or photo montages of Donald Trump. However, such examples have attracted particular attention in the English-speaking world, and it would be hard to achieve a clean research design for the German-language Twittersphere. An interesting example to investigate from the German election campaign would be the fake news (which was probably meant as a satirical hoax) that Alexander Gauland, head of the right-wing AfD, expressed his admiration for Hitler. Cf. Schmehl, Karsten: DIE PARTEI legt AfD-Spitzenkandidat Gauland dieses Hitler-Zitat in den Mund, aber es ist frei erfunden, https://www.buzzfeed.com/karstenschmehl/satire-und-luegen-sind-nicht-das-gleiche (2017).
  28. There are quite a lot of texts on the tribalist political situation in the USA, in which authors also assume tribalist tendencies in certain left-wing circles. See Sullivan, Andrew: America Wasn’t Built for Humans. http://nymag.com/daily/intelligencer/2017/09/can-democracy-survive-tribalism.html (2017) and Alexander, Scott: I Can Tolerate Anything Except The Outgroup http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything-except-the-outgroup/ (2014).
  29. This is further evidenced by the fact that similar phenomena were already detectable in other events. See, for example, the tweet evaluations of the Munich rampage last year. See Gerret von Nordheim: Poppers nightmare, http://de.ejo-online.eu/digitales/poppers-alptraum (2016).
  30. A candidate for a tribe that has already been investigated is „Sifftwitter“. Luca Hammer has looked at the network structures of this group, and his findings seem to be at least compatible with our investigations. See Walter, René: Sifftwitter – Dancing around the data fire with the trolls. http://www.nerdcore.de/2017/05/09/sifftwitter-mit-den-trollen-ums-datenfeuer-tanzen/ (2017).
  31. Cultural Cognition Project (Yale Law School), http://www.culturalcognition.net/.
  32. Dan M. Kahan, “Motivated Numeracy and Enlightened Self-Government,” Yale Law School, Public Law Working Paper, no. 307, September 8, 2013, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992, accessed November 7, 2017.
  33. Dan M. Kahan, “Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition,” Cultural Cognition Project Working Paper Series, no. 164, May 24, 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2973067, accessed November 7, 2017.
  34. This in turn supports the studies that classify fake news as a commercial phenomenon. See e. g. Allcott/Gentzkow: Social Media and Fake News in the 2016 Election, https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211 (2017).
  35. Roberts, David: Donald Trump and the Rise of Tribal Epistemology, https://www.vox.com/policy-and-politics/2017/3/22/14762030/donald-trump-tribal-epistemology (2017).
  36. Seemann, Michael: The Global Class – Another World is Possible, but this time, it’s a threat, http://mspr0.de/?p=4712. (2016).
  37. Recently, Richard Gutjahr called for reflection on this bias. That media representatives actually have a filter bubble with regard to right-wing thought is borne out by our data. http://www.gutjahr.biz/2017/05/filterblase/ (2017).
  38. In fact, Seth Godin refers to the example of Fox News in his 2008 book „Tribes“: Fox News has managed to create a tribe for itself. Cf. Godin, Seth: Tribes – We Need You to Lead Us, 2008, p. 48.
  39. Berkman Center: Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election, https://cyber.harvard.edu/publications/2017/08/mediacloud (2017).
  40. In this context, we should also look at the appalling events accompanying the introduction of the Internet to the general public in developing countries. On the connection between anti-Muslim resentment in Burma, fake news, and the incredibly rapid popularization of the Internet, see for example Frenkel, Sheera: This Is What Happens When Millions Of People Suddenly Get The Internet, https://www.buzzfeed.com/sheerafrenkel/fake-news-spreads-trump-around-the-world?utm_term=.loeoq9kBX#.qiVgaEX7r (2017); on how rumors on the Internet could have led to a genocide in South Sudan, see Patinkin, Jason: How To Use Facebook And Fake News To Get People To Murder Each Other, https://www.buzzfeed.com/jasonpatinkin/how-to-get-people-to-murder-each-other-through-fake-news-and?utm_term=.ktQP8vaJN#.jh8Dd6Xyj (2017).
  41. Cf. Lobo, Sascha: Die Filterblase sind wir selbst, http://www.spiegel.de/netzwelt/web/facebook-und-die-filterblase-kolumne-von-sascha-lobo-a-1145866.html (2017).
  42. Graham, Paul: Keep Your Identity Small, http://www.paulgraham.com/identity.html (2009).
