The Intellectual Dark Web: A Tangled Mess of Content-Moderation Battles

The rise of the Intellectual Dark Web's biggest battle: content moderation.

"Internet" by .hd. is licensed under CC BY-NC-SA 2.0.
Freedom of speech is key to the IDW. But is it a disguise for hate speech? “Free speech = reason = progress” by sjgibbs80 is licensed under CC BY 2.0.

There is no place on the internet quite as contentious as the Intellectual Dark Web (IDW). First defined by Bari Weiss in her seminal 2018 New York Times article, the IDW is the convergence of various media personalities and intellectuals engaging in conversations about topics often disregarded by the mainstream media. When examining the dissemination of hate speech and harmful discourse on the internet, specifically via the Intellectual Dark Web, content moderation should not rely on a single actor within the system. Rather, this responsibility should be shared between users, platforms, and regulatory institutions.


Key to understanding the complexities of the Intellectual Dark Web is understanding the context in which it has risen to prominence. Weiss (2018) recognises that a key goal of the IDW is to protect “free speech”, yet fails to outline what constitutes free speech within the community. Ben Shapiro once defined the IDW as “a network of friends who are willing to talk about things we disagree about” (Shapiro, 2020). This explanation gives insight into the limits and impacts of speech within this context. Borrowing from the concepts set out by Riemer and Peter (2021), “free speech” on the IDW is established by finding a space where content is available to the masses, regardless of its potential for harm. They also suggest that such content should not be subject to corporate or governmental censorship.

The Intellectual Dark Web capitalises on the ideas that founded the early internet in order to situate its movement as foundational to the protection of free speech. In doing so, the IDW has been responsible for shifting internet users’ understanding of where the line between free speech and harmful speech lies. The existence of the IDW is not unfounded when considering the historical landscape of the internet. Early visions of the internet considered freedom, transparency, and the open flow of communication central to its operation (O’Hara & Hall, 2018). Essential to these foundations was the absence of structural governance, as spontaneous order was expected to emerge naturally from liberal ideals (Popiel, 2018).

What distinguishes the IDW from other internet communities is that it has no singular tangible place of organisation. Rather, the IDW is conceptualised as a ‘movement’ whose existence is predicated on the principles that dictated the internet’s development (Weiss, 2018; Parks, 2020). The introduction of Web 2.0 enhanced collaborative content generation and freedom of expression through user-friendly platforms (Riemer & Peter, 2021). This strengthened the ability of online communities to establish meaningful connections with the content they consume, thus reshaping the media people engage with.

Ben Shapiro is one of the controversial figureheads of the IDW and is outspoken about free speech. “Ben Shapiro” by Gage Skidmore is licensed under CC BY-SA 2.0.


Examining the ways in which Jordan Peterson and Ben Shapiro articulate their desires for freedom of speech highlights how the key interests of the IDW are in tension with the current values of a ‘safe internet’ in the wake of platformisation. Despite the potential for diversity within the IDW, its key figures are relatively similar, namely white and male (Parks, 2020). The diversity of the IDW lies most predominantly in its members’ political alignment (Weiss, 2018). Figures such as Ben Shapiro and Joe Rogan stand as conservative mouthpieces within the IDW, yet liberal centrists such as Jordan Peterson still maintain a degree of dominance within the field (Weiss, 2018; Farrell, 2018).

Issues of diversity and coordination in online communities are important to recognise, as they have the potential to influence the content an individual creates, and thus shape the views of its consumers. The rise of ‘social bubbles’ due to algorithmic user-sorting on digital platforms (Riemer & Peter, 2021) can place consumers in a microcosm of ideology and information access. The impacts of such digital landscapes have become evident in how the broader internet conceptualises the IDW. Specifically, it has become recognised as a “gateway” for harmful, far-right extremist ideology (Lamoureux, 2019). Shapiro’s content highlights this phenomenon: he rose to popularity through videos depicting him “destroying” opponents in debate, speaking rapidly and often over his adversaries (Shapiro, 2021). By framing their content as intellectual and nuanced (K. N. C. & A. M., 2019), individuals such as Shapiro have enhanced the ability of algorithmic user-sorting to generate a social bubble that associates right-wing ideology with intelligence.



Tensions between varying institutional interests have made moderation of the Intellectual Dark Web an ambiguous space to navigate. As the IDW is renowned for its reluctance to moderate the content of its most notable figures, external regulation of content comes from two key institutions: the social media platforms that host them, and government regulators. Economic interests have been regarded as central to the perseverance of the Intellectual Dark Web, and their protection is equated with the fight to protect modern free speech. Within this framework, the rise of the Intellectual Dark Web, specifically its reliance on platforms such as YouTube, is a response to the platformisation of the internet (Oliva, 2020).

Corporate regulation of content on digital platforms lies precariously between protecting the interests of users and preserving economic profitability (Martin, 2019). Maintaining the original values of an open, free internet (O’Hara & Hall, 2018), Section 230 of the United States’ Communications Decency Act has outlined the responsibilities of platforms in moderating content since 1996. Developed prior to the advent of the current digital platform landscape, Section 230 carries different implications when applied to contemporary platforms. Specifically, with the rise of algorithmic user and content sorting (Riemer & Peter, 2021), the principle that a platform is not responsible for the content it hosts can mean that harmful or illegal content is, at times, not moderated effectively (Oliva, 2020). Due to the shortfalls of Section 230 in moderating contemporary digital landscapes, social media companies have begun to recognise a degree of corporate responsibility in regulating the distribution of harmful content on their servers (Oliva, 2020). The adoption of terms of service agreements by platforms has outlined what content is appropriate to distribute.

The question of who is responsible for content moderation online has become difficult because of free speech concerns. “Free Speech for the Dumb” by Walt Jabsco is licensed under CC BY-NC-ND 2.0.

The ambiguities of responsibility regarding content moderation create a tension often explored by leaders of the IDW. Section 230 was ultimately designed by American lawmakers to minimise the discouragement of free speech on the internet (Gillespie, 2018). It has therefore become a reasonable deduction within IDW communities that the moderation of ‘harmful’ content poses a threat to the ability to freely express a political opinion, eroding the rights of these users (Parks, 2020). This conflict of ideas has played out in varying degrees as the IDW has become more widespread. The boycott of platforms such as Patreon by IDW users, in response to increasingly restrictive terms of service, has shown just how influential these tensions are to the audiences consuming such content (Sommer, 2018). Whilst Section 230 imposes no responsibility on platforms to moderate problematic content on their servers, self-regulation by platforms remains vague and is often undertaken to protect economic interests (Popiel, 2018). Moderating problematic content has therefore become a task of finding a way for measures taken by both corporate and governmental institutions to work effectively in maintaining the original ideals of the internet, while carrying little economic impact.

Examining the Intellectual Dark Web through the lens of the political economy of communications reveals the extent to which problematic content on the internet has become a more prevalent issue than ever. As competing conceptions of how the internet should operate collide with competing institutional interests, a collaborative effort from various institutions may allow for meaningful discourse on how harmful content can best be moderated in future digital economies.




References

Ernest, S. K. [@Scottkernest]. (2020, February 10). Jordan Peterson.. and most of the “intellectual dark web” tend to be gateways [Tweet]. Twitter.

Farrell, H. (2018, May 10). The “Intellectual Dark Web,” explained: What Jordan Peterson has in common with the alt-right. Vox.

Hasan, M. [@Mehdirhasan]. (2020, May 12). The president calls for the firing of a prominent journalist he doesn’t like [Tweet]. Twitter.

K. N. C., & A. M. (2019, March 28). Inside the mind of Ben Shapiro, a radical conservative. The Economist.

Lamoureux, M. (2019, August 29). YouTube commenters shift from ‘Intellectual Dark Web’ fans to the far-right, study shows. Vice News.

Martin, F. (2019). The business of news sharing. In F. Martin & T. Dwyer (Eds.), Sharing news online: Commentary cultures and social media news ecologies (pp. 91-127). Springer International Publishing.

O’Hara, K., & Hall, W. (2018, December 7). Four internets: The geopolitics of digital governance (CIGI Paper No. 206). Centre for International Governance Innovation.

Oliva, T. D. (2020). Content moderation technologies: Applying human rights standards to protect freedom of expression. Human Rights Law Review, 20(4), 607-640.

Parks, G. (2020). Considering the purpose of “an alternative sense-making collective”: A rhetorical analysis of the Intellectual Dark Web. Southern Communication Journal, 85(3), 178-190. https://doi.org/10.1080/1041794X.2020.1765006

Popiel, P. (2018). The tech lobby: Tracing the contours of new media elite lobbying power. Communication, Culture and Critique, 11(4), 566-585.

Riemer, K., & Peter, S. (2021). Algorithmic audiencing: Why we need to rethink free speech on social media. Journal of Information Technology, 36(4), 409-426.

Rubin, D. [The Rubin Report]. (2018). What is the Intellectual Dark Web? | DIRECT MESSAGE | Rubin Report [Video]. YouTube.

Shapiro, B. [Ben Shapiro]. (2021, March 27). Ben Shapiro DESTROYS Megan Rapinoe and the gender pay gap [Video]. YouTube.

Sommer, W. (2018, December 18). Stars of ‘Intellectual Dark Web’ scramble to save their cash cows. The Daily Beast.

Gillespie, T. (2018). Regulation of and by platforms. In J. E. Burgess (Ed.), The SAGE handbook of social media. SAGE Publications.

Twitter. (n.d.). The Twitter rules.

Weiss, B. (2018, May 8). Meet the Renegades of the Intellectual Dark Web. The New York Times.