What to get right first.

Five things I wish Web2 had addressed early to protect human rights.

by Rebecca MacKinnon

[Image: man rowing a boat in water spilling from a smartphone]

Note to readers: The Starling Lab welcomed Rebecca MacKinnon as a Summer 2021 fellow to launch our own research on corporate governance and accountability in Web3. Her broad experience, as a bureau chief for CNN in China and Japan as well as a digital policy scholar for leading think tanks and university research centers, placed her at the epicenter of the internet’s most complex policy challenges. But it was her years holding internet giants to account for their lofty and largely failed commitments on human rights that were the most instructive. Both pragmatic and inspiring, she helped us set the frame and tone for a broader investigation into how ethics on a decentralized internet can realistically transform data integrity across Starling Lab’s domains of history, journalism, and law.


Rebecca’s essay takes a page from history to outline missed opportunities from Web2 that we should not miss as we embark on the next chapter of the internet. Her chosen lens, human rights, is fascinating because it showcases both the most ambitious goals the internet might accomplish and where it has failed with breathless technical solutions. As Web3 aspires to move beyond what Rebecca and others have called a neo-feudal web run by cyberspace sovereigns, it’s worth noting that the “disruptions” that advanced IRL society beyond feudalism—such as the Enlightenment, industrialization, political sovereignty, democracy, or capitalism—did not solve all the world’s problems. Nor did they inexorably or directly give us the concept of human rights. Norms and human rights responsibilities only emerged in the late 1940s, after two world wars compelled international coordination and society to affirm our common humanity. Setting human rights standards for private business came even later. The problems of our time are different, but Rebecca skillfully shows that if Web3 aspires to have some positive impact on human rights, at minimum it will need to adopt a similar approach of international cooperation, best practices, and consensus building to establish the responsibilities of Web3 corporations in society. We are not starting from scratch. There are clear lessons and best practices to build upon.


As Rebecca completes her fellowship in late September 2021, the reader is reminded the views expressed here are entirely her own. We wish her well as she continues to blaze important paths for digital civil society.


— J. Dotan, Founding Director, The Starling Lab



The decentralized web, known to its participants and proponents as Web3, is evolving quickly and attracting regulators’ attention. Mainstream media coverage of the nascent sector focuses heavily on the celebrity founders, evangelists, speculators, scam artists, and even criminals who have been early adopters of cryptocurrencies in ways that captivate news headlines. Policymakers and media pundits know little about the community of people working to apply distributed ledger technology to create cryptographic assets, utilities, and infrastructure that decentralize the architecture of the web to make it more egalitarian and innovative. Innovators in the Web3 space point to a better future in which internet users’ lives are no longer shaped by a handful of Big Tech monopolies’ drive to expand their market caps while remaining completely unaccountable to their users—or even their shareholders.


To most proponents, the decentralized web offers an exciting opportunity to hit the reset button on many things that went wrong with Web 2.0. While it promises many novel innovations, Web3’s grandest ambitions sound eerily familiar. The Web3 world has adopted a narrative that proclaims it will not only provide a new economy but also set a new course for human freedom. Without an ounce of cynicism, I can say I’ve heard this story before. 


I do not pretend to be an expert on Web3. But I have spent the past two decades in the Web2 trenches: researching and engaging with internet platforms—Web2 companies—about their impact on human rights. A decade ago, I wrote a book in which I challenged the pervasive optimism of the time about social media’s liberating and democratizing potential among leaders in government, media, and activism across much of the world. Instead, I called for companies, governments, and civil society to address the question of how technology should be designed, operated, and governed to support and sustain human rights for all people across the world. I warned that if the trend lines of 2012 continued without governments and companies taking responsibility and being held accountable for how technology was affecting human rights, the whole world was heading not towards greater democracy and freedom—but in the direction of networked authoritarianism.


Fast forward to late 2021. Suffice it to say, public opinion and dominant views among policymakers about internet platforms’ impact on society in general, and democracy in particular, have taken a dramatic and negative turn. Many proposed policy “solutions”—to the extent that there is consensus about the actual nature of the problems that internet platforms have created or exacerbated—threaten to kill positive aspects of Web2 along with the bad.


For the past 17 years I have been working with human rights groups, academic researchers, investors, and companies themselves on questions of how digital platforms can and should take responsibility and be held accountable for their impact on the rights and freedoms of people around the world.  The Web3 community has a unique opportunity to benefit from the lessons that all of us have learned along the way. 


While the list of feature requests and “bug fixes” for Web3 to remedy is long, the purpose of this document is to give some perspective on where to begin. Looking back, it’s now clear that many of the mistakes and shortcomings of Web2 could have been addressed—or at least better anticipated and mitigated—early on.


From the very beginning, Web2 set unnecessarily high expectations for what it could accomplish in advancing human rights. It is not realistic to expect any technology solution to result in zero harm, especially when deployed at scale. Part of being accountable is having the humility to understand that; working actively to identify risks and to mitigate, if not completely prevent, harms; and addressing problems when they occur. History will repeat itself unless we face the reality that no technology or innovation is capable of saving humanity from itself.


We must actively take responsibility for building, operating, and governing technology in a way that supports and sustains the type of world we want to leave behind for our children and grandchildren. There is no silver bullet. Being human, working with and doing well by others, is always going to be hard work—no matter how brilliant and well-resourced our technology might be. Web3 cannot change that any more than Web2 could.


This essay sets forth several basic suggestions that I hope can serve as five starting points for further work.

  1. Recognize that if you think you are neutral, you are not.
  2. Work to understand what it really means for your business to make a meaningful commitment to respect and protect data integrity and human rights.
  3. Be proactive in identifying potential human rights risks.
  4. Consider the impact of business models and corporate incentives.
  5. Establish effective impact assessment, stakeholder feedback, participation, and grievance mechanisms from the beginning.


Taken together, these steps not only point to how Web3 companies might consider their own policies; they also suggest how Web3 might contribute to a new kind of collective and regenerative digital civics—one that includes a role for users, technology, and society—that was always lacking from Web2.


This is especially important as Web3 is now beginning to face its own regulatory scrutiny. Web3 can do more than merely prepare for that scrutiny by hiring legions of lobbyists, or by eventually retrofitting oversight solutions onto a business model or corporate culture after they become socially and politically toxic. A more productive and successful way forward is to acknowledge forthrightly that, like any nascent and fast-evolving sector, Web3 certainly has blind spots and inevitably will face challenges in addressing human rights issues. The best way forward is for Web3 to be transparent and work proactively with a broad set of stakeholders, including governments, as it learns how to course correct for a better future.




1. Recognize that if you think you are neutral, you are not


Many Web2 companies were built on a founding myth that technologies and platforms are intrinsically neutral. When problems occur, it is the fault of human users—not the technology. Web3 companies must learn from the resulting mistakes.


In 1985, before the Web was even invented, technology historian Melvin Kranzberg gave a speech outlining “Kranzberg’s laws” of technology and history. His first law could not be more relevant today: “Technology is neither good nor bad; nor is it neutral.”


Unfortunately, the leaders of the Web2 age did not follow Kranzberg’s first law—or heed his warning against “unforeseen consequences when apparently benign technologies are employed on a massive scale.” In their 2013 book, The New Digital Age, Google’s former CEO Eric Schmidt and Jared Cohen (who has worked for both Google and the U.S. State Department as an advisor on internet freedom and counter-terrorism) declared what they saw as the “central truth of the technology industry—that technology is neutral but people are not.” Problems arise, they argued, when people blame technology for human shortcomings.


We now have carefully researched histories from the early Web2 days that show how the pretense of “neutral” content moderation, which failed to consider the power dynamics, inequalities, and vulnerabilities of different types of users, resulted in harms to many individuals and communities.


The 2020 findings from a Civil Rights Audit of Facebook’s content moderation practices underscored how leaders’ claims and even aspirations of neutrality can end up amplifying society’s biases and inequalities. While company executives touted their vision of the platform as a neutral arbiter for free speech, the company’s failure to consult or hire civil rights experts at senior levels resulted in actions that were “devastating” for minority and vulnerable users. Furthermore, in Facebook’s case, a platform that claimed to be neutral actually ended up favoring users with superior resources, organization, and online marketing acumen who could leverage the company’s targeted advertising business model. Similarly, Twitter executives claimed for years that their content moderation policies and practices were neutral. Yet scholars documenting the use of Twitter to advocate for the rights of marginalized groups found that in reality, activists frequently experienced how the company’s terms of service enforcement produced racist and sexist outcomes.


Today we are watching Web2 companies being called to account by lawmakers in the U.S., Europe and beyond for what many perceive to be negative social, political, and economic impacts of their businesses. How might things have turned out differently if the founders of major Web2 platforms had followed Kranzberg’s first law? 


What if Web2 leaders had understood that in an inequitable and unjust world, no institution can actually be “neutral” about the power dynamics and inequities among the people it serves, without ultimately—even if inadvertently—amplifying or perpetuating them?


As Web3 companies set forth to provide new fundamental protocols, there is a sense that the paradigmatic push for decentralization is both technical and philosophical. Web3 leaders are not necessarily claiming to be neutral—which is an improvement. However, the community’s prioritization of code-based trust models and autonomous organizations strays into the kind of techno-solutionism that was at the core of Web 2.0’s most naïve claims to neutrality. Technology can never really supersede a human process; it can merely augment it.


Web3 has the benefit of hindsight and clear indicators of the human rights problems that this sector’s businesses will confront—and the harms that they need to prevent and mitigate—as they evolve and scale in the coming decades. Web3 leaders and communities have an early opportunity to embed commitments into Web3 institutions, governance, and business practices. Setting clear human rights standards for Web3 can help to avoid repeating Web2’s mistakes—and the accompanying consequences for humankind.


2. Understand what human rights really mean for your business


Web2 companies’ understanding of their impact on human rights was belated, reactive, and ultimately far too narrow. Web3 companies that commit to respect and protect human rights need to be holistic and embrace a robust definition of human rights from the start. They should work with experts in the field of technology and human rights to understand the full range of ways that a company’s technologies and operations could potentially cause, contribute to, or otherwise be linked to harms—to individuals as well as communities.


The U.N. Guiding Principles on Business and Human Rights, endorsed by U.N. member states in 2011, set out the foundation for strong governance and oversight of human rights risks and impacts by all types of companies through their “protect, respect, and remedy” framework. In order to demonstrate respect for human rights, companies should make public commitments to human rights, conduct thorough due diligence to identify and mitigate human rights harms, and provide remedy to address the negative consequences of any harms that the company might cause or contribute to. The basic principles apply as strongly to Web3 as to Web2. The challenge lies in operationalizing them across complex, fast-evolving industries and businesses. The globally distributed nature of Web3 makes their implementation especially challenging.


When companies commit to support and advance human rights, they need to make sure they fully understand what that means for their entire business and industry. In the mid-2000s, when I started talking to Web2 companies about the human rights implications of their businesses, most people—not just companies but also activists and researchers, including myself—focused on a narrow set of human rights harms caused by demands that governments around the world make on companies to censor content, hand over user data, and assist with the surveillance of customers. Companies faced blistering condemnation from lawmakers and human rights groups for their complicity in Chinese human rights abuses because they complied with censorship and surveillance demands. But the human rights dimensions of other aspects of Web2 company operations—including content moderation, commercial data collection, and targeted advertising—were not well understood and generally not addressed by company-targeted advocacy campaigns or regulatory proposals in Washington until much later.


Getting even a few major Web2 companies to accept responsibility for human rights violations against users caused by government demands—let alone anything else—remains an uphill battle. The challenges faced by Yahoo, Microsoft, Google, and other companies in China in the mid-2000s eventually led to their public acknowledgment that they can and should be held accountable for where, when, how, and whether they respond to government demands around the world. In 2008, several leading Web2 companies became founding members of the Global Network Initiative (GNI), a multi-stakeholder organization which requires companies to make commitments around how they will work to respect and protect users’ human rights in the face of government demands. They commit to conduct due diligence before entering new markets or rolling out new products and services and to use risk analysis to decide whether to enter a market at all (one reason Facebook—which joined GNI in 2013—is not actually in China). They commit not to respond to illegal or informal government demands, and to interpret demands as narrowly as possible. They commit to transparency about their policies and practices for responding to government demands (a commitment that spawned the widespread practice of transparency reporting after Google published its first transparency report in 2010). GNI members are required to undergo a pass-fail assessment overseen by a multi-stakeholder governing board including human rights groups, academics, and investors to determine whether they are satisfactorily adhering to GNI principles.


As was painfully highlighted by the recent government-forced removal of a Russian election app from Android and Apple app stores, the GNI has certainly not eliminated violations of platform users’ rights by governments that abuse their censorship and surveillance powers. Yet GNI members’ commitments and underlying practices have nonetheless made a material difference for internet users around the world. Google’s transparency report describes thousands of cases in which the company has refused to remove content or hand over data, in compliance with its GNI principles and implementation guidelines. Having watched a number of leading Web2 companies develop practices for evaluating and reporting about government demands since 2005 when they had no coherent approach to protecting users’ rights, I have concluded that arbitrary government censorship and surveillance via Web2 platforms would be even worse than it is today by several orders of magnitude without baseline industry standards for platforms’ policies and practices in responding to government demands.


Unfortunately, however, the focus on government demands as the main threat to Web2 users’ human rights has proven to be much too limited. Over the past decade, events have highlighted how companies’ own business operations and processes are causing harms to individuals and communities. But because Web2 companies’ human rights policies and commitments—to the extent that they have existed at all—have tended to focus only on government demands, most company executives have not until recently even considered the human rights implications of a wide range of business practices, let alone how to identify, understand, prevent, or mitigate harms that those practices cause.


New America’s Ranking Digital Rights (RDR) research program sought to fill this gap by developing human rights standards and benchmarks for companies that build beyond GNI’s narrowly focused standards to include content moderation, commercial data collection, algorithms, and targeted advertising. Since 2015, there has been a significant and laudable increase in the number of companies covered by the RDR Corporate Accountability Index that make explicit commitments to respect and protect human rights. But as the most recent Index report points out, very few companies show any evidence of systematic processes that help them even define the scope of human rights implicated by their business, let alone identify and mitigate risks.


GNI is an example of how some leading Web2 companies recognized that they could not define, let alone solve their problems alone: they spent several years working with human rights groups, investors, and academic experts to hash out a set of principles and implementation guidelines. Unfortunately, however, GNI came together only after the companies faced human rights crises in China. 

Web3 companies need to be proactive from the start about mapping out their human rights risks, informed by collaboration with a range of other stakeholders including human rights groups and independent researchers.




3. Be proactive in identifying potential human rights risks


Emboldened by a presumed sense of historic purpose, Web2 companies generally retrofitted human rights due diligence and risk assessments onto their operations in an ad-hoc and limited way. Many others conducted little due diligence at all, invoking human rights victories like the toppling of dictators in the Arab Spring as proof of the inevitability of progress. Web3 companies have an opportunity to be much more realistic and proactive from the beginning. Yet as companies try to champion early adopters as proof of their legitimacy, it’s easy to get carried away. From a human rights standpoint, El Salvador’s adoption of Bitcoin as legal tender was not exactly a “success.” The Web3 community doesn’t have to make the mistake of promoting all examples of Bitcoin adoption as positive. From the top down, leaders have an opportunity to set the standards and public expectations for what good practice looks like in addressing human rights risk scenarios.


For Web3 companies seeking to ensure that human rights harms are addressed and mitigated when they are identified—whether by staff, members of their user communities, or external stakeholders—it is important to set up accountability mechanisms and corporate governance structures that impose consequences for ignoring or failing to act upon such information. This is not easy for any industry: a 2015 report by the Economist Intelligence Unit described the long and difficult road traveled even by companies with strong human rights commitments and due diligence processes. Veterans of corporate efforts to respect and protect human rights around the world understand that while perfection is not possible, every step forward cumulatively makes a huge difference for millions of people around the world.


Web3 companies should do what Web2 companies only started doing after being condemned widely for complicity in authoritarian human rights violations: human rights due diligence and formal human rights impact assessments. The good news is they will not have to start from scratch. In contrast to the early Web2 days, there are many excellent resources, as human rights impact assessments become more commonplace across different business sectors. For example, the Danish Institute for Human Rights has published comprehensive guidance and a practical toolbox, and a California-based non-profit consultancy has also published a detailed guide. The UN Office of the High Commissioner for Human Rights also runs a tech industry-focused program called B-Tech, which has been working with industry and all of its stakeholders to develop comprehensive guidance for implementing the UN Guiding Principles, including impact assessment and remedy.


To understand what risk assessments need to accomplish, it is instructive to understand what “human rights risks” have meant in the Web2 context and what some of the blind spots have meant for users and affected communities. In creating a methodology for evaluating whether Web2 companies’ policies and practices were compatible with human rights standards, Ranking Digital Rights (RDR) started by mapping out a set of human rights risk scenarios. An initial set of scenarios describing how Web2 users’ freedom of expression and privacy can be violated was developed for earlier editions of the RDR Index, which focused primarily on direct harms to individual users. Here are just two of many examples:


●        “The company removes content and fails to inform users that content was removed, why it was removed, and under whose authority (e.g., national law, the company’s terms of service) it was removed.”


●      “A legal jurisdiction requires the company to retain data about its users’ online behavior for a limited period of time. After the time period elapses, the company retains this data, due to neglect, commercial purposes, or other reasons. When a security breach occurs (either during or after the data retention period), the users’ personal data (IP addresses, websites visited, possibly messages sent) are made publicly available.”


Those scenarios were later expanded to cover individual as well as collective human rights harms associated with Web2 companies’ targeted advertising and algorithmic systems. A few examples:


●      Company A uses an algorithm on its platform to generate “affinity groups” that advertisers can use to target specific audiences. The algorithm recognizes certain patterns in user profiles, and determines that people who express hate toward Ethnic Group A are a valuable audience for advertisers. A hate group associated with Ethnic Group B uses this affinity group to spread hate speech targeting Group A to the users who are most likely to engage with that content.


●      Company A, which depends on advertising revenue, provides access to a subset of internet services (including its own platform and websites) to its users at no financial cost (“zero-rating”), thus incentivizing users to favor those services over competitors and ensuring that the company is better able to track users’ online activity and serve them more “relevant” ads. De facto limited access to a broad choice of online information sources creates enabling conditions for human rights violations.


●      A social media platform decides to scale up its algorithmically generated group recommendations in order to increase engagement among its users. The algorithms it applies operate opaquely and the company cannot predict recommendations in advance. Since social media algorithms are driven by engagement, the company’s group recommendation algorithm promotes the content that is most likely to be clicked on, without examining its impact on human rights. The group recommendation algorithm starts suggesting that users following anti-immigration and racist accounts join groups dedicated to discussing xenophobic and anti-immigrant views. As users act on the suggestion, the algorithm suggests the group to more and more users, including many who are statistically similar to group members but don’t follow any anti-immigration or racist accounts themselves. Over time, some users adopt xenophobic and anti-immigrant views, and even commit acts of violence against immigrants in their communities.


Using indicators developed from such scenarios, RDR’s evaluation found that GNI member companies scored better on indicators related to government demands than other companies. That is because these companies had clearly identified and worked to mitigate human rights risks caused by government demands. Yet most have failed to address the human rights risks associated with other areas of their businesses, including content moderation, use of algorithms, or targeted advertising business models. Furthermore, GNI Web2 companies’ understanding of freedom of speech has centered on censorship, without taking into account how organized disinformation campaigns, hate speech, and harassment can affect people’s fundamental right to freedom of expression. As one respected human rights scholar recently pointed out, Article 19 of the International Covenant on Civil and Political Rights encompasses not only free speech, but also the right to access information and to formulate opinions without interference.


Furthermore, Web2 companies have generally failed to identify risks associated with other key human rights included in the only globally recognized set of human rights standards in existence today: the Universal Declaration of Human Rights and the two accompanying UN Covenants on civil and political rights and on economic, social, and cultural rights. It has become clear in recent years that human rights affected by the operations of Web2 companies include the right to life, liberty, and security of person (UDHR Article 3); the right to non-discrimination (UDHR Article 7, Article 23); freedom of thought (UDHR Article 18); freedom of association (UDHR Article 20); and the right to take part in the government of one’s country, directly or through freely chosen representatives (UDHR Article 21). RDR’s evaluation of Web2 companies’ policies and practices reveals a widespread failure to make commitments to protect and respect these rights alongside freedom of expression and privacy.


For Web3, the due diligence process starts by mapping out all the possible ways that the industry might cause, contribute to, or otherwise be linked to harms to individuals as well as communities.


Such an analysis would begin with an examination of how today’s Web3 users interact with and are affected by the technology, and what that means in turn for their ability to enjoy the full range of human rights.


For any business, comprehensive due diligence naturally includes an analysis of its relationship with its own workforce, suppliers, and contractors. While this paper does not focus on labor issues, it is important to flag Web2’s struggles with workforce dissent, exploitation of contractors hired to carry out content moderation and process data used by algorithms, and the negative socioeconomic impacts of platform gig work. Web3 may have different problems related to labor exploitation and power, but a responsible business committed to upholding human rights needs to identify and address such problems clearly and proactively.


Any analysis of a company’s human rights impacts must also examine how the power dynamics among different types of people within and across communities and nations might be affected as Web3 innovations achieve wider adoption. Once Web2 platforms achieved widespread adoption and massive scale, they had a powerful impact on the social, economic, and political dynamics of many countries, with massive implications for civil liberties and human rights. The use of social media to incite and justify genocide in Myanmar and the viral spread of medical misinformation across the world are only two of countless examples of how companies have failed to anticipate, prevent, or mitigate the negative impact of their innovations on human society.


Given Web3’s focus on decentralized services, transactions, and compensation, a key question for human rights impact analysis revolves around how Web3 technologies will affect economic, social, and political dynamics within and across nations as these companies evolve and scale. What, ultimately, are the human rights implications of Web3—not only for civil and political rights, but also for economic and social rights? Observers of Web3 have pointed out that issues related to access, participation, and representation in Web3 governance have the potential to exacerbate not only social inequality, but also power imbalances between different types of people, leading to exploitation and discrimination. In his paper on Web3 “cryptoeconomics,” Nathan Schneider argues that Web3 platforms tend to be designed and governed around economic incentives to the exclusion of all else that is important to human beings. The table is thus set for a range of harms to which Web3 enterprises will be blind unless proactive efforts are made.


Importantly, the Web2 experience teaches us that it is not enough for companies to consider their impact on the human rights of people who have actively chosen to use a platform or service, or to participate in a specific community. Facebook, for example, failed to consider how Rohingya Muslim minorities in Myanmar who aren’t users of the service might end up being victims of genocide that was fomented and organized on the platform. Web3 has the same responsibility to consider and mitigate potential harms to people who are affected by its technologies but don’t actively use them. Some scholars are concerned that the damage inflicted on democracy by Web2 could be overshadowed by Web3 if proactive efforts are not made to identify and mitigate potential harms. Schneider has warned of the dangers of Web3 cryptoeconomics which, as a “neoliberal aspiration for economics to guide all aspects of society,” could threaten “democratic governance and human personhood itself.”


It is also important to remember that no company—or industry—is capable of assessing its human rights risks and establishing processes to address them without collaborating with a broad ecosystem of external stakeholders and experts. Internal feedback and deliberation are important, but organizations and platforms—especially in their early days—tend to be populated by self-selecting early adopters who do not reflect the broader range of people who will use the technology or be affected by it once it scales. Internal governance and feedback processes, even Web3’s innovative token-holder voting mechanisms, are no substitute for the expertise, the lived experience of vulnerable groups not represented in the organization, and the critical independent research that Web3 companies need in order to overcome their blind spots.


Other industries—from extractives to manufacturing to tech—have formed multi-stakeholder initiatives, processes, and organizations to help them identify risks, develop mitigation strategies, and hold one another accountable. While the GNI is one example from the tech sector, examples from other sectors include the Extractive Industries Transparency Initiative and the Fair Labor Association. Again, none of these organizations is perfect by a long shot, as critics have pointed out in detail, and none has managed to completely stop the violations they seek to prevent. Still, one can make a strong case that they have helped set a clear baseline for how companies should work with other stakeholders to improve their respect for human rights, even while the ideal “ceiling” remains out of reach. Company due diligence is also informed by independent research and benchmarks: RDR covers digital rights, Know the Chain benchmarks company efforts to eliminate forced labor, and the Corporate Human Rights Benchmark assesses commitment and practice across several sectors. Companies conducting risk assessments commonly refer to the indicators used by these benchmarks to help define risks and mitigation strategies. It is not too early for Web3 companies to start thinking together as an industry about how to proactively support and engage with a broader ecosystem of stakeholders that will help them identify risks and harms, and help hold them accountable.



4. Consider the impact of business models and corporate incentives


When the first Web2 companies that eventually became global behemoths were working with their investors toward an eventual IPO, the founders and investors ignored the question of whether their business models and governance structures might lead to socially harmful outcomes. They failed to heed Kranzberg’s law of unintended consequences: when you scale, unforeseen harms and consequences will inevitably follow. The consequences of defying this law will be no less severe for Web3 than for Web2, and could potentially be even worse.


As Wikimedia’s research team recently pointed out, when a company’s business model is targeted advertising and its core purpose is to maximize value for shareholders, the result is a set of top-down content moderation policies and priorities designed to maximize content flow and engagement. Other business models can incentivize different priorities. For non-profit platforms created to serve a public interest purpose, not only are the incentives different, but so is the entire process of creating and enforcing rules. In Wikipedia’s case, the platform’s purpose is the free sharing of knowledge, and its content policies are developed and enforced by decentralized communities of editors. Appeals and decision-making processes are also decentralized and implemented by people with knowledge of the cultural context and language related to the content in question.


Some scholars of the online world have concluded that the answer to the social harms caused by the Web2 giants is “digital public infrastructure”: non-profit platforms designed for and by the communities they serve and funded, perhaps, by taxing the Big Tech platforms. Others hope that increased antitrust enforcement can help spur a proliferation of smaller for-profit platforms that offer alternatives to the Web2 giants, serve smaller communities coming together for specific purposes, and have business models deliberately not based on targeted advertising—ideally including cooperatives, B-Corps, and other types of businesses more deliberately grounded in and organized around social purpose commitments.


Some Web2 companies serving smaller or more niche communities are experimenting with better ways to handle content moderation. For example, Cornell University’s Citizens and Technology Lab has worked with Reddit and other platforms to help their volunteer community moderators design rules, interventions, and enforcement mechanisms that reduce harassment and bolster the effectiveness of fact-checking. As I suggested toward the end of a paper published last year for the University of California National Center for Free Speech and Civic Engagement, universities and other educational institutions have a role to play in this new ecosystem as well. In collaboration with the cities and towns where they are located, universities and libraries are ideal starting points for incubating and piloting new types of platforms that benefit society and all of its members—not just early-adopting elites. Such initiatives could help incubate new businesses as well as non-profits.


Web3 businesses can certainly play a role in challenging the dominance of Big Tech, while also helping to build a new ecosystem in which business models not focused on maximizing advertising revenue enable communities to design content-related rules and enforcement mechanisms tailored to specific communities and contexts, rather than applied and enforced globally for millions of users. Web3 governance models, generally based on voting by token-holders, would seem to lend themselves to rulemaking and problem solving by a community of user-owners. Many of the risks associated with Web2 businesses do not apply to Web3 businesses: targeted advertising is not the economic engine of Web3, and cryptography combined with distributed systems allows users to retain sovereignty over their data and keep their interactions encrypted. However, Web3 should recognize that even after solving the targeted-advertising problem and the problem of government demands to hand over or remove centrally held data, a company can still fail to anticipate and address other harms it may cause or contribute to in the future.


Some Web3 companies are aware that excessive homogeneity and insider group-think can be dangerous when voting by token-holders is the central form of governance. This has led to notable experimentation aimed at making governance processes more inclusive by actively bringing in a broader set of stakeholder viewpoints. Andreessen Horowitz (a16z), which holds substantial stakes in protocols including Compound, Uniswap, and Celo, has announced a token delegation program through which it delegates voting power in the decentralized autonomous organizations (DAOs) that govern those protocols. Recipients include university organizations and non-profits. a16z has committed to full transparency about how delegations are made and who the recipients are. This is a fascinating experiment in accountability and inclusion in how blockchain protocols evolve, and a vector for expanding participation in decision-making to a much broader set of stakeholders than is the case with traditional listed corporations of any kind.
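The mechanics of token-weighted voting with delegation can be sketched in a few lines. The following is a hypothetical illustration only, not a16z's program or any real DAO's on-chain implementation; the class and method names are invented for the example.

```python
class TokenVoting:
    """Minimal sketch of token-weighted DAO voting with delegation (hypothetical)."""

    def __init__(self):
        self.balances = {}     # holder -> token balance
        self.delegations = {}  # holder -> delegate who votes on their behalf

    def delegate(self, holder, delegate):
        # A large holder (e.g. a fund) hands its voting power to another party,
        # such as a university organization or non-profit.
        self.delegations[holder] = delegate

    def voting_power(self, member):
        # A member's power is their own tokens (unless delegated away)
        # plus all tokens delegated to them by others.
        own = 0 if member in self.delegations else self.balances.get(member, 0)
        received = sum(bal for h, bal in self.balances.items()
                       if self.delegations.get(h) == member)
        return own + received

    def tally(self, votes):
        # votes: member -> "yes"/"no"; each vote is weighted by voting power.
        totals = {"yes": 0, "no": 0}
        for member, choice in votes.items():
            totals[choice] += self.voting_power(member)
        return totals
```

Under this toy model, a fund holding 1,000 tokens that delegates to a club holding 10 gives the club a voting weight of 1,010, while the fund's own direct voting power drops to zero, which is the inclusion effect the delegation program aims for.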


While it still does not necessarily help an organization evaluate its potential risks and human rights impacts in a systematic way, DAO voting could help build support for such processes and mechanisms, and hold community members accountable to them, assuming the invited stakeholders share a commitment to and understanding of human rights in relation to their business. It might be helpful for token holders and their delegates to undertake deliberative polling and other learning exercises to ensure they have taken the time and effort to understand what they are voting on. Beyond voting, a group of scholars has recently proposed the development of an open standard for networked politics. “Modular Politics” would enable the creation of interoperable and easily duplicated technical modules for online community governance that could be adapted and modified to suit different communities’ governance needs.
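To make the idea of interoperable governance modules concrete, here is a rough sketch of what a pluggable decision-making interface might look like. This is my own hypothetical illustration, not the Modular Politics proposal's actual specification; the interface and module names are invented.

```python
from abc import ABC, abstractmethod


class GovernanceModule(ABC):
    """One interchangeable unit of community decision-making (hypothetical)."""

    @abstractmethod
    def decide(self, votes: dict) -> bool:
        """Return True if the proposal passes, given member votes."""


class MajorityVote(GovernanceModule):
    # Simple one-member-one-vote majority.
    def decide(self, votes):
        ballots = list(votes.values())
        return sum(ballots) * 2 > len(ballots)


class CouncilVeto(GovernanceModule):
    # Wraps any other module; a designated council can veto its outcome.
    def __init__(self, inner, council):
        self.inner = inner
        self.council = council

    def decide(self, votes):
        # Any council member voting "no" blocks the proposal outright.
        if any(not votes.get(m, True) for m in self.council):
            return False
        return self.inner.decide(votes)
```

A community could swap `MajorityVote` out for `CouncilVeto(MajorityVote(), ...)`, or any other implementation of the same interface, without changing the rest of its governance stack; that composability is the interoperability such a standard would aim for.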


Still, it is important to remember that neither voting nor other more innovative models for community participation and governance will ensure that a Web3 company upholds human rights standards, or that people in the company or token-holder community who abuse power will be held accountable. Given that Web3 platforms not only operate around exclusively economic incentives but also, for the time being, exist outside most nations’ regulatory oversight frameworks, Schneider warns that they are “even more vulnerable to runaway feedback loops, in which narrow incentives overpower the common good.” Public human rights reporting, a high degree of transparency that enables abuses of power and violations of rights to be exposed and addressed, and credible human rights due diligence (increasingly required by law for companies over a certain size across Europe) all have a role to play in ensuring that companies’ business models, governance, and incentive structures have not produced disastrous blind spots.


5. Develop meaningful grievance and feedback mechanisms from the beginning


In retrospect, the most rudimentary error Web2 made from the start was letting hubris deny it the chance to gather data on its errors as early as possible. For all the focus on big data, A/B testing, and quantitative product development, Web2 leaders assumed that, on balance, more usage of their products would do more good than harm. Years later, acting on rich and honest feedback from users is still not part of the culture. As the Wall Street Journal’s recent investigations of Facebook have shown, the company ignores even extensive evidence and warnings from its own employees. More troublingly, the reporting showed that the company largely fails to address problems outside the U.S. because it lacks enough local-language employees and content moderators in developing markets to build effective feedback mechanisms.


All industries have struggled to meet their responsibility to establish mechanisms through which people who have suffered harm can file grievances and obtain redress—and to support governments in establishing appropriate legal and judicial mechanisms. Web3 can be proactive in establishing grievance and remedy mechanisms from the outset.


An essential component of the UN Guiding Principles, grievance and remedy mechanisms are vital for understanding the risks and potential harms your business might cause or contribute to as it scales, and for making sure you hold yourself accountable and are held accountable by others. What does it actually mean for a company to have strong grievance and remedy mechanisms? The answer varies widely by industry. Take multinational oil companies, for example. Grievance and remedy mechanisms for the oil industry are expected to address violations of workers’ rights; violence against local populations committed by private security forces employed to secure oil facilities in areas where companies conduct drilling operations; and displacement and environmental degradation that harm local communities.


In these scenarios drawn from the oil and gas sector, governments also have an obligation to regulate against harms and establish effective mechanisms for legal remedy. That is true for other sectors as well, including tech. But especially where governments are weak, dysfunctional, or themselves routinely violate human rights, or where lawmaking is corrupt, companies nonetheless have a responsibility to offer private grievance and remedy mechanisms. Moreover, even in well-governed nations the law generally fails to keep pace with economic and technological developments, which means that companies committed to operating in a manner consistent with human rights norms need to be proactive in setting up policies and practices that enable them to prevent and mitigate human rights harms. Physical businesses face complex challenges; Web2 companies with billions of users across the world face a different dimension of human rights risks relating to the digital rights of users.


Web2 companies notably failed to develop effective mechanisms for users and other affected parties to file grievances and seek meaningful remedy. When it launched in 2008, GNI’s members—companies, human rights organizations, researchers, and investors—committed to work together to develop effective grievance mechanisms for the sector. More than a decade later, they have made little progress. The major U.S.-based Web2 companies, including GNI member companies, have fared poorly on RDR’s evaluation of whether they have put in place clear, accessible, and effective mechanisms through which users and affected communities can register grievances and obtain redress when the companies’ operations cause, contribute to, or are linked to violations of their human rights.


For Web2, grievance and remedy processes should ideally cover the range of human rights risks faced by the sector, such as those described in the previous section and many others mapped out by RDR. In practice, Web2 companies have focused primarily on developing appeals mechanisms for users who believe that they are victims of content moderation mistakes. But news media and researchers continue to report myriad cases in which the appeals mechanisms fail many ordinary users, while celebrities or people with contacts in the media are able to draw attention to their cases and get content or accounts reinstated.


To Facebook’s credit, its innovative Oversight Board (OB) was established to adjudicate user grievances about the company’s content moderation decisions. However, as many critics have pointed out, the OB has limited remit and power. Many types of harm—especially harms to groups of people, and to people who are not themselves Facebook users—lie completely outside the OB’s scope. Nor is the OB empowered to address the company’s data practices. A short-lived experiment in inviting users to vote on privacy policies was scrapped in 2012 due to insufficient participation; since then, the only meaningful mechanism for holding Web2 companies accountable for privacy abuses has been government regulation, where it exists in human rights-compatible form. Violation of workers’ rights and failure to protect workers across the digital and physical supply chain is another area where private and judicial remedy has been grossly inadequate.


As recent media reporting has shown, many of the harms associated with Facebook’s content moderation systems and priorities were known to the company’s own staff. While there was an internal process for staff to express concerns, and even a research unit tasked with identifying issues, management was under no obligation to use the information or remedy the problems that staff and internal research had identified. Companies’ whistleblowing mechanisms—reportedly inadequate at major Web2 companies—are also vital.


What would comprehensive grievance and remedy mechanisms for Web3 even look like? It is too early to say, given that Web2 has not managed to expand its own grievance and remedy work much beyond content moderation. Much more experimentation and innovation will be required. But as discussed in the previous section, all companies and communities have internal blind spots that can only be overcome through broader collaboration across the industry and with external stakeholders, including human rights organizations, academic researchers, investors, and potentially other categories of people such as members of the technical community and even government agencies or inter-governmental bodies with relevant expertise. Examples of collaborative initiatives in the Web2 space that have emerged belatedly to address problems of online extremism include the industry-led Global Internet Forum to Counter Terrorism and the government-led Christchurch Call, both of which include multi-stakeholder advisory networks.





For any type of business, making meaningful human rights commitments and actually implementing them in an effective, credible way is difficult, endless work. For Web3, the sector’s newness is both an extra challenge and an opportunity: to innovate, and to prevent history from repeating itself by putting commitments and processes in place from the start.


Be realistic, and collaborate: you are not alone, and you cannot meet your human rights commitments—or even understand them—without help from a broader community of stakeholders. Where progress has been made toward corporate accountability in Web2, it has happened as a result of collaboration across industry and with other stakeholders, including civil society, researchers, socially responsible investors, and policymakers.


Unfortunately, most progress in Web2 has been spurred in response to crises caused or exacerbated by companies’ failure to understand the human rights implications of their businesses from the beginning. The challenge and opportunity for Web3 is to build companies that are committed to human rights from the start, and whose very business models and operations are consistent with human rights standards before they achieve massive global scale.


A final note about regulation and working with governments. Today, the Web3 community is struggling with how to respond to regulatory scrutiny. The experience of Web2 makes clear that the best way to invite ham-fisted, counter-productive regulation is to be defensive and to dismiss critics as cretins and Luddites who don’t understand the technology. A more productive and successful way forward is to acknowledge forthrightly that, like any nascent and fast-evolving sector, Web3 certainly has blind spots, and its growth will inevitably have unintended consequences. Make clear public commitments to serve the public interest and respect human rights. Acknowledge that you need help figuring out how to do so as you scale. Help create spaces and processes for shared learning, policy innovation, and essential innovations in corporate governance and accountability. It will not be easy, but future generations will thank you for being humble and taking responsibility for your creations.