“An old tradition and a new technology have converged to make possible an unprecedented public good.” – Budapest Open Access Initiative, 2002
Twenty years later, the unprecedented public good of knowledge sharing remains possible. We have made significant strides since the BOAI to cultivate systems of sharing knowledge that are open by default and equitable by design. Open research policies are becoming the norm for research funders, and governments are making significant investments in open educational resources. Higher education leaders are actively grappling with changes to incentive structures to better align with open scholarship. There is growing momentum behind models for open access publications, such as Diamond and Subscribe to Open, that center the needs of the academic community and avoid barriers for both authors and readers.
We have the opportunity to accelerate this progress, and to do so, we must engage with the rapid changes to the digital environment that intersect with knowledge sharing systems and that could impact our ability to prioritize the public interest. These intersecting issues include the appropriate development and deployment of artificial intelligence, protection from privacy-invasive business models, and the cultivation of diverse digital platforms to avoid control by platform monopolies. By addressing these issues proactively and connecting with the broader communities working in these areas, we can ensure that the future of our knowledge sharing systems reflects the best interests of the academic community.
On September 19-20, representatives from different knowledge sharing organizations met in Lisbon to grapple with these issues and lay a foundation for working collaboratively to address them. The Lisbon meeting built on three virtual workshops focused on these issues of privacy, AI, and platformization.
Across these discussions, a common theme emerged: the need for academic control of academic systems and effective governance structures to ensure this control. Each of these emerging challenges is rooted in the prioritization of commercial interests and in governance structures that diverge from the public interest, and each risks slowing progress toward fully realizing systems that support sharing knowledge as a public good.
Artificial Intelligence
The explosive growth of new AI-based tools presents the opportunity to empower researchers, educators, and the public to interact with scientific and scholarly knowledge in unprecedented ways. As we work to help the community capitalize on this opportunity, we need to engage with the full lifecycle of AI development and use. Three key issues stand out as particularly important: governance of the ways in which the knowledge commons is used for AI training, in order to mitigate the risk of exploitation; curation and governance of datasets, to address biases that are then replicated by AI systems and to counter loss of context and provenance; and control and proper governance of AI tools that are deployed in research and education. While copyright may have a role to play, a broader set of strategies and mechanisms will be necessary to effectively address these issues. Knowledge sharing advocates should look toward regulatory interventions and toward making adoption of systems contingent on effective governance.
Platformization
In order to prioritize the public good effectively and globally, our systems for sharing knowledge must themselves be diverse and reflect the local contexts across which knowledge is produced and used. The Global South offers successful examples of community-governed research infrastructure that avoids reliance on proprietary platforms and serves the specific needs of its context. As platform monopolies form across many industries, we must protect and nurture diversity in our knowledge sharing systems and take proactive steps to avoid dominance by proprietary platforms in research and education. These steps range from making more informed individual and institutional choices about which platforms to use to pursuing public policy and regulatory interventions, such as antitrust enforcement.
Privacy & Surveillance
The issue of privacy is deeply connected to those of AI and platformization. By better protecting privacy, we can both reduce the risks posed by business models that monetize personal data and limit the rise of proprietary platforms that rely on this data to grow and to stifle competition. Institutions can negotiate more assertively to limit data collection and non-academic use of academic data. These negotiations can inform future decisions about which systems to support and help transition, over time, toward community-governed, privacy-protecting systems. Policymakers can expand legislation related to privacy and algorithmic transparency to more directly cover systems used in research and education.
Protecting privacy, ensuring a diversity of digital platforms, and realizing the benefits enabled by AI technologies while limiting potential harms are not the only broader trends impacting knowledge sharing today. In discussions before and during Lisbon, others were raised, including labor and broader antitrust and market concentration issues beyond platformization. Knowledge equity and environmental justice advocacy also offer lenses through which these issues can be usefully explored.
Advocacy for openness across many domains (such as research, data, education, and heritage) has, in the past, gained strength by integrating efforts and seeking shared advocacy approaches. In the current environment, fostering closer collaboration with other movements and activists, from antitrust and privacy advocates to platform regulation and labor activists, is essential. Ongoing cooperation among open movements will be even more relevant, and in all of this work, the next generation should be at the core.
This post also appears on the Open Future blog.