Publisher: Alexander von Humboldt Institut für Internet und Gesellschaft

Internet Policy Review
Open Access journal
ISSN (Online) 2197-6775
Published by Alexander von Humboldt Institut für Internet und Gesellschaft
  • Ad hoc network

    • Authors: Kelsie Nabben; Ellie Rennie
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

      Definition of the term

      “Ad hoc” is Latin for “to this”, meaning “for this” or “for this purpose”. The term “ad hoc network” refers to the ability of members of a network to establish a network connection directly between devices. Yet ad hoc networks are relevant both in technical terms and in terms of the social, political and economic modes of self-organisation they enable. In most settings they also depend on technical standards, as well as regulatory and policy frameworks.

      A network can be described as ad hoc when it is self-provided and not reliant on an installed base of pre-existing infrastructure, except where it connects to external services (such as internet gateways). Thus, the attribute “ad hoc” often pertains to decentralised networks that do not rely on a central point of control. Instead, the network is composed of “peers”, each of which operates as a “node” that forwards packets of data to other nodes.

      Ad hoc networks require people to combine software and hardware tools to set up peer-to-peer infrastructure that provides access to temporary communication networks. Today, smartphone applications can create ad hoc networks through native Bluetooth or WiFi capabilities. This enables new network architectures for access and coordination through digital infrastructure. When long-standing, these can evolve into local area networks.

      The combination of ad hoc networks with other technologies, such as blockchain, enables new social, economic, and political possibilities for self-organising. An example of an ad hoc network is the temporary cryptocurrency economy, which has proven adaptive and responsive in connecting people, information, and resources, online and in person, for time-limited and specific purposes before disbanding.
For example, “Decentralised Autonomous Organisations” (DAOs) have operated as ad hoc networks: ConstitutionDAO collectively raised millions of dollars in an attempt to buy an original copy of the U.S. Constitution, and UkraineDAO raised millions of dollars in a matter of days in support of Ukrainian fighters in the conflict with Russia.

Origin

Ad hoc networks would not have come about were it not for a number of preceding developments in distributed communications networking research and development, unlicensed spectrum regulations, and open standards.

Distributed computing emerged in the 1960s as a potential route to networks more resilient against the threat of military attack. While working for the military research organisation RAND Corporation in the 1960s and 1970s, Paul Baran authored 13 seminal papers “On Distributed Communications” (RAND Corporation, n.d.). Baran is credited with inventing the idea of “distributed networks”, which went on to inform some of the attributes of the internet and of ad hoc networking (Yoo, 2018). Distributed networks require that all nodes be connected by multiple links to make a system robust against physical attack. Through these ideas, “it is thus possible to visualise a new set of systems based upon a distributed organisation” (Baran, 1967, 21). The concepts of “packet switching” and “store and forward” data transfer were pioneered to make distributed networking possible. Baran proposed that data could be divided into individual packets, termed “message blocks”, that would travel independently through a network and be reassembled once they reached their destination (a technique later known as “packet switching”, as termed by other independent, simultaneous inventors) (Yoo, 2018).
The other fundamental innovation for distributed networking that applies to ad hoc networks is that network data traffic operates on a store-and-forward routing algorithm, eliminating the vulnerability of a single centralised point of control being targeted by a foreign attack and causing a communications failure across an entire network (Baran, 1967; 1965).

From these origins, ad hoc architecture matters both as a technical architecture and as a political means for resilience and self-governance, rather than relying on existing infrastructure or third-party provision of infrastructure, as per the example of DAOs.

Evolution

Ad hoc networks have evolved in terms of usability, security, availability, complexity, and purpose.

Baran's propositions were fundamental for the architecture of the modern-day internet, which was originally an internal network or “intranet”, called the Advanced Research Projects Agency Network (ARPANET), that only authorised parties could access to share information (Abbate, 1999). The concepts of “message blocks” and “store-and-forward” transfer laid the foundation for distributed networks to automatically select routes for multi-hop communication between any two nodes on the network. However, the principle of non-hierarchical distributed networking was not adopted in ARPANET, as survivability of the network was not a priority (Abbate, 1999). This omission had consequences in the central points of control that manifested in modern-day internet architecture, which peer-to-peer decentralised technologies such as public blockchain networks seek to address.

From the late 1960s, researchers at the University of Hawaii developed wireless networking innovations to allow them to send information across islands and to link to ARPANET. ALOHAnet’s random access techniques formed the basis of Wi-Fi and mobile networking (Abramson, 2009). By the 1970s, the packet radio network (PRNET) proje...
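The multi-hop, store-and-forward behaviour described above can be sketched as a toy simulation. This is a minimal illustration with hypothetical node names, not any real protocol; production ad hoc routing protocols (e.g. AODV, OLSR) are far more involved:

```python
from collections import deque

def deliver(topology, source, destination, payload):
    """Store-and-forward delivery over a decentralised peer graph.

    Each node holds (stores) the packet and forwards it to its
    neighbours until the destination is reached -- no central router.
    Returns the payload and the multi-hop path taken, or None if
    the destination is unreachable (a network partition).
    """
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return {"payload": payload, "hops": path}
        for neighbour in topology.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no route to destination

# A small ad hoc mesh: A can reach D only via intermediate peer B.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
}
result = deliver(mesh, "A", "D", b"hello")
print(result["hops"])  # ['A', 'B', 'D']
```

Because every peer forwards for its neighbours, removing any single node degrades but does not necessarily destroy connectivity, which is the resilience property Baran's distributed design aimed for.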
      PubDate: Tue, 26 Apr 2022 09:10:59 +000
  • Independently-hosted web publishing

    • Authors: Daniel Villar-Onrubia; Victoria Marín
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

      Definition

      The adjective independently-hosted is used here to describe online publishing practices that utilise the World Wide Web (hereafter the Web) as a decentralised socio-technical system, where individuals and communities operate as the owners or controllers of the online infrastructures they use in order to share content. Such practices may be adopted as an alternative to, or as a complement to, the use of centralised content-sharing systems that belong to and are entirely operated by third parties. The term “publishing” is used here in a rather inclusive way and refers to the act of making content available online, rather than being restricted to the editorial processes that characterise, for instance, academic publishing.

      It involves the use of server space, usually obtained from a web hosting provider, to create a static website or to install a content management system (CMS) in order to create a self-hosted site. By contrast, a site that is not independently hosted could be exemplified by the use of a website builder entirely operated and controlled by a third party.

      Origin and evolution

      Independently-hosted web publishing is part and parcel of the Web as an information-sharing infrastructure, with the first website and web server established in 1990 (CERN, n.d.). While the Web was originally pitched as a solution to the problem of information loss at CERN, it was more generally envisioned as a system to help scientists share and access information from distributed locations across the world (Berners-Lee et al., 1994; Berners-Lee, 1990).
It was very soon adopted in other contexts, permeating other realms of life faster than any other information and communication technology had done before, resulting in an exponential growth of internet users, from less than 1% of the global population in 1990 to almost 50% in 2017 (International Telecommunication Union, n.d.).

As revealed by Bory et al. (2016), throughout the 1990s the discourse of the “founding fathers” of the web shifted from originally depicting their invention as “a technological tool made by servers and based on existing data which could be useful for specialised users” to claiming that it was envisioned as “a new medium useful for all the people owning personal communication devices (computers) that would profit of a new living and global system of shared knowledge” (Bory et al., 2016, pp. 1068-1069).

By the mid-1990s, the Web had already expanded well beyond academia. At that time, what the net artist and theorist Olia Lialina (2005) calls the “vernacular web” started to flourish, as people, acting as amateur web designers, learnt to express themselves in the incipient online public sphere. In that context, long before social media was established as a concept, forms of social networking materialised as websites connected to each other by means of hyperlinks, often listed as favourite links and sometimes forming circular clusters and virtual communities known as web-rings (Casey, 1998; Hess, 2007). While Geocities contributed to the rise of the vernacular web by enabling users, for the first time, to “create their own web pages without having to worry about the intimidating acronym soup of FTP, HTML, and the like” (Milligan, 2017, p. 137), much of that happened by means of independently-hosted websites and domain names purchased by their owners.

Issues currently associated with the term

Over the last three decades the Web has experienced significant socio-technical changes.
And besides real shifts, a mythology of radical transformation, embodied in the “discourse of versions” (Allen, 2013), from 1.0 to 2.0 and so on, has become widely accepted. However, the basic architectural principles underpinning the Web have remained fundamentally unchanged. As Kenneth Goldsmith, the founder of UbuWeb, a veteran website amassing avant-garde materials since 1996, reminds us:

“There’s a commonly held idea that it is impossible to be independent on the web anymore … What we tend to forget is that the bedrock architecture of the web is the same as it was decades ago. Everything I did twenty years ago on UbuWeb I still do today in an identical way, using the identical programmes, languages and tools. What was possible for UbuWeb in the beginning is still possible today”. (Goldsmith, 2020, p. 22)

Openness and decentralisation are two core principles of that architecture. In April 1993, CERN put the key software components of the Web (the basic line-mode client, the basic server and the library of common code) in the public domain, and a new version of the server software was released as open source in November 1994: “CERN would retain the copyright to protect the software from appropriation as well as to secure attribution, but would grant to anyone the perpetual and irrevocable right to use and modify it, freely and at no cost” (Smith & Flückiger, n.d.).

Beyond software licensing, openness is a broad concept often used to characterise other aspects of the Web. In this regard, the term ‘Open Web’ highlights both the practices and the technical dimensions of the Web that make it operate as a global public resource “by and for all its users, not select gatekeepers or governments” (Surman, 2017). As a set of normative principles or values, it advocates for a Web that is accessible to as many people as possible an...
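A small sketch can make concrete how low the technical bar for independently-hosted publishing remains: the quote above notes that the web's bedrock architecture is unchanged, and indeed Python's standard library alone can serve a static site. Directory and file names here are arbitrary examples, and a real self-hosted site would sit behind a public web server rather than localhost:

```python
import functools
import http.server
import pathlib
import socketserver
import threading
import urllib.request

# Create a one-page static site in a local directory (hypothetical names).
site = pathlib.Path("my_site")
site.mkdir(exist_ok=True)
(site / "index.html").write_text("<h1>My independently-hosted page</h1>")

# Serve that directory on a free local port, as a self-hosted server would.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=str(site))
server = socketserver.TCPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page back over HTTP, exactly as a visitor's browser would.
port = server.server_address[1]
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
server.shutdown()
print(page)  # <h1>My independently-hosted page</h1>
```

The same static files could be uploaded to any rented server space, which is the "server space, usually obtained from a web hosting provider" route the definition describes.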
      PubDate: Tue, 26 Apr 2022 09:01:17 +000
  • Cypherpunk

    • Authors: André Ramiro; Ruy de Queiroz
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

      Definition

      Cypherpunk refers to social movements, individuals, institutions, technologies, and political actions that, with a decentralised approach, defend, support, offer, code, or rely on strong encryption systems in order to re-shape social, political, or economic asymmetries.

      Origins

      In the 1980s, the computer industry was becoming the provider of the main apparatus underpinning private interconnected management systems and, by extension, the United States government's administration. Beyond the optimisation of private and public services, sociopolitical concerns regarding privacy and data protection were already being addressed and gaining space among scholars and activists questioning the necessity of compulsory identification, unnecessary data collection and the formation of data centres, archives and dossiers about individuals (Lyon, 1994; Zuboff, 1988; Burnham, 1983). The chilling effect, which reduces the expressive potential of individuals, was potentially growing among civil society (Lyon, 1992).

      In parallel, despite the broadening of the computer industry and its need to provide secure hardware and software for the private sector, the restrictive administrative rules on domestic use and exportation of encryption (initially listed as a war munition) imposed an obsolete regulation, because continuing technological development required state-of-the-art security (Diffie & Landau, 2001). This distrust of data collection, plus the anachronistic regulation, resulted in the advocacy of encrypted technologies coming to symbolise, at once, a market necessity and a resistance against growing surveillance ecosystems.

      The latter was a central concern of a 1985 article, Security Without Identification: Transaction Systems to Make Big Brother Obsolete, by computer scientist and cryptographer David Chaum.
He dreamed of a transaction model in which, through a strong and reliable encryption system, privacy would be preserved. The premise was that:“[c]omputerization is robbing individuals of the ability to monitor and control the ways information about them is used. (...) The foundation is being laid for a dossier society, in which computers could be used to infer individuals’ life-styles, habits, whereabouts, and associations from data collected in ordinary consumer transactions” (Chaum, 1985).
      Therefore, for Chaum and for the subsequent cypherpunk movement, the conclusion was that it would be necessary to implement decentralised public-key encryption systems (Diffie & Hellman, 1976; Rivest, Shamir & Adleman, 1978) in order to disrupt this fast-approaching problem.

      In 1988, influenced by Chaum’s ideas and pushing the ideology forward, electronic engineer Timothy May, a former Intel employee, distributed flyers of a first draft of what would become the Crypto Anarchist Manifesto. The manifesto was officially published in 1992 (May, 1992). In that same year, May and Eric Hughes gathered a group of cryptographers, mathematicians, engineers, and hackers for meetings to discuss how encrypted communication systems could overcome state surveillance. According to Levy (2001), Jude Milhon, influenced by authors such as Neal Stephenson and William Gibson—known for cyberpunk novels with immersive technological scenarios and rebellious characters—baptised them “cypherpunks” (a word-play on cipher, the central code of an encryption system). The group then adopted the label.

      Although Tim May could be considered the most prolific cypherpunk ideologist near the origin of the movement, and close to anarchist beliefs, it is crucial to place him within a varied spectrum of political views among the movement’s founders. Eric Hughes (1993) published the iconic A Cypherpunk’s Manifesto, stating that “cypherpunks write code (...) deplore regulations on cryptography” and “are actively engaged in making the networks safer for privacy”. The publication was also a landmark in establishing the concept of “cypherpunk” at the time, and it explored the value of privacy within personal data dynamics (for example, anonymisation protocols) in expanded connected ecosystems. It then highlighted the centrality of encryption for society to achieve a reliable “social contract”.
John Gilmore (1991), in a paper called “Privacy, Technology, and the Open Society” presented at the First Conference on Computers, Freedom, and Privacy that year, predicted much of what would be explored by Eric Hughes two years later, tying emerging internet rights, with a focus on data protection, to the full deployment of strong encryption:

“What if we could build a society where the information was never collected? (...) That's the kind of society I want to build. I want a guarantee—with physics and mathematics, not with laws—that we can give ourselves things like real privacy of personal communications. Encryption strong enough that even the NSA can't break it” (Gilmore, 1991).
      After its inception, the term was further crystallised by the creation of the “Cypherpunk Mailing List”, a forum-like discussion space with nearly a thousand participants in the 1990s (Manne, 2011; Greenberg, 2012). The mailing list encompassed a range of people, from anarcho-capitalists to socialists, leftists to rightists, political scientists and lawyers to developers and cryptographers (Rid, 2016), making it nearly impossible to classify the cypherpunks...
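The public-key idea the cypherpunks rallied around (Diffie & Hellman, 1976) can be illustrated with a toy Diffie-Hellman key exchange. The parameters below are deliberately tiny and insecure, chosen only to show the mechanism; real deployments use 2048-bit groups or elliptic curves:

```python
import secrets

# Publicly agreed parameters (demo-sized; trivially breakable in practice).
p = 4294967291   # a prime modulus
g = 2            # a public generator

# Each party keeps a private exponent secret...
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1

# ...and publishes only g^secret mod p, which can travel over an open channel.
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value:
# (g^b)^a = (g^a)^b mod p, so both arrive at the same shared key.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)
assert alice_key == bob_key  # shared secret, never transmitted
```

The political significance for the movement was exactly this property: two parties who have never met can establish a private channel without trusting any central authority.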
      PubDate: Tue, 26 Apr 2022 08:52:42 +000
  • Non-user

    • Authors: Selwa Sweidan; Karlynne Ejercito
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

      Definition

      A “non-user,” as the name suggests, refers to an individual who does not use a given product or system. Critical work on non-use elaborates a range of applications for the term we consider here. The variations of non-use under discussion encompass both voluntary and involuntary cases of non-use.

      CONTEXT FOR NON-USER DISCOURSE

      What broadly comprises “non-user discourse” is derived from user discourse. Commentary about the “user” originated in systems design, which emerged in the United States and Europe as part of a wider effort to advance the development of military technologies. As computing systems evolved, so too did the “user” for whom these technologies were designed.

      Early data processing systems originally responded to the needs of information-intensive industries. User organisations in both public and private sectors oriented the design of information technologies to enhance the productive capacities of their respective operations (Yates, 1993). It is within the context of user organisations that innovation studies introduced the concept of “lead users” into user discourse. Research focused on single industries identified the “lead user” as an individual who proposes key innovations from outside the industry (Oudshoorn & Pinch, 2003, p. 541; von Hippel, 2007; Graham, 2006). What distinguishes the lead user from ordinary users is a set of skills that exceed the given functions of a particular device (von Hippel, 1976).

      As demand for micro-electronics and personal computers surged in the 1980s, “user-centred” design and “user experience” re-oriented the design of systems to accommodate individual consumers (Oudshoorn & Pinch, 2003).
With the convergence of information and communication technologies, models of human-computer interaction turned their attention from the single user tethered to a single device to multiple users distributed across large networks. In contrast to their predecessors, these products incorporated the “holistic study of users from the viewpoint of the user” rather than the system (Dervin & Nilan, 1986; Hartel, 2007, p. 2; White & McCain, 1998). Harnessing cognitive psychology to improve how systems were designed, the study of “user experience” deepened the existing view of users by taking into account the “emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours, and accomplishments” (ISO, 2009) that condition human-computer interaction (Rheinfrank, 1995).

Research on users in human-machine interaction, information science, and cognitive psychology (Cooper & Bowers, 1995; Kosara et al., 2003; von Hippel, 2005) has since provided a basis for critical work in the field of science, technology and society (STS). It is within this context that discourse on non-users takes shape.

VARIATIONS OF NON-USE

From the standpoint of HCI, non-users are a technical designation for “potential users” (Satchell & Dourish, 2012, p. 9). Implicit in HCI’s model of non-use are a set of assumptions that elicit much debate outside the field. Studies in STS identify a range of cases for non-use: resistance, rejection, exclusion, expulsion, lagging adoption, disenchantment, disenfranchisement, displacement and disinterest (Wyatt et al., 2002; Satchell & Dourish, 2009).

This spectrum of negative actions captures what makes non-use particularly difficult to define in positive terms. Because non-use is not observable in the way use is, its study presents a formidable challenge for how scholars approach the topic (Dourish, 2001, p. 56; Treem, 2014).
For the purposes of this glossary entry, we organise the different types of non-use into two primary categories. The first encompasses cases of voluntary non-use, while the second circumscribes examples of involuntary non-use.

VOLUNTARY NON-USE

Opting out of use is a singular action which belies a complex of subjective considerations and varies in relation to economic conditions and ideological commitments (Brubaker, Ananny et al., 2016). Insofar as voluntary non-use presumes a certain degree of individual choice, it refers to a set of economic conditions specific to market-based capitalism. Non-users who terminate their engagement with one company, for example, may opt into a platform belonging to a competitor. Scholarship on the attention economy (Crary, 2001) expands on the subjective dimensions intrinsic to the economic model of consumer choice. Such scholarship examines how individual attention is structured by the products and services which compete for it (Crawford, 2015; Davenport, 2001).

Organised boycotts present a collectivised form of voluntary non-use. In these cases, a set of political and ethical commitments lend a social form to the decisions of individual non-users who reject the products of a given entity. This non-use, as a form of consumer activism, is based on the voluntary rejection of a user technology (Wyatt et al., 2002). The duration and degree to which non-users participate in a boycott varies: some partially and temporarily suspend use, while others may completely and permanently terminate their use of a particular good or service.

Individual cases of non-use that are not principally motivated by political concerns have their origins in nineteenth-century bourgeois culture. With the expansion of cities and industrial processes came a rich body of literature that broadly envisioned different means of withdrawal from the increasingly oppressive cond...
      PubDate: Thu, 21 Apr 2022 11:17:22 +000
  • Intermediaries do matter: voluntary standards and the Right to Data

    • Authors: Matteo Nebbiai
      Abstract: Introduction

      In recent decades, the computing advancements of information and communication technologies have heavily impacted the economic system by expanding states’ and firms’ capacity to gather, store and transfer digitised data (Shapiro and Varian, 1998; Mayer-Schönberger and Cukier, 2013; European Commission, 2020d). Concurrently, cyberspace emerged as a new domain where traditional state sovereignty can be challenged (Lessig, 2007; Johnson and Post, 1995; Leiser et al., 2016). The creation, manipulation and circulation of personal data are crucial drivers of the expansion of the digital economy (Srnicek, 2017; Posner and Weyl, 2018; Zuboff, 2019; Cohen, 2019). However, the data economy regulatory arena still consists of a complex and dispersed network of public and private initiatives such as data marketplaces (Carnelley et al., 2013; Koutroumpis et al., 2017), data pools (Mattioli, 2017, pp. 147–148), standard certifications (Lachaud, 2018), personal information management systems (EDPS, 2016) and data collaboratives (Verhulst and Sangokoya, 2015).

      In the European Union, the urgency to regulate the circulation of personal data beyond data protection is increasingly visible in a series of initiatives from EU bodies. The European Commission (2020a, pp. 16-18) Data Strategy aims to create a framework for “EU-wide common, interoperable data spaces in strategic sectors”, while the Business-to-Government Data Sharing report by the European Commission (2020b, p. 42) calls for the establishment of “common standards aimed at ensuring interoperability across borders and sectors”. Concurrently, the Data Governance Act proposal (European Commission, 2020c) stresses the need for people, businesses, and the public sector to have control over personal data and introduces a regulatory framework for data intermediation services.
Nevertheless, despite the various announcements, the regulation of data ownership and access is still a work in progress in the EU, presenting an inconsistent and not fully operable framework (Duch-Brown, 2017; Drexl, 2018; Martens, 2020).

At the moment, one of the most relevant pieces of EU legislation regulating personal data access and circulation is the Right to Data Portability (hereafter RtDP), introduced by Article 20 of the General Data Protection Regulation (hereafter GDPR) (European Union, 2016). “Data portability” is the ability granted to an individual to port his or her personal data from one digital service to another (Article 29 Working Party, 2017, p. 63). As explained by Engels (2016, p. 4), “platforms have an incentive to collect, possess, process and utilise user data in an exclusive manner, since data is a significant asset in platform markets”. Depending on how it is applied, Article 20 could limit the exploitation of this type of competitive advantage (Lehtiniemi, 2017).

The research aims to describe the understudied variety of voluntary regulatory schemes that set data portability standards, whose provisions are additional and complementary to those of the GDPR. Significantly, these regulatory schemes are joined voluntarily by data controllers1 and thus affect their compliance with the RtDP. In particular, the research proposes a theoretical framework to study these schemes and investigates their presence in the EU. These goals are addressed as follows.

The first section presents a review of GDPR Article 20, explaining why its formulation creates some “grey areas” that leave data controllers, data protection authorities and courts many choices concerning the application of the RtDP.
It is shown that, instead of creating internal procedures, some data controllers decide to delegate such choices to voluntary regulatory schemes.

The second section proposes a theoretical framework to describe such schemes, framing them as regulatory standard-setting (RSS) schemes set by actors acting as regulatory intermediaries. Firstly, Abbott and Snidal (2009b, 2010) define “regulatory standard-setting” (RSS) schemes as voluntary standards of behaviour set by private, public, or non-governmental actors. Secondly, according to the RIT (regulator-intermediary-target) model developed by Abbott, Levi-Faur and Snidal (2017, p. 26), the actors that possess the “authority to make, interpret, and adapt rules” issued by another regulator can be defined as “regulatory intermediaries”.

The third section analyses the regulatory standard-setting schemes operating in the EU that act as intermediaries in data controllers’ application of the RtDP. Firstly, the study surveys the RSS schemes implementing data portability that operated in EU territory between 2000 and 2020. Secondly, the study employs the Abbott and Snidal (2009a) Governance Triangle to highlight whether such RSS schemes are governed by private, public or non-governmental actors. Finally, the conclusions propose further areas of research arising from the empirical findings.

This work contributes to the law and political science literature by examining the impact of non-state forms of regulation on the application of the EU data protection framework. In other words, this paper assesses whether data portability policies are affected by the dynamics described by the literature on “private regulation” (Graz, 2012; Kobrin, 2002; Cafaggi and Renda, 2012), “decentred regulation” (Black, 2001), “Transnational Private Regulation” (Bar...
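Article 20 requires data to be handed over in a "structured, commonly used and machine-readable format", and much of the standard-setting the abstract discusses concerns what that format should be. A hypothetical sketch of what a minimal machine-readable export might look like (field names are illustrative, not drawn from the GDPR or any real scheme):

```python
import json

def export_user_data(user_record):
    """Serialise a user's data in a structured, commonly used format (JSON).

    Illustrative only: real portability schemes additionally standardise
    schemas, authentication, and direct controller-to-controller transfer.
    """
    portable = {
        "schema_version": "1.0",          # hypothetical schema label
        "subject": user_record["name"],
        "data": user_record["activity"],
    }
    return json.dumps(portable, indent=2)

record = {
    "name": "example-user",
    "activity": [{"posted": "2022-04-01", "text": "hello"}],
}
blob = export_user_data(record)
reimported = json.loads(blob)  # a competing service could ingest this
print(reimported["subject"])   # example-user
```

The "grey area" the first section describes is visible even here: Article 20 mandates machine readability but not any particular schema, so interoperability between services depends on voluntary agreement over fields like the ones sketched above.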
      PubDate: Tue, 12 Apr 2022 12:29:17 +000
  • Digitally-disadvantaged languages

    • Authors: Isabelle A. Zaugg; Anushah Hossain; Brendan Molloy
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

      DEFINITION

      Digitally-disadvantaged languages face multiple inequities in the digital sphere, including gaps in digital support that obstruct access for speakers, poorly-designed digital tools that negatively affect the integrity of languages and writing systems, and unique vulnerabilities to surveillance harms for speaker communities. This term captures the acutely uneven digital playing field for speakers of the world’s 7,000+ languages.

      ORIGIN & EVOLUTION OF THE TERM

      The term originates with Mark Davis, president and co-founder of the Unicode Consortium, a nonprofit that maintains and publishes the Unicode Standard.1 In 2015, Davis said, “The vast majority of the world’s living languages, close to 98 percent, are ‘digitally disadvantaged’—meaning they are not supported on the most popular devices, operating systems, browsers and mobile applications” (Unicode, 2015, n.p.). Computational linguist András Kornai (2013) similarly estimates that at most 5% of the 7,000+ languages in use today will achieve “digital vitality”, while the other 95% face “digital extinction”. Gaps in language access are one facet of the digital divide (Zaugg, 2020).

      Critical digital studies scholar and co-author Isabelle Zaugg utilises the term digitally-disadvantaged languages in her work on language justice in the digital sphere (2017; 2019a; 2019b; 2020; forthcoming).
Zaugg (forthcoming) proposes that digitally-disadvantaged language communities face three primary challenges: 1) gaps in equitable access; 2) digital tools that negatively impact the integrity of their languages, scripts and writing systems,2 and knowledge systems; and 3) vulnerability to harm through digital surveillance and under-moderation of language content.

The term digitally-disadvantaged languages overlaps with and extends adjacent terms used in geopolitics and computational linguistics, i.e., natural language processing (NLP). While the category of digitally-disadvantaged languages includes many if not all minoritised languages, Indigenous languages, oral languages, signed languages, and endangered languages, it also includes many national and widely-spoken languages that enjoy robust intergenerational transmission.3 There is no sharp line that delineates whether a language is digitally-disadvantaged. Rather, the term captures a relative degree of disadvantage as compared to the handful of languages that enjoy the most comprehensive digital support and wider political advantages. That said, there are stark differences between the levels of support for languages such as English, Chinese, Spanish, and Arabic and even widely-spoken national and regional languages such as Amharic, Bulgarian, Tamil, Swahili, or Cebuano. However, being digitally-disadvantaged is not a static state; it is possible for a language to “digitally ascend” (Kornai, 2013) through wide-reaching efforts to create digital support for the language and foster digital use among speakers. Cherokee, Amharic, Manding languages written in N’Ko, Fulani written in Adlam, and Sámi are a few languages whose digital ascent has been hastened by concerted advocacy efforts.

The term also overlaps with and contrasts against low-resource or under-resourced languages, NLP terms that refer to languages with sparse data available for analysis.
A language may be digitally-disadvantaged in part because digital corpora are unavailable to develop machine translation and search functions. Digital corpora often do not exist due to a lack of basic digital support, like fonts and keyboards, that would allow speakers to develop online content—a vicious cycle. By focusing on resource deficits, NLP terms shift focus away from how power has shaped the techno-social imbalances that rendered the vast majority of languages low-resource in the first place.

In contrast, the term digitally-disadvantaged languages captures how languages’ digital marginalisation reflects how wider linguistic power dynamics map onto the digital sphere. The fact that the earliest digital technologies...
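As a small illustration of the most basic layer of digital support discussed above, the sketch below uses Python's standard library to confirm that characters from three scripts mentioned in this entry are assigned in the Unicode Standard — the prerequisite for fonts, keyboards, and corpora. The sample code points are our own illustrative choices, not an exhaustive coverage test.

```python
import unicodedata

# Sample code points from scripts whose digital ascent the entry mentions:
# N'Ko (block U+07C0..U+07FF), Adlam (U+1E900..U+1E95F), Cherokee (U+13A0..).
samples = {
    "N'Ko": "\u07CA",        # NKO LETTER A
    "Adlam": "\U0001E900",   # ADLAM CAPITAL LETTER ALIF
    "Cherokee": "\u13A0",    # CHEROKEE LETTER A
}

for script, ch in samples.items():
    # unicodedata.name() raises ValueError for unassigned code points, so a
    # successful lookup confirms the character exists in the Unicode Standard
    # (font rendering and keyboard support are separate, further hurdles).
    try:
        print(f"{script}: U+{ord(ch):04X} {unicodedata.name(ch)}")
    except ValueError:
        print(f"{script}: U+{ord(ch):04X} is unassigned")
```

Encoding is only the first step: a code point can be assigned yet remain unusable in practice if devices lack fonts or input methods for it.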
      PubDate: Sun, 10 Apr 2022 22:33:43 +000
  • Non-fungible tokens

    • Authors: Florian Idelberger; Péter Mezei
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

Blockchain-based NFTs (non-fungible tokens) are uniquely identifiable digital representations of physical or digital items. Usually, the tokens are indivisible into smaller units. NFTs represent structured metadata referring to physical or digital objects. The tokens act as a separate identifier and are often not tied to the objects themselves. Their proponents claim they further the interoperable commercialisation of digital or physical goods.

Origin and evolution

Already back in 2012-2013, hashes of files or other data were incorporated into the Bitcoin blockchain to prove existence or authenticity from a specific point in time (de Beauchesne, 2021). This development was built upon to create so-called ‘Colored Coins’, tokens that are uniquely identified by adding metadata to Bitcoin transactions, and Namecoin, a separate blockchain that deploys tokens for registering domain names in order to establish an alternative, decentralised top-level domain name system (Namecoin, 2022). A further experiment was Counterparty, which featured expanded capabilities for more general-purpose applications of NFTs on the Bitcoin blockchain and the first blockchain-based trading cards (, 2021). With the launch of the Ethereum blockchain, these experiments picked up steam and led to tokens functioning as collectables depicting digital pixel art, such as CryptoPunks in 2017.

The details of most current non-fungible tokens (NFTs) are described in a technical standard called ERC-721 (ERC-721 Non-Fungible Token Standard, 2018). This standard describes the required metadata of an NFT and the executable functions the underlying smart contract has to support in order to work with existing infrastructure such as trading websites and other interfaces.
The standard refers to the Ethereum blockchain, the most popular one as of writing, but many other implementations are based on the Ethereum standard. ERC-721 is based on an Ethereum Improvement Proposal (EIP) and was finalised in 2018, shortly after CryptoKitties (CryptoKitties, 2021), a game to collect and breed digital cats, first became popular in 2017.

From 2018 on, projects and companies expanded even more and diversified their operations. NFTs started reaching fine art market prices, with Beeple’s ‘Everydays: The First 5000 Days’ selling for US$69 million (Christie’s, 2021). Shortly thereafter, NFTs diversified further with the minting of, e.g., tweets (Howcroft, 2021), newspaper covers (The Economist, 2021) and even law review articles (Newsham, 2021). Fuelled by venture capital, cryptocurrency investments and hype, marketplaces and the surrounding infrastructure expanded massively (Mattei, 2021). At the end of 2021, a developer tokenised ‘Cryptogotchis’, the most expensive Tamagotchi clone ever (Cryptogotchi Home, 2021). As a result of this expansion, songs, physical objects, academic papers, and much more have been put into NFTs. Sometimes these were just experiments, some were founders or investors looking for their own niche, and yet others claim this process of tokenisation will bring about a new property system.

As the evolution continued, the art world has been drawn into collaborations between established art world institutions like Art Basel and technology companies. These collaborations are partly driven by profit motives, with cryptocurrency proponents promising improved artist remuneration, disintermediation and easier compliance with upcoming anti-money-laundering regulations (Brown, 2021; Ryan, 2021).

Creation

Minting is the act of creating an NFT. In this process, a user creates a new set of NFT data by sending a transaction to an underlying smart contract that supports NFTs, as described in ERC-721.
It is assigned a blockchain contract address and a tokenId, which in combination form a globally unique identifier. Additional metadata can optionally be added. Crucially, the tokenised work is not necessary for minting, and not even a hash of the work has to be stored in the NFT (Guadamuz, 2021c; Bodó et al., forthcoming, 2022).

There are three main types of NFTs, based on how they relate to the digital or physical asset they represent. First, for certain NFTs, the work itself is uploaded to the blockchain; this, for example, can happen with code-generated art or vector art. This type of NFT is relatively rare due to the high costs of storing data on the blockchain. Second, other NFTs incorporate ownership rights, either by specifying them in the NFT’s metadata or via a reference to external terms and conditions (such as on Mintable); in both cases, ownership can be transferred via blockchain transactions (Foo, 2021). Finally, the most used type of NFT does not confer any rights, or favours a commons-based licence such as CC0, which likewise does not confer rights on the token owner, as rights are granted publicly (Guadamuz, 2021b).

Issues

NFTs raise several issues, the most relevant of which are the uncertainty about the legal rights and economic benefits they confer and the environmental impact of the underlying blockchain technology. The ease of creating ‘digital editions’ of either art or collectables in an open and economically liquid network made for value transfer has partially opened up new revenue streams for artists, museums (Willis, 2021) and companies. Some proponents also argue that “NFTs might be able to democratise art” (Gibson, 2021), as they allow a broad spectrum of people to disseminate their born-digital art and to be remunerated for such dissemination. Contrary to claims from NFT projects, however, there is currently no evidence that this improves artists’ struggles to earn a living (with some ...
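The identification scheme just described — a contract address plus a sequential tokenId forming a globally unique identifier, with the work itself typically kept off-chain — can be sketched as a toy Python stand-in. The class, addresses, and URI below are invented for illustration; real NFTs live in on-chain smart contracts, not Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class NFTContract:
    """Toy model of an ERC-721-style contract (no real blockchain involved)."""
    address: str                              # contract's blockchain address
    _tokens: dict = field(default_factory=dict)
    _next_id: int = 0

    def mint(self, owner: str, metadata_uri: str) -> int:
        """'Minting': record a new token under the next sequential tokenId."""
        token_id = self._next_id
        self._next_id += 1
        # Only a URI pointing at the work is stored, not the work itself --
        # mirroring the common off-chain-storage pattern noted above.
        self._tokens[token_id] = {"owner": owner, "tokenURI": metadata_uri}
        return token_id

    def global_id(self, token_id: int) -> str:
        # Contract address + tokenId combine into the globally unique ID.
        return f"{self.address}/{token_id}"

contract = NFTContract(address="0xDEADBEEF")          # hypothetical address
tid = contract.mint(owner="0xalice", metadata_uri="ipfs://example-cid/art.json")
print(contract.global_id(tid))                        # prints "0xDEADBEEF/0"
```

The sketch also makes the entry's legal point visible: nothing about the referenced work, or any rights in it, is inherent in the token record itself.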
      PubDate: Sun, 10 Apr 2022 22:25:53 +000
  • Permissionlessness

    • Authors: Kelsie Nabben; Michael Zargham
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

A technosocial system is deemed permissionless if it is possible to participate in the use, development, and governance of that system or infrastructure without requiring permission from an authority, by adhering to publicly stated procedures.

Origin

The term ‘permission’ comes from the Latin word ‘permissio’—the act of permitting, in granting formal consent or authorisation (American Heritage Dictionary, 2000). In law, “permission” refers to the authority to act, as expressed or implied (Bouvier, 1856). The antithesis, ‘permissionless’, means without permission: the ability to act without requiring another to allow that action. The notion of “permissionlessness” in relation to distributed technologies is at once a technical attribute, an ideology, and a cultural value that emerged with the early internet. In a technical context, permissionlessness refers to the open technical specifications in the network layer of the underlying protocols of the internet, which avoid the cost of “permissioning” when transmitting data packets. The higher-level protocols for displaying websites also adhered to open specifications (“Hypertext Transfer Protocol”, or HTTP). This innovation means that anyone is free to read, write, and share digital information across interactive links without needing to seek permission from a central authority or gatekeeper, whereas prior to this, people were limited to local intranets on private networks. A culture of open source software development, whereby anyone can verify or modify the underlying codebase, helped enable permissionless protocols and innovation (Raymond, 2000). The technical attributes of permissionless systems interplay with ideological values around freedom and anti-authoritarianism.
For example, the “Cypherpunk” contributors to the technical developments and political ideology of decentralised digital infrastructure state: “We're free individuals, able to say what we wish, meet in secret meetings without the permission of the government, and learn anything we wish to” (May, 1992).

In a sociological context, permissionlessness is also a cultural value that emerged in early internet culture. “Permissionless innovation” is a counterculture value from the 1960s and 1970s about no central ownership or control, and not having to ask anyone for permission (Naughton, 2014; Web Foundation, 2017). Computer scientist and credited inventor of the World Wide Web Tim Berners-Lee states that the internet is a force for free and open creativity outside of walled gardens: “It was all based on there being no central authority that you had to go to to ask permission” (Brooker, 2018). Digital networked infrastructures can be described as both social and technical, as “infrastructures for communication, cooperation and common value creation…allow for permission-less interlinking of human co-operators and their technological aids” (Kostakis and Bauwens, 2014, 55). An ideological commitment to free access to decentralised technologies developed in parallel to these technical capabilities, with some arguing that “true distributed networks are permission-less” and “not dependent on powerful obligatory hubs” (Bauwens, 2009). ‘Permissionlessness’ has come to broadly refer to anyone being able to use the infrastructure as common property, with no selection process for participation. These technical and cultural values were strongly amplified by adherents of influential technology communities, such as the free-software and open-source software movements (Stallman, 2002; Raymond, 2000). In these movements, the source code for computer programmes is available for users to modify for their own use.
Some principles of “permissionlessness” have also been defended against political and regulatory institutions by organisations such as the Electronic Frontier Foundation (EFF), which was formed in 1990 to define and protect internet-based civil liberties, from defending open access to “Pretty Good Privacy” (PGP) digital encryption to rallying against bans on cryptocurrencies (Electronic Frontier Foundation, 2021).

Evolution

Permissionless protocols, including the “Transmission Control Protocol and the Internet Protocol” (TCP/IP) and the “Simple Mail Transfer Protocol” (SMTP), have both required and enabled new forms of social organisation and governance. An important evolution in permissionless distributed technologies is the establishment and continuous development of standards to govern permissionless systems and allow them to scale. Although the foundation of permissionless systems is free access for anyone, such systems still need to be governed at higher levels of the technology stack to manage the unintended, negative consequences of free access. For example, the ‘World Wide Web Consortium’ (W3C), directed by Tim Berners-Lee, was founded in 1994 to develop open standards to ensure the long-term growth of the Web (W3C, 2021). These consensus-based standards offer recommendations to guide the technical specifications of how the system architecture should be developed. Another example whereby permissionless systems still require governance mechanisms to function in practice is the Simple Mail Transfer Protocol (SMTP), the protocol that facilitates email. A negative externality of permissionless email is the ability for anyone to freely send unsolicited junk mail, or ‘spam’ (Brunton, 2013). Spam is an example of the unint...
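SMTP makes the permissionless design concrete: the base protocol (RFC 5321) contains no gatekeeping step before a sender names itself and its recipients. The toy exchange below replays the canonical command sequence against a stub responder; the stub is invented for illustration (a real mail transfer agent would add spam and authentication layers on top) and elides the message body.

```python
def stub_smtp_server(command: str) -> str:
    """Accept every well-formed base-protocol command: no identity check,
    no central authority to ask. This is what makes spam cheap to send."""
    verb = command.split(" ", 1)[0].rstrip(":").upper()
    replies = {
        "HELO": "250 Hello",
        "MAIL": "250 OK",        # any claimed sender address is accepted
        "RCPT": "250 OK",        # any recipient is accepted
        "DATA": "354 Send body",
        "QUIT": "221 Bye",
    }
    return replies.get(verb, "500 Unrecognised command")

session = [
    "HELO example.org",
    "MAIL FROM:<anyone@anywhere>",
    "RCPT TO:<you@example.com>",
    "DATA",
    "QUIT",
]
for cmd in session:
    print(cmd, "->", stub_smtp_server(cmd))
```

Countermeasures such as spam filtering and sender authentication are exactly the higher-layer governance mechanisms the entry describes: they restrict abuse without changing the permissionless base protocol.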
      PubDate: Sun, 10 Apr 2022 22:07:25 +000
  • Personal Information Management Systems

    • Authors: Heleen Janssen; Jatinder Singh
      Abstract: This article belongs to the Glossary of decentralised technosocial systems, a special section of Internet Policy Review.

Definition

Personal Information Management Systems (‘PIMS’) provide technology-backed mechanisms for individuals to mediate, monitor and control how their data is accessed, used or shared. Their purported goal is to empower individuals with regards to their personal data (Abiteboul et al., 2015; EDPS, 2016; IAPP, 2019; Royal Society, 2019; Janssen et al., 2021). Given the discourse around how data is currently being extracted and used, the concept is growing in prominence in the research and commercial space (Janssen et al., 2020), as well as gaining policy attention (European Commission, 2020).

Context

There are growing concerns regarding the opacity of how data is being processed and (mis)used, where individuals typically lack meaningful transparency, visibility and control over what, how, why and by whom their data are captured, analysed, transferred, stored, or otherwise processed and used (Zuboff, 2015; Lehtiniemi, 2017; Berners-Lee, 2018). In response, and in line with the growing public discourse regarding data-related issues, PIMS as a concept generally aim to better inform and empower users with regards to the processing of their data (Royal Society, 2019). PIMS are a form of privacy-enhancing technology (PET), representing an instance of an approach to privacy self-management, whereby users work to manage their own privacy interests (Solove, 2013; Solove, 2020).

Key functionality

PIMS typically involve an ecosystem, which generally entails a platform providing the PIMS infrastructure. The platform provides users with components for handling their personal data. Within this ecosystem, third parties seek to process user data (Janssen et al., 2020).
PIMS employ technical, legal and organisational measures that enable users to manage and control their data, and to ensure and validate that the behaviours of third parties accord with user and platform requirements. Though the specifics vary by offering, measures often include (to varying degrees) the ability to determine:

(i) the data collected, captured, stored, or otherwise available for processing;
(ii) the computation, analytics or other processing performed over that data; as well as providing
(iii) oversight measures to validate, review and audit what happens to their data.

PIMS often enable decentralised data processing, where third parties that wish to process user data do not directly access a user’s data (e.g. where user data would be transferred to the third party). Instead, such mechanisms enable the third party’s desired computation, analytics, or other processing to be brought to the user’s data (typically residing within a physical or virtual user-centric PIMS device), with only the results of that processing returned to the third party (Janssen et al., 2021). This (as with other forms of processing) occurs in line with a user’s agreement, and only over certain data, as determined by the user. PIMS may be supported by other novel technologies, such as distributed ledgers (Zichichi et al., 2020; see separate entry regarding DLTs).

Origins and coexisting uses/meanings

The term PIMS is not novel; older references to the term can be found, for instance, in Barreau, 1995; Jones & Thomas, 1997; Bergman et al., 2008. Nowadays, the term ‘PIMS’ broadly refers to a class of technology that provides users with means for managing their data vis-à-vis those wishing to process it.
Note that PIMS is an ‘umbrella term’, and a range of related terms is in use, including: personal data stores (World Economic Forum, 2013; De Montjoye et al., 2014; OpenPDS, 2017; Crabtree et al., 2018; Royal Society, 2019; Janssen et al., 2021); personal data vaults (Schluss, n.d.); personal information management services (ControlShift, 2014); or personal data spaces (European Commission, 2020). The concept also bears a relationship with some forms of data intermediary (see separate entry regarding “Data intermediary”).

PIMS have been proposed by actors in civil society (MyData movement, 2015); in academia, where offerings such as OpenPDS or Databox were developed; in the private sector (examples include CozyCloud, Mydex, or CitizenMe); or by actors in research environments with the PIMS developing into a commercial offering (Dataswift/Hub of All Things, or Solid/Inrupt, the latter being developed by Sir Tim Berners-Lee). PIMS are increasingly gaining attention from policymakers, who are currently considering mechanisms for regulating and advancing data intermediation services in general, of which PIMS are one example (e.g. European Commission Data Strategy, 2020; European Commission proposal for a Data Governance Act, 2020; German Bundestag bill for Consent Management Services, 2021; Centre for Data Ethics and Innovation (an expert body of the UK government’s Department for Digital, Culture, Media and Sport), 2021).

Debate

PIMS generally adopt an approach that is firmly grounded in the logic of privacy s...
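The "bring the computation to the data" pattern described above can be sketched in a few lines of Python. The device class, field names, and consent policy below are invented for illustration and do not correspond to any particular PIMS offering; the point is only that raw records stay on the user's device and the third party receives just the computed result, and only for fields the user has permitted.

```python
from statistics import mean

class PIMSDevice:
    """Toy user-centric PIMS device: data stays local, results leave."""

    def __init__(self, data, permitted_fields):
        self._data = data                        # raw records stay on-device
        self._permitted = set(permitted_fields)  # user's consent policy

    def run(self, field, computation):
        """Execute a third party's computation locally over one field."""
        if field not in self._permitted:
            # Processing outside the user's agreement is refused outright.
            raise PermissionError(f"user has not consented to '{field}'")
        values = [record[field] for record in self._data]
        return computation(values)               # only the result is returned

device = PIMSDevice(
    data=[{"steps": 8000, "hr": 62}, {"steps": 11000, "hr": 58}],
    permitted_fields={"steps"},
)
# The third party learns the average step count (9500), never the records;
# asking for "hr" would raise PermissionError.
print(device.run("steps", mean))
```

Production systems add the oversight measures listed under (iii) above — logging, review, and audit of what each third party actually ran — on top of this basic access gate.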
      PubDate: Sun, 10 Apr 2022 22:00:51 +000
  • Systematic review: YouTube recommendations and problematic content

    • Authors: Muhsin Yesilada; Stephan Lewandowsky
      Abstract:

Introduction

Social media has many positive aspects, such as creating online friendships and communities (Allen et al., 2014). However, there has also been much concern about how social media can serve as a vector for problematic content, including misinformation and conspiracy theories, that may polarise or radicalise audiences with adverse consequences for society (Allcott & Gentzkow, 2017; Allcott & Gentzkow, 2018). A recent study showed that the likelihood of users aged 15-30 encountering online hate content relating to gender, ethnicity, political views, terrorism, and religion tripled between 2013 and 2015 (Kaakinen, Oksanen, & Räsänen, 2018). Other studies investigating the same age group showed that 37% of participants reported seeing extremist content on social media platforms like YouTube (Nienierza et al., 2021). This research indicates that problematic content might be increasingly accessible. Some researchers have focused on the accessibility of problematic content on YouTube, and the role its recommender system plays in facilitating pathways to extremist content (O’Callaghan et al., 2013; O’Callaghan et al., 2015; Ribeiro et al., 2020). The YouTube recommender system provides users with further viewing options based on their search and personal viewing history, along with other information about the user. Since YouTube is a massive social media platform, the notion that its algorithms facilitate pathways to extremist content is concerning. These findings have raised questions about the balance between users actively seeking out problematic content and recommender systems leading individuals towards content that they might not otherwise have encountered.

At first glance, it may appear reasonable to assume that individuals deliberately seek content, such as videos on YouTube, that they are interested in and that is consistent with their attitudes (Knobloch, Westerwick, & Meng, 2009).
On that view, YouTube users seek out content they are interested in, and content creators supply the content to fulfil a need (Munger & Phillips, 2019). However, this simplistic view of consumer choice ignores the fact that YouTube, by default, automatically plays further videos that its recommender system deems to be of interest to the user. Even when the “autoplay” feature is turned off (itself a non-trivial undertaking), users are presented with suggested videos in a sidebar. The recommender system is integrated into YouTube’s primary structure, which organises content into videos and channels. A channel is a unique user’s space on YouTube, where other users can find their publicly available videos. YouTube provides channel recommendations as well as video recommendations. The YouTube recommender algorithms utilise the user’s activity and the video producers’ interconnectedness to suggest, or automatically play, videos to users (Davidson et al., 2010, p. 293; Knuth, 1997). In consequence, the recommender system—as opposed to unguided user choice—is responsible for 30% of YouTube video views (Clement & Davies, 2021).

The YouTube recommender system could direct users’ attention to video content that they otherwise might not have selected (Courtois & Timmermans, 2018; Pariser, 2011). For example, a recent study showed that users could reach conspiratorial content via the recommender system from videos about fitness, firearms, gurus, and even small houses (Alfano et al., 2020). Other studies have identified “filter bubbles” within the YouTube video network (O’Callaghan et al., 2013; O’Callaghan et al., 2015; Röchert, Weitzel, & Ross, 2020). A filter bubble refers to the algorithmically-curated presentation of homogeneous content to users; this content is generally in line with the user’s interests, ideas, and beliefs (Pariser, 2011). However, the idea of a filter bubble has also been challenged (Bruns, 2019).
Bruns argues that there is little empirical evidence for filter bubbles or the associated concept of political ‘echo chambers’, pointing to the observation that social media users tend to be exposed to a very centrist media diet that is, if anything, more diversified than that of non-users.

Concerns about the YouTube recommender system and filter bubbles are also reflected in case studies highlighting the potential negative consequences of the algorithms, such as acts of violence that were ostensibly inspired or triggered by videos with conspiratorial content. For example, a 26-year-old man from Seattle called Buckley Wolfe killed his brother with a sword because he believed that his brother was a shape-shifting alien reptile (Green, 2019). A journalist investigated this incident by analysing Buckley’s “liking” behaviour on YouTube. Buckley initially liked videos predominantly about martial arts and fitness. However, his “liking” behaviour eventually shifted towards alt-lite (a loosely-defined right-wing political movement that distances itself from ethnic nationalism but opposes political correctness, feminism, and Islam) content, conspiracy theories, and ultimately alt-right (far-right and white nationalist) content (View, 2019). These stories, although concerning, are anecdotal and do not constitute evidence that the YouTube recommender system facilitates pathways to problematic content. Thus, assessing the evidence on the YouTube recommender system and pathways to problematic content could shed light on the extent of the issue.

There is a growing body of literature that aims to investigate the causal effects of social media and internet access on anti-democratic views and behaviour. For example, a study conducted in Germany and Italy showed that individuals with greater access to broadband coverage at the municipality level were more likely to vote for populist parties (Schaub & Morisi, 2020)...
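Methodologically, several of the studies discussed above audit recommendation pathways by simulating users who repeatedly follow suggested videos (e.g. the random-walk crawls of O'Callaghan et al.). The sketch below mimics that idea on a hand-built toy graph; the nodes, edges, and labels are invented for illustration, whereas real audits crawl YouTube's live "up next" lists.

```python
import random

# Toy recommendation graph: each node maps to the videos suggested from it.
# Topic labels loosely echo the pathway reported by Alfano et al. (2020);
# the structure and probabilities here are entirely made up.
recommendations = {
    "fitness":      ["fitness", "martial_arts", "guru"],
    "martial_arts": ["fitness", "guru", "conspiracy"],
    "guru":         ["guru", "conspiracy", "conspiracy"],
    "conspiracy":   ["conspiracy", "conspiracy", "guru"],
}

def walk(start, steps, rng):
    """Simulate a user clicking a uniformly random recommendation per step."""
    node = start
    for _ in range(steps):
        node = rng.choice(recommendations[node])
    return node

# Estimate how often a short session starting from benign content ends on
# problematic content -- the quantity pathway audits try to measure.
rng = random.Random(42)  # fixed seed so the estimate is reproducible
trials = 10_000
hits = sum(walk("fitness", 5, rng) == "conspiracy" for _ in range(trials))
print(f"{hits / trials:.0%} of 5-click walks from 'fitness' end on conspiracy")
```

On a real platform the transition structure is personalised and time-varying, which is precisely why the review distinguishes between crawl-based audits and studies of logged user behaviour.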
      PubDate: Thu, 31 Mar 2022 21:43:17 +000