
The anatomy of technology regulation

Feb 09, 2022 - Last updated at Feb 09, 2022

By Nicholas Davis, Mark Esposito and Landry Signe

PHOENIX — The 2020s will undoubtedly be characterised by new technology regulation. But while today’s technologies are global, the rules governing their development and use are not.

The resulting policy fragmentation is often attributed to differing values and political ideologies within key jurisdictions: the United States, the European Union and China. In this narrative, the US prefers digital laissez-faire; Europe opts for digital big-state socialism; and China pursues a politically motivated strategy of restricting some technologies and scaling up others to maintain social control.

But while there is evidence to support this narrative, such broad characterisations fail to explain the stark regulatory differences between countries that fall into the same ideological category. For example, consider Australia, New Zealand, Canada, the US and the United Kingdom. These Anglophone liberal democracies with colonial histories have strong ties and belong to a longstanding security and intelligence-sharing pact (the Five Eyes). But each has a unique approach to technology policy.

While Australia is charting its own course on everything from encryption laws and extremist content to power imbalances between digital platforms and older news media organisations, New Zealand is building international partnerships on many of the same issues, such as through the Christchurch Call initiative. Meanwhile, Canada is doing more listening than acting; its most recent attempt at online legislation would subject Internet-era streaming companies to the same regulations as traditional broadcasters. The US has placed technology embargoes on China, but it has dithered on domestic regulation, even in the face of mounting abuses by Big Tech firms. And the UK is realigning with its ex-siblings in the EU.

As these examples show, several factors beyond ideology shape what we think of as the technology “policy space”. Each jurisdiction has its own limited set of options for guiding how new and existing technologies are developed and deployed, and the effects they produce. And these options, in turn, are circumscribed by at least three key barriers.

The first is set by a jurisdiction’s constitutional decision-making authority, legal precedents, and pre-existing agreements with other states or bodies. These factors create a “hard” boundary of legal limits that policymakers will find difficult — though not necessarily impossible — to circumvent. And a related, slightly softer boundary lies in conflicting policy priorities within the same jurisdiction, particularly where national security “red lines” are concerned.

The second barrier is a lack of political cohesion, public support, and consensus among key stakeholders, or disagreements between branches of government. Such limits are particularly common in systems where the legislative and executive branches can be controlled by different parties, or where different parties control each of two legislative bodies. In the absence of common ground, little can be done until the mix of decision makers shifts to favour one group or another. And a softer version of this limit can occur in democracies if the group in power eschews decisive action because it is worried about an upcoming election.

The third barrier is a government’s lack of capacity for effective policy implementation and enforcement. The most common reasons for this are budget constraints, shortages of qualified personnel, a targeted sector’s inability to bear the new compliance burden, or inadequate infrastructure.

While these potential barriers tend to exclude (or at least render ineffective) many potential policy proposals, technology policymaking is also shaped — and made more uncertain — by a confluence of incentives and trade-offs that operate at multiple levels within and across government. Here, we see five primary factors that can help to explain policy divergences among similar countries.

The first stems from a policy’s impact on and relationship to state power. Regulatory strategies tend either to centralise government power or devolve power to other bodies and groups. Centralisation is often achieved by increasing revenues and tightening control over the private sector and the public, whereas devolution usually involves legislating industry standards or deregulating a sector entirely. The ability to alter this balance of power is an incentive in itself, because it entails a redistribution of resources among stakeholders — not least state bureaucracies, on one hand, and business lobbies, on the other.

The second factor is a policy’s likely impact on national output and productivity. Technology policies often seek to increase national economic power as part of a government’s broader development strategy, which itself can involve either protectionism or policies to open up markets.

Policy decisions thus can be motivated either by a desire to bolster domestic activity (to help a country’s producers or workers) or by a desire to promote activity internationally (to support domestic exporters). Given that technology policies often require compliance systems or create liability regimes that can deter business creation or foreign investment, economic impact also must be factored into decision makers’ calculus.

Then there is national security, which can be affected by a wide range of technology policies. While laws authorising security services to override encryption can enhance these agencies’ capacity to address foreign and domestic threats, laws or judicial rulings upholding free speech and due process can make their jobs more complicated.

The fourth factor is a policy’s likely impact on consumer rights and protections. Technology policies often seek to ensure that new technologies expand choice, lower prices and support competitive markets. But consumer-protection policies tend to be unevenly enforced, owing to tensions between national and local powers, uncertainties about what consumers really prefer, and the difficulty of assessing problems like market concentration (particularly when goods or services appear “free” to end-users). For example, while some people are happy for tech platforms to track their behaviour in order to improve services, others prefer more privacy.

Finally, there is a policy’s likely effect on the decision maker’s own power. Policymakers will naturally be biased towards measures that could enhance their own positions, both current and future; but, by the same token, they will quickly abandon policies that prove to be unpopular with key stakeholders.

Taken together, these limits and incentives provide insights into the differences in technology policymaking across countries that otherwise appear similar. With these factors in mind, we can develop a more nuanced understanding of where technology policy is heading in what is sure to be a decisive decade.


Nicholas Davis is head of Society and Innovation at the World Economic Forum. Mark Esposito, co-founder of Nexus FrontierTech, co-directs the 4IR Research Initiative at the Thunderbird School of Global Management at Arizona State University. He is a policy associate at the University College London Institute for Innovation and Public Purpose and the co-author, most recently, of “The AI Republic: Building the Nexus Between Humans and Intelligent Automation” (Lioncrest Publishing, 2019). Landry Signé is a professor and managing director at Thunderbird School of Global Management and a senior fellow at the Brookings Institution. Copyright: Project Syndicate, 2022. 

www.project-syndicate.org
