
Domestic and International Regulations on Artificial Intelligence

AI needs regulation

“Technological progress is the main driver of growth of GDP per capita, allowing output to increase faster than labor and capital” [1]. Artificial Intelligence (AI)-driven technologies are increasingly welcomed for their potential economic benefits. Moreover, “public and private sector investments in basic and applied R&D on AI have already begun reaping major benefits to the public in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion. The effectiveness of government itself is being increased as agencies build their capacity to use AI to carry out their missions more quickly, responsively, and efficiently” [2].

“Those economic benefits, however, will not necessarily be evenly distributed across society” [3]. For instance, AI-driven technological innovations have shifted job demand toward higher-skilled, more educated professionals, and the relative pay of this group has also risen, driving socio-economic inequality. AI-driven automation has eliminated low-skilled, repetitive jobs, as evidenced at Amazon’s flagship fulfillment center near Seattle, WA. “At BFI4 outside Seattle, the retailer uses algorithms and robots to ship more than a million packages a day - vastly changing the jobs of humans in the process” [4].

Beyond concerns about job automation, there are growing concerns around AI safety, fairness, and ethics. Misuse, unethical use, or unintended consequences of AI can cause more harm than good to the public. Examples include, to name a few, gender and racial bias stemming from the use of biased or deliberately fabricated data or biased algorithms, cybersecurity breaches that invade individuals’ privacy, the spread of fake news that influences elections through manipulated audio and video, and autonomous weapons that operate without human control and threaten lives. “And mark my words. AI is far more dangerous than nukes. Far,” warned Tesla and SpaceX founder Elon Musk at the South by Southwest (SXSW) tech conference in Austin, Texas in March 2018. “So why do we have no regulatory oversight? This is insane,” he continued, calling for regulatory oversight of artificial intelligence.

Government takes actions on regulating AI

US National Institute of Standards and Technology (NIST) Standards

Concerns around AI “have prompted efforts to examine and develop standards, such as the US National Institute of Standards and Technology (NIST) initiative involving workshops and discussions with the public and private sectors around the development of federal standards to create building blocks for reliable, robust, and trustworthy AI systems” [5]. NIST’s work focuses on cultivating trust in the design, development, use, and governance of artificial intelligence (AI) technologies and systems through:

  • Conducting fundamental research to advance trustworthy AI technologies and understand and measure their capabilities and limitations

  • Applying AI research and innovation across NIST laboratory programs

  • Establishing benchmarks and developing data and metrics to evaluate AI technologies

  • Leading and participating in the development of technical AI standards

  • Contributing to discussions and development of AI policies

The first workshop on the NIST AI Risk Management Framework kicked off on October 19, 2021, and ran for three days.

US Policy Response to AI

The Executive Office of the President published two Future of AI Initiative reports in 2016: the first in October and the second in December. The second report suggests that policymakers should prepare for five primary economic effects of AI, and it proposes three broad strategies for addressing the impacts of AI-driven automation across the whole U.S. economy. The five primary economic effects are:

  • Positive contributions to aggregate productivity growth;

  • Changes in the skills demanded by the job market, including greater demand for higher-level technical skills;

  • Uneven distribution of impact, across sectors, wage levels, education levels, job types, and locations;

  • Churning of the job market as some jobs disappear while others are created;

  • The loss of jobs for some workers in the short-run, and possibly longer depending on policy responses.

The report’s three broad strategies for addressing these impacts are:

  • Strategy #1: Invest in and Develop AI for its Many Benefits

Invest in AI research and development

Develop AI for cyberdefense and fraud detection

Develop a larger, more diverse AI workforce

Support market competition

  • Strategy #2: Educate and Train Americans for Jobs of the Future

Educate youth for success in the future job market

Expand access to training and re-training

  • Strategy #3: Aid Workers in the Transition and Empower Workers to Ensure Broadly Shared Growth

Modernize and strengthen the social safety net

Increase wages, competition, and worker bargaining power

Identify strategies to address differential geographic impact

Modernize tax policy

Preparing for all contingencies

The first report, published by the Executive Office of the President in October 2016, made 23 recommendations [6].

The European Union sees “Ethical AI” as a Competitive Advantage

To ensure fairness in economic competition, protect public safety, and create Artificial Intelligence systems that are transparent, unbiased, and fair, several ethical considerations must be evaluated. The European Union has made some progress toward a more rigorous and enforceable ethics guideline for trustworthy artificial intelligence: it commissioned an expert panel, which published its initial draft guidelines for the ethical use of AI in December 2018.

The UK’s Ethical AI Code and Recommendations

The United Kingdom is taking a leading role in AI’s ethical development, drawing on its strengths in law, research, financial services, and civic institutions. The United Kingdom House of Lords Select Committee on Artificial Intelligence has suggested creating an “AI Code” covering five basic principles. The proposed AI Code addresses questions specifically around the impact of AI on people’s everyday lives, the risks and implications of AI for society, and other ethical issues. The Chairman of the Committee, Lord Clement-Jones, said: “AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.” [7]. The Committee’s suggested five principles for such a code are:

  • Artificial intelligence should be developed for the common good and benefit of humanity.

  • Artificial intelligence should operate on principles of intelligibility and fairness.

  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

The House of Lords Committee made several recommendations:

  • Restrictions and bans should be placed on any Artificial Intelligence technology that has the potential to hurt, destroy, or deceive human beings.

  • Artificial Intelligence should be developed for the common good and benefit of humanity and operate on principles of intelligibility and fairness.

  • Citizens should be given the right to be educated to a level where they can flourish mentally, emotionally, and economically alongside AI technology in future jobs.

  • Restrictions should be placed on any Artificial Intelligence systems attempting to diminish the data rights or privacy of citizens.

  • Consent is a crucial factor here – ensuring that people offer informed consent before their data is captured, used, or passed to third parties.

The OECD Principles on AI and Recommendations

In May 2019, the member countries of the OECD (Organisation for Economic Co-operation and Development) adopted the OECD Principles on Artificial Intelligence, which “promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values” [8]. The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

  • Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these values-based principles, the OECD also provides five recommendations to governments:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.

  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.

  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.

  • Empower people with the skills for AI and support workers for a fair transition.

  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

While these principles seem like common sense, they will be difficult to enforce unless implemented through regulation or legislation.

Through regulation, AI can serve society better.

