5 Core Principles of AI Ethics

The UK provides an artificial intelligence code of ethics that could be a framework for countries around the globe.

The UK government published a report last year on the ethics of artificial intelligence that many see as a strong framework for artificial intelligence (AI) adoption. A key recommendation from the report calls for a cross-sector AI code to be formed, a code that could be adopted around the globe.

“An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse,” writes Lord Tim Clement-Jones, the chairman of the House of Lords Select Committee on AI that commissioned the UK report.

The report includes 5 Core Principles:


• AI should be developed for the common good and benefit of humanity.

• AI should operate on principles of intelligibility and fairness.

• AI should not be used to diminish the data rights or privacy of individuals, families or communities.

• All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

• The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

Here are some other important conclusions from the report:

Many jobs will be enhanced by AI, many will disappear, and many new, as-yet-unknown jobs will be created. Significant government investment in skills and training will be necessary to mitigate the negative effects of AI. Retraining will become a lifelong necessity.

Individuals need to have greater personal control over their data and the way in which it is used. The ways in which data is gathered and accessed need to change, so that everyone can have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency. This means using established concepts, such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability and data trusts.

The monopolization of data by big technology companies must be avoided, and greater competition is required. The government, with the Competition and Markets Authority, must review the use of data by large technology companies operating in the UK.

The prejudices of the past must not be unwittingly built into automated systems. The government should incentivize the development of new approaches to the auditing of datasets used in AI, and encourage greater diversity in the training and recruitment of AI specialists.

Transparency in AI is needed. The industry, through an industry AI body, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.

At earlier stages of education, children need to be adequately prepared for working with, and using, AI. The ethical design and use of AI should become an integral part of the curriculum.

The government should be bold and use targeted procurement to provide a boost to AI development and deployment. It could encourage the development of solutions to public policy challenges through speculative investment.


About the Author(s)

Eve Tahmincioglu

Eve Tahmincioglu is the former editor-in-chief of Directors & Boards.


