Charting the Path in an AI-Powered Workforce - Balancing Skills and Ethics

13.11.2023

Stefano Scarpetta, OECD Director for Employment, Labour and Social Affairs

To harness the potential of artificial intelligence (AI), workers and managers will need to develop an awareness of how AI systems can be integrated into the workplace and the skills to work alongside AI. Moreover, companies should ensure that ethical considerations are embedded in the decisions AI professionals make when developing and adopting AI systems.

Artificial intelligence and robotics have the potential to profoundly impact labour markets, automating a wide range of tasks currently performed by humans. Many fear that automation will replace human workers, rendering their skills obsolete. A closer look, however, reveals that while machines and algorithms are becoming increasingly proficient at many tasks, including cognitive ones that until recently were the exclusive prerogative of humans, other tasks remain beyond the boundaries of what they can do. Performing a job often requires individuals to complete a large set of tasks, each requiring a different level of proficiency in a skill, and each with a different exposure to automation. The first step for humans to integrate emerging technologies effectively, including AI, is to identify the tasks that technologies excel at, and those that humans should perform themselves, often in complement to AI systems.

Cultivating cognitive awareness to unleash AI’s potential in the workplace

Although it is still early days, empirical evidence suggests that the use of generative AI systems could greatly enhance the productivity of workers, but only when AI is used to complete tasks that are within the technology's ability frontier. A fundamental difference between generative AI and previous technologies lies in AI's capacity to automate tasks that were traditionally non-routine. Consequently, AI has made significant strides in domains that relied on people's information-processing, problem-solving and deductive-reasoning skills.

Cognitive awareness is the first skill that should be prioritised to make the most of developments in generative AI systems. Cognitive awareness reflects the ability of individuals to recognise their own level of proficiency in the skills that are needed to perform certain tasks and the level of proficiency of a collaborating partner, be this a fellow human or an AI system. Cognitive awareness is essential if workers are to be able to re-organise work in ways that enhance their productivity by deciding if, when, and how to integrate the output of AI systems.

There is a growing mismatch between the level of cognitive awareness workers possess and the level required by the changing technological landscape.

In the past, it was relatively easy to identify what machines could and could not do. It is far more difficult to determine the capabilities and limitations of generative AI today. For example, AI systems may hallucinate, or may lack sufficient data to make reasonable predictions, and yet still provide solutions to problems that, at first glance, appear correct and convincing even though they are not. Conversely, some tasks that are trivial for humans are not trivial for AI systems to solve. Because humans generally apply models of human cognition to evaluate the capabilities of AI systems, they often fail to recognise which tasks fall within the existing capabilities of AI systems and which do not. Furthermore, as highlighted in the OECD Skills Outlook 2023, as many as one in four young people had low levels of text comprehension but still believed that they could easily understand difficult texts. This overconfidence is an important indicator of a lack of awareness about their own capabilities.

Prioritising ethical AI alongside skills needed for AI development

The second mismatch is between the set of skills that companies at the frontier of AI adoption look for when hiring workers tasked with developing, adapting and modifying AI systems, and the set of skills needed to ensure an ethical use of AI.

In labour markets, there is increasing recognition that AI could change the way work is monitored and managed, with important risks for workers' privacy and autonomy. AI could also introduce or perpetuate biases, and many current AI applications fall short of desired levels of transparency, explainability and accountability. While many of these issues are not new, AI has the potential to amplify them.

Evidence from the Skills Outlook 2023 indicates that the demand for professionals working in AI development and deployment increased markedly between 2019 and 2022, but nonetheless remains small: on average across the 14 OECD countries analysed, the share of online vacancies requiring workers with specific skills needed to develop, modify and adapt AI systems increased from 0.3% in 2019 to 0.4% in 2022.

Although these workers represent only a very small share of the labour force, they have the potential to profoundly reshape our economies and societies. Yet, in the majority of countries, less than 1% of all vacancies for professionals who would be required to develop, adapt and modify AI systems mentioned keywords associated with AI ethics, trustworthy AI, responsible AI or ethical AI. Although the share of job postings mentioning keywords related to AI ethics increased sharply in the majority of countries between 2019 and 2022, more needs to be done to integrate ethical considerations into the design, development and adaptation of AI systems.

In 2019, OECD countries adopted the OECD AI Principles for a trustworthy use of AI. These principles include the requirement that AI developers and users embed ethical considerations into the design, development, adaptation and ultimate use of AI systems. It is important that governments and companies ensure the full implementation of these principles, in a context of growing concerns about the ethical risks associated with AI development and use.

Companies worldwide are increasingly signalling a commitment to the development and use of responsible and ethical AI in their operations. Translating these commitments into meaningful action requires companies to adapt hiring practices to reflect the need for ethical standards in the development, adaptation and use of AI models, and to ensure that training and professional development equip existing workers with the skills not only to keep abreast of technological innovations but also to appreciate the economic and social implications of their work.


The content of this article is the sole responsibility of its author - OECD