Certainly. As I recently explained in a blog published by the World Economic Forum’s AI Governance Alliance for its #ResponsibleGenerativeAI kick-off on September 19th, Generative AI doubles or triples the pace of automation that AI initially triggered. While AI has a massive positive impact in many sectors, it also poses greater risks and challenges.
This has two important implications:
First, it has accelerated these risks and challenges considerably. Given that most organizations are not even ready with #ResponsibleAI, we need to start, or progress on, our respective journeys and collaborate effectively as soon as possible. The changes and additional requirements due to Generative AI will need to be handled on a case-by-case basis, depending on the industry and company.
Second, beyond the typical risks around traceability, explainability, and so on, I see the biggest risk in our education systems and the talent transformation that needs to happen urgently. Most education systems in the world ‘rinse and repeat’ what has been taught over past centuries or decades. We are not looking forward to 2030 or 2040 and reverse-engineering curricula to meet the needs of today’s students, so that they can deal with the tectonic change coming our way in jobs and automation. Why does this matter? Students will need to find jobs, and there is currently a massive mismatch between students’ skills and employers’ needs. There is a similar challenge (and opportunity!) in upskilling and reskilling employees and adult learners. The expected return on investment (ROI) in AI may be risky for some businesses, especially if they lack the domain expertise and trained personnel.
It is, again, talent. Responsible AI at scale needs to be built on a solid foundation of specialist skills and training. It also requires a mature culture of accountability. Since this cannot be achieved overnight, at Schneider Electric we are consistently upskilling our employees on matters of bias, ethical decision-making, and accountability.
With regard to advice for leaders, I wrote a Forbes article, ‘Six Steps To Execute Responsible AI In The Enterprise’, based on my work on Responsible AI at Microsoft and Accenture over the last few years, and I would suggest leaders follow the steps outlined there.
In terms of legislation, my view is that we should do our best to support and accelerate these efforts around the world, but not ‘outsource’ our responsibility. We already use AI and Generative AI in our personal and professional lives, so the responsibility lies with every individual and team. If we see something wrong, or suspect something could go wrong, we need to speak up and take action rather than wait for law enforcement.
As a fellow of the WEF AI Governance Alliance, I believe we need to partner and accelerate our efforts on Responsible AI and Responsible Generative AI across the board: public sector, private sector, academia... We need to share best practices, tools, frameworks, and training, and benefit from all such synergies. Thank you for bringing us together.