THE WORKFORCE
WONDERS OF AI AT WORK!
Sanjeewaka Kulathunga explores the implications of AI on the workforce

AI is reshaping strategy, organisational design and the very nature of managerial work. For executives, boards and policymakers, the central question is no longer whether to adopt artificial intelligence but how to lead organisations so that it increases productivity, preserves human dignity and manages systemic risk.
It’s shifting the role of leaders from traditional commanders to curators of socio-technical systems. Competitive advantage now depends on orchestrating human and machine capabilities.
As artificial intelligence automates knowledge work and routine decision making, leaders must reconsider where human judgement remains essential.
Leaders can now design workflows with AI outputs that humans supervise, verify and interpret. Their task is to steward systems, data and human judgement rather than command from above.
And the workforce implications are profound.
AI displaces routine roles and sharply increases demand for hybrid skills and domain experts – model stewards, and specialists in human-AI interaction – who can work with it.
Labour markets are experiencing short-term disruption as workers are reallocated and reskilled alongside long-term wage realignments, where creativity, supervision and social judgement gain value.
GOVERNANCE Leadership requires investment in continuous learning systems, transition pathways and role redesign, so that talent is retained and businesses maintain their legitimacy in society.
Trust and accountability are becoming central economic assets.
AI systems often operate as opaque black boxes, creating information asymmetries and potential failures that can lead to reputational damage, regulatory consequences and direct financial losses.
Building governance mechanisms, conducting independent audits and creating incident response playbooks are essential steps in transforming trustworthy artificial intelligence into both a product advantage and a public good.
The strategic landscape is also shifting.
AI reduces the marginal cost of expertise, undermining business models that rely solely on specialised knowledge. As it makes information increasingly abundant, value migrates towards orchestrating ecosystems, leveraging proprietary data and cultivating differentiated human insights.
Leaders must reallocate capital to data quality, continuous model improvement and platform capabilities, while preserving business areas where trust, relationships and tacit judgement remain competitive moats.
REGULATIONS The regulatory environment is becoming more complex as governments sharpen their focus on AI governance. Businesses now face a mixture of voluntary standards, industry codes and binding regulations covering safety, data protection and liability.
Leadership requires building agile regulatory capabilities, engaging proactively with policymakers and equipping boards with AI literacy and risk expertise. Without strong board oversight, strategy and risk management can rapidly fall out of alignment.
LEGITIMACY Ethical and social legitimacy concerns are equally urgent. Leaders must confront questions about what types of automation are socially acceptable and how AI deployment affects the communities in which business entities operate. Ethical frameworks must be integrated into core strategy rather than treated as an ‘after the fact’ audit.
Best practices include stakeholder impact assessments, community engagement and ethics review boards that are empowered to delay or halt deployments when harms outweigh benefits.
CHALLENGES Decision making under uncertainty remains a challenge. AI models produce probabilistic outputs, which can be wrong – especially when the data environment shifts.
Leaders must cultivate humility, enforce verification processes for high-stakes decisions and ensure continuous monitoring after deployment. Funding robust model monitoring infrastructure and creating escalation paths for anomalous behaviour are operational necessities.
VULNERABILITIES Cybersecurity and resilience enter a new phase in the age of AI.
Models introduce novel vulnerabilities – they can be poisoned, manipulated or reverse engineered. Meanwhile, reliance on a small number of cloud providers and shared artificial intelligence platforms increases systemic concentration risk.
Leaders must integrate AI specific resilience into enterprise risk management, invest in defensive capabilities and participate in industry wide stress testing to anticipate interdependent threats.
Culturally, organisations should redefine meaning and motivation. As AI takes over many cognitive tasks, leaders must design roles that preserve purpose, growth and human identity.
And jobs need to be restructured to leverage uniquely human contributions, and avoid deskilling that undermines morale and productivity.