New artificial intelligence (AI) models are released daily while investors and corporations engage in an AI land grab. Companies rush to integrate AI into their operations, often with no goal beyond not being left behind. Meanwhile, workers struggle to determine what these changes mean for them. Concerns about job displacement and loss of control are becoming more prevalent as AI continues to reshape the workplace.
Even the basic scope of the transition is unclear. Is the proper analogy the internet, the industrial revolution, or, on a grander scale, the Holocene or the Cambrian explosion? The only point of clarity is that organizations and institutions aren't prepared for the ethical and technological challenges ahead.
Governments are beginning to grapple with these questions. While the EU jumped ahead with its Artificial Intelligence Act, the White House entered the conversation with President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which directed the Department of Labor to establish a set of AI implementation principles. These principles emphasize worker empowerment, ethical AI development, clear governance, transparency, and the protection of labor and employment rights.
However, these principles serve more as conversation starters than clear regulatory or legal guidelines. As the rapid expansion of AI necessitates an equally swift rate of organizational and social innovation, companies should be prepared to be ethical innovators in applying AI to their workplaces. Businesses have the opportunity and burden of leadership, making it crucial to engage in these conversations now.
This blog will explore the ethical and compliance considerations that companies must navigate when adopting these AI principles, addressing the complexities of measuring AI's impact on workers, the ambiguity in governance and oversight, and the need for shared organizational innovations to effectively manage AI integration. And while these principles center on risks, we believe the key is to focus on opportunities. Transitions are never neutral: the choice is between growth and decline; there is no option to preserve the status quo.
Measuring AI’s Impact on Workers
While many of these principles focus on mitigating impacts on workers, organizations often lack the basic tools for measuring those impacts. Most organizations struggle even to measure baseline employee outcomes, let alone the effect of factors like AI. The prerequisite to all other steps (and something companies should have been doing anyway) is therefore to develop robust organizational metrics. These may include:
- Job Quality Metrics: Evaluating changes in job roles, satisfaction, and overall work quality. This can involve regular surveys and feedback mechanisms to gauge employee sentiment and identify areas of improvement.
- Process and Skill Tracking: As new AI capabilities emerge, an existing map of organizational processes and required skills is essential. It allows faster incorporation of AI while smoothing transitions for the workers involved, and it supports reskilling efforts to help workers adapt to new roles.
- Organizational Network Analysis: These changes risk disrupting the networks that define modern corporations. Monitoring the company's larger communication and collaboration structures supports process improvement and makes it possible to respond to AI-driven shifts as they occur.
- Equity and Inclusion: Even earlier machine learning models had problems replicating existing biases rather than moving past them (e.g., banking models that reproduced prior redlining). With today's AI, the problem runs deeper: choices such as training data selection can dramatically affect model outputs in ways that are difficult to disentangle in production, so bias needs to be measured directly (see the sketch after this list).
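To make the equity point concrete, here is a minimal sketch of one widely used fairness check, the "four-fifths" disparate impact ratio, applied to logged decisions from an AI-assisted process. The column names, the sample data, and the 0.8 threshold are illustrative assumptions rather than a prescribed standard; any real threshold should be set with legal and compliance input.

```python
# Minimal sketch: disparate impact ratio on logged AI-assisted decisions.
# Assumes each row records the applicant's group and whether the outcome
# was favorable (1) or not (0). Illustrative only.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return min(favorable rate) / max(favorable rate) across groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical decision log from an AI-assisted screening step.
decisions = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B"],
    "favorable": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "group", "favorable")
if ratio < 0.8:  # common heuristic threshold, not a legal bright line
    print(f"Potential disparate impact flagged (ratio = {ratio:.2f})")
```

Checks like this don't resolve deeper questions about training data, but they give organizations a recurring, reportable number to track as AI-assisted decisions scale.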
Ambiguity in Governance and Oversight
The vague principles set forth by the government highlight the need for ethical AI, but they do not provide concrete guidance or even definitions of terms. Companies must develop their own governance frameworks that address their specific contexts. This could involve:
- Establishing Clear Guidelines: Creating detailed, actionable policies that reflect ethical AI use, transparency, and accountability. These guidelines should be regularly updated to keep pace with technological advancements and emerging ethical concerns. Given the rate of change, these should be frameworks for decision making rather than rigid rules.
- Regular Audits: Conducting frequent audits of AI systems to ensure compliance with ethical standards. The term "audit" is itself vague, but existing data privacy and security regulations offer a starting point. Because future AI rules will likely build on frameworks like the GDPR and CCPA, aligning with them now provides a head start on future AI compliance activities; every company is now a data company. A sketch of how such an audit checklist might be encoded appears after this list.
- Stakeholder Engagement: Involving employees, customers, and other stakeholders in the governance process to ensure diverse perspectives are considered. This can be achieved through advisory boards, public consultations, and regular communication channels that go beyond top-down messaging.
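As one illustration of the audit point above, the following is a minimal sketch of how a recurring AI-system audit checklist might be encoded, with checks drawn from familiar data-privacy practice (documented purpose, data provenance, retention, an accountable owner). The record fields, checklist items, and example system are hypothetical assumptions, not a compliance standard.

```python
# Minimal sketch: a recurring audit checklist over an internal AI-system
# registry. Field names and checks are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    name: str
    purpose: str = ""
    training_data_sources: List[str] = field(default_factory=list)
    retention_policy: str = ""
    human_owner: str = ""

def audit(record: AISystemRecord) -> List[str]:
    """Return a list of findings; an empty list means the record passes this checklist."""
    findings = []
    if not record.purpose:
        findings.append("Missing documented purpose")
    if not record.training_data_sources:
        findings.append("No training data provenance recorded")
    if not record.retention_policy:
        findings.append("No data retention policy")
    if not record.human_owner:
        findings.append("No accountable human owner")
    return findings

# Hypothetical registry entry reviewed on a regular cadence.
screening_tool = AISystemRecord(name="HR screening assistant", purpose="Resume triage")
for finding in audit(screening_tool):
    print(f"[{screening_tool.name}] {finding}")
```

The value here is less the code than the discipline: a registry of AI systems with owners and documented data handling makes later regulatory requirements far easier to absorb.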
Organizational Innovation for Effective AI Integration
Establishing AI governance and oversight raises large questions about where that oversight will live and how it will be administered. "Oversight" will mean entirely different things depending on whether it is handed to IT, Compliance, or HR. Without that clarity, this is more a starting point for discussion than a concrete statement of principles.
However, we can operate under the safe assumption that effective AI integration requires not only technological advancement but also organizational and social innovation. Companies need to foster a culture that supports ethical AI use and continuous learning. This can include:
- Cross-Functional Teams: Establishing teams that include IT, HR, legal, compliance, and other departments to collaboratively oversee AI integration. These teams can ensure that all aspects of AI implementation, from technical functionality to ethical considerations, are adequately addressed.
- Ongoing Training Programs: Providing continuous education and training for employees to keep pace with AI advancements and ensure they are equipped to work alongside AI. This includes both technical training and ethical awareness programs.
- Ethical AI Leadership: Encouraging leadership to set the tone for ethical AI use, promoting transparency, and fostering a culture of accountability. Given the cross-cutting nature of AI, clarity and leadership must come from the highest levels of the organization.
Conclusion
As we move forward, companies and organizations face rapidly shifting pressures. Ethics isn’t about rigidity; it is about having a framework capable of handling uncertainty and change, and such decision-making frameworks matter most in moments of disruption. Critically, ethics isn't a matter of following regulations; it must come from the values, culture, and goals that form every organization’s DNA.
As the line between tools and workers blurs over the coming years, our values will be tested. Engaging in these conversations and preparing now is not just a strategic move but a moral imperative. With proactive and ethical leadership, the journey towards successful integration of AI into our workplaces is within reach.
For more insights on the AI implementation principles and to stay updated on our latest capabilities, be sure to listen to this episode of the LRN Principled Podcast.