AI and Policy Landscape - 2025 Look Ahead
With agentic AI taking off within organizations, a new administration in Washington, DC, and clearer guidance around the EU AI Act, 2025 promises to be exciting.
Var Shankar
Panel Participants:
- Jamaur Bronner, Co-Founder, The Foregrounds
- Monika Viktorova, Responsible Tech Product Manager, Logistics Industry
- Jason Green-Lowe, Executive Director, Center for AI Policy
- Moderator: Var Shankar, Chief AI and Privacy Officer, Enzai
VS Introduction:
- Introduces the panelists.
- The discussion focuses on AI policy and technology, anticipating a dynamic 2025.
Part 1: Where is AI Going in 2025?
MV:
- The Bad: Misuse of generative AI is increasing as the technology becomes more accessible and cheaper. Examples include voice cloning for scams, creation of CSAM, and large-scale disinformation; bad actors are exploiting the technology effectively. Even legitimate organizations are seeing high-profile failures, such as chatbots providing misinformation or offering remedies outside of company policy (e.g., the Air Canada case). GenAI's propensity to hallucinate and to tailor responses to user desires creates risks. A specific example is Character.AI, where pro-anorexia chatbots have influenced vulnerable teens, highlighting insufficient guardrails.
- The Good: GenAI is driving significant advances in scientific discovery, particularly in biology and medicine. Protein structure prediction (AlphaFold 2) exemplifies this, accelerating drug design and development.
- In Organizations: GenAI is being deployed widely across industries. Sales and marketing teams use it for content creation and personalized outreach. CRUD applications in finance and elsewhere see large productivity gains when GenAI is used (with guardrails and human oversight) to automate reporting. Retrieval-augmented generation (RAG) allows interaction with vast document repositories, aiding information management. However, user education about hallucinations and source verification is crucial.
- Key takeaway: Potential for productivity augmentation is vast, but robust guardrails, user education, and addressing hallucinations are essential. ROI and pricing models for large organizations remain a key question.
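The retrieval-augmented-generation pattern mentioned above can be sketched minimally. In this illustrative example, a keyword-overlap retriever stands in for a real embedding search, and the document snippets and function names are invented for the sketch; the point is the structure: retrieve relevant sources first, then ground the model's prompt in them so answers can be verified against cited text.

```python
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the question (a stand-in
    for embedding-based semantic search in a real RAG system)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, sources: list[str]) -> str:
    """Assemble a grounded prompt that asks the model to cite sources,
    supporting the source verification the panel emphasized."""
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return (
        "Answer using only the numbered sources below; cite them.\n"
        f"{context}\nQuestion: {question}"
    )

docs = [
    "Refund requests must be filed within 30 days of purchase.",
    "Our offices are closed on public holidays.",
    "Refunds for digital goods require manager approval.",
]
question = "What is the refund policy?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

The assembled prompt would then go to whatever model the organization uses; constraining answers to retrieved sources is what makes hallucinations easier for users to catch.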
JGL:
- There's significant untapped potential for technological innovation even without further improvements in foundation models. Organizations need to focus on exploiting current models, exploring new market niches, and "unhobbling" interaction methods. Current interactions are limited (text-based, short prompts, instant responses), but future interaction could involve larger context windows, AI "thinking time", self-prompting, multimedia interaction (voice, video, music, vehicle control), and integration with robotics for real-world impact.
- Even a temporary slowdown in model scaling won't halt progress indefinitely. Bottlenecks like electricity, data, and funding may affect exponential growth, but linear growth is still expected. 2025 models are likely to be significantly more powerful than 2024 models. Reports of performance plateaus may be misleading, as companies are strategically quiet about their investments and true progress.
Part 2: Where is Policy Going in 2025?
JGL:
- US Policy Uncertainty: Little discussion of AI during the campaign; only a cryptic mention in the Republican platform about replacing Biden's AI executive order with AI development rooted in "free speech and human flourishing."
- "Free speech" aspect: Likely pushback against disinformation concerns (perceived as censorship) and political correctness.
- "Human flourishing" aspect: Meaning unclear; potential links to Vatican comments on AI, Chicago School of Economics work on family welfare and suitable jobs.
- Diverse Advisors: The Trump administration's AI approach is uncertain so far, given its diverse cast of advisors.
- Hopes: Continued US leadership through initiatives like the bipartisan AI Safety Institute, collaboration with innovative companies on testing and safety standards, and implementing security guardrails.
MV:
- US vs. EU: The US executive order focused on voluntary compliance, while the EU AI Act carries strict penalties (fines of up to EUR 35 million or 7% of global annual turnover). The EU approach could inspire similar penalties in future US policy.
- Global Regulatory Landscape: Patchy environment with regulations emerging in China, Brazil, Australia, etc., creating friction and challenges for product teams and executives. Multinational companies need to adapt product roadmaps, feature sets, underlying architecture, and models to comply with various jurisdictions.
- EU Enforcement: EU's focus on high-profile enforcement cases and significant fines under GDPR could be a model for the AI Act's enforcement, creating an impact on how organizations, particularly multinationals, approach product development and deployment.
Part 3: What Does This Mean for Organizations and Practitioners?
JB:
- Upskilling and Adaptation: Adaptation to AI is inconsistent and insufficient, causing concern. Many companies prioritize GenAI strategically, yet lack upskilling plans. Organizations need to focus on AI education and standardization.
- Measuring ROI: Productivity gains from tools like Cursor and GitHub Copilot need to be tracked and scaled organization-wide, not just utilized by individual developers.
- Risk Mitigation: Non-standardized use of GenAI exposes organizations to significant risks. Backwards testing, human oversight, and careful analysis are crucial.
- Shifting Skill Sets: Junior developers with access to GenAI tools become "AI-enabled developers," focusing on creative problem-solving and solution design. This necessitates adjustments in job descriptions, HR practices, and performance evaluations.
- Addressing Job Displacement Concerns: While some job functions may be reallocated, the focus should be on leveraging AI as an enabler. Responsible organizations should reimagine workplaces, workflows, and job descriptions to create new opportunities and effectively utilize human oversight.
- Organizational Implications/Best Practices: Many best practices are emerging in real-time, often from well-resourced organizations or case studies, rather than traditional consulting or business schools.
JB continued: Sources of Best Practices
- Foundation Model Providers: Educational initiatives like LLM University (Cohere) and similar efforts from OpenAI provide valuable resources.
- AI Education Partners: Organizations like The Foregrounds are compiling case studies, evaluating success and failures, and interviewing experts to understand and disseminate best practices.
- Internal AI Ethics/Governance Teams: These teams drive compliance, responsible deployment, and cross-functional alignment within organizations.
- Government and Government-Facing Institutions: Initiatives like Singapore's National AI Strategy and the work of the Center for AI Policy are influencing global best practices through stakeholder alignment.
JB continued: Promoting Entrepreneurship
- The incoming administration's focus on promoting entrepreneurship, coupled with potential increased investment in areas traditionally prioritized by conservative administrations (defense, advanced manufacturing) and the influence of figures like Elon Musk, suggests potential shifts in the entrepreneurial landscape.
- Defense: Companies like Anduril and Palantir are likely to see increased opportunities.
- Advanced Manufacturing and Robotics: Focus on onshoring manufacturing capabilities, with AI enabling rather than replacing blue-collar jobs. Companies like Figure, along with Elon Musk's ventures in humanoid robotics, are examples. This also opens interesting opportunities from an industrial manufacturing standpoint.
- Creative Industries: A new wave of entrepreneurs and creators is leveraging AI tools for music generation, video production, image creation, and more. These individuals are becoming experts in the tools, much as expertise once grew around SaaS products, lowering the barrier to entry and fostering new entrepreneurship. The tools streamline the product development lifecycle, allowing faster iteration and deployment.
- Healthcare and Education: While there is exciting potential in these sectors, it is uncertain whether the new administration will support widespread expansion of AI in these areas, given historical conservative approaches.
MV continued: Smaller Organizations
- GenAI Advantage for Large Organizations: Larger organizations with multinational footprints often possess more data and resources, giving them an edge in GenAI development. They face diverse and sometimes conflicting regulatory concerns (e.g., executive orders, the AI Act, and regulations in China, Brazil, Australia). Navigating this fragmented regulatory landscape adds complexity for product development and deployment.
- Strategies for Smaller Companies and Entrepreneurs:
- Understand GenAI's Capabilities: Focus on identifying what GenAI excels at and how it can specifically benefit their business.
- Build vs. Buy: Make informed decisions about whether to build AI solutions in-house or leverage off-the-shelf products, which are often more cost-effective for smaller organizations.
- Procurement and Vendor Management: Thoroughly vet AI vendors, focusing on not just technical capabilities but also downstream risks, regulatory compliance, cybersecurity, and legal considerations. This reduces reliance on internal expertise and allows smaller companies to benefit from specialized vendors.
- Learn from Larger Organizations: Adopt procurement and vendor management best practices observed in larger organizations. This helps smaller companies select the right solutions and manage their risk effectively.
- Role of Government Policy: Grants to NGOs can facilitate access to GenAI tools for broader societal benefit and help ensure a level playing field. Such support for research, development, and responsible AI deployment across diverse sectors can mitigate "winner-take-all" effects, fostering innovation and competition and preventing dominance by a few large players.
JGL continued: Human-AI Collaboration
- Maintaining Human Influence: This is tricky; AI is often perceived as objective and trustworthy, making it easy to over-rely on its recommendations. Human-in-the-loop review alone may be insufficient.
- Integrating Human Roles: Build human roles fundamentally into the decision-making process, not just as a final approval step.
- Oracle AI: Use AI to provide descriptions of the situation and highlight relevant information rather than making recommendations about the final action. This empowers humans to make informed decisions based on AI-generated insights.
- Tasks vs. Jobs: AI excels at automating specific tasks, which may not align with entire jobs. Restructure jobs to separate tasks best done by humans from those best done by AI. This creates a win-win collaboration and maximizes the strengths of both. Long-term concern remains about diminishing human tasks as AI advances.
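The "Oracle AI" idea above can be made concrete with a small sketch. The prompt wording, function name, and chat-style request shape below are illustrative assumptions, not a specific vendor's API; the design point is that the system prompt constrains the model to describe and surface facts, never to recommend, keeping the decision with the human operator.

```python
# Oracle-style system prompt: describe, don't decide.
ORACLE_SYSTEM_PROMPT = (
    "Summarize the situation and list the facts most relevant to the "
    "decision. Do NOT recommend, rank, or endorse any course of action."
)

# Recommendation-style prompt, shown only for contrast.
RECOMMENDER_SYSTEM_PROMPT = (
    "Analyze the situation and recommend the single best course of action."
)

def build_request(system_prompt: str, case_notes: str) -> dict:
    """Package a generic chat-style request; any LLM client could send
    this. The oracle/recommender distinction lives entirely in the
    system prompt, so switching patterns is a one-line change."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": case_notes},
        ]
    }

request = build_request(
    ORACLE_SYSTEM_PROMPT,
    "Loan application #123: income stable, debt ratio high.",
)
```

Keeping the constraint in a reviewable system prompt also gives governance teams a single artifact to audit when verifying that deployed assistants stay in the oracle role.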
VS Conclusion:
- Trump administration's approach is not yet clear.
- EU AI Act will impact global organizations.
- Organizations must focus on education, talent acquisition/development, and appropriate human oversight.