- The annual AI Policy Summit in Zurich brings together leading minds in AI policymaking from around the globe to collectively shape the future of AI policy.
- The spirit of innovation and collaboration was strong, especially among European leaders already making great strides with the ‘Alps’ supercomputer. There was even talk of developing a CERN-like institute for AI.
- Adoption of global AI policies goes beyond legislation. Organizations that develop standards have a huge role to play. Global policy creation and adoption could help standardize AI development and instill trust.
I recently attended the AI Policy Summit 2024, held in Zurich, Switzerland, and online. The summit’s goal was to bring stakeholders from around the world together to collectively shape the future of AI policy. I emerged feeling more energized than ever about the potential of AI and the future of work.
As the Chief Legal Officer for Eightfold, I attended because I have a strong interest in how policy shapes AI adoption across the globe. Eightfold has offices in EMEA, India, and the United States, and a global customer base.
AI policies carry global impact. In today’s cloud computing environment, even one regional development can create global ripples, because compute, data sets, and applications often serve customers around the world equally. Multi-stakeholder voices matter: historians and academics carry as much weight as lawyers steeped in traditional consumer-protection and constitutional-rights issues.
At this fifth annual summit, the roster of attendees was broad, with 90-plus countries represented. A first-time attendee, I loved the global scope and themes of the event. Speakers included representatives from the European Union, Switzerland, the United Nations, the International Telecommunication Union, and the United States, spanning government, the private sector, and academia.
This year’s global AI theme could not have been timed better, coming right after the Nobel Prizes in physics and chemistry were awarded to world-renowned leaders in AI.
Key takeaways from the event included:
- Interdisciplinarity is vital for successful AI governance implementation.
- Building trust and maintaining ethical considerations are critical for AI development and deployment.
- Capacity building for AI is needed in education, government, and corporations.
Related content: Read more about Responsible AI at Eightfold in our online guide.
Switzerland: A leader in AI
I was inspired by Switzerland’s leadership in AI discussions and actions. Switzerland is home to CERN, a globally recognized leading research laboratory working to “advance the boundaries of human knowledge” in science and technology.
The topic of a potential CERN-like institute for AI was raised. In his keynote speech, Amandeep Gill, the United Nations Under-Secretary-General and Envoy on Technology, also advocated for independent global institutions to foster a shared understanding of AI. To achieve this, stakeholders need to build AI capacity, which means easy access to computing resources and data sets.
I believe Switzerland has a special place in the AI landscape. The nation’s long history of engineering prowess, like the ingenuity of the Top of Europe railway, reflects the deeply talented people in the country. Its proximity to, yet independence from, the EU allows Switzerland to fashion its own AI strategies.
It is exciting to see universities like ETH Zurich and EPFL taking the lead in shaping the dialogue and engaging in deep AI practices. Founded in 1855, ETH Zurich was established to drive growth for Swiss industries and is still focused on innovating in science and computing for the future. ETH Zurich and EPFL are now starting a new joint institute on AI.
The “Alps” supercomputer, recently built with 10,000-plus NVIDIA Grace Hopper superchips, is a singular achievement. Hundreds of researchers and 10-plus institutions in Switzerland are engaging in the Swiss AI Initiative to build performant, open-source foundation models for economic and societal applications.
Data sets: The foundation of trustworthy AI
Data is the lifeblood of AI. A key discussion at the conference centered on how to build high-quality, reusable data, the foundation of trustworthy AI.
Good data is fit-for-purpose data. Not surprisingly, governments are a huge source of reusable data, and interconnecting these sources with standard APIs is a critical pursuit for driving effective LLM use cases.
The I14Y interoperability platform is Switzerland’s national data catalog. The key principles behind it and other related data initiatives are FAIR (findable, accessible, interoperable, and reusable).
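To make the FAIR principles concrete, here is a minimal sketch of what programmatic access to a national data catalog can look like. The endpoint, query parameters, and response fields below are hypothetical illustrations in the spirit of the W3C DCAT vocabulary for data catalogs, not the actual I14Y API.

```python
import requests

# Hypothetical catalog endpoint -- illustrative only, not the real I14Y API.
CATALOG_URL = "https://catalog.example.org/api/datasets"

def find_datasets(keyword: str) -> list[dict]:
    """Search a DCAT-style data catalog for data sets matching a keyword."""
    resp = requests.get(CATALOG_URL, params={"q": keyword}, timeout=10)
    resp.raise_for_status()
    return resp.json()["datasets"]

for ds in find_datasets("employment"):
    # FAIR in practice: each entry is findable (searchable metadata),
    # accessible (a stable download URL), interoperable (a standard
    # format), and reusable (an explicit license).
    print(ds["title"], ds["download_url"], ds["format"], ds["license"])
```

The point is less the specific calls than the pattern: when catalog metadata is standardized and machine-readable, any application, including an LLM pipeline, can discover and retrieve reusable data sets without bespoke integration work.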
Corporate data sets are equally important in this regard. Each large corporation holds myriad data sources, including data from vendors, customers, and its internal workforce. Building data interoperability into a single platform leads to more trust and transparency.
Trustworthy, reusable analytics save time and surface powerful insights. Dynamic, reusable data and multi-model deployments enable agentic AI use cases, streamlining workflows and supporting more collaborative, adaptive workforces.
Academia’s role
At the summit, several professors at the forefront of frontier AI model research devoted their talks to applying open-source models to critical societal needs in education, health, and science.
Academia has a strong role in driving advancements in AI. Professors are funded by several sources, including corporate and government grants, but those funding sources are enablers, not determinants, of academic research results.
I especially appreciated the voice of Christopher Stubbs, a Harvard physics professor who took the stage urging U.S. universities to step up in dialogues on AI policies. The debates on stage and through audience Q&As underscored the need for fundamental AI education at the undergraduate level.
Personally, I see the debate about how AI should be used play out every day. Online posts are replete with debates about AI in education, juxtaposing praise for professors who prohibit students from using ChatGPT for homework with criticism that educators have not done enough to enable responsible use of AI.
On this front, I was heartened when my son shared that students at his elementary school are proactively raising questions about AI. Their teacher sees the value of GenAI for ideation, encouraging students to use it to visualize image ideas for fanciful words like “vindicate” and “zealot,” while knowing that the final homework still needs to be drawn by the students.
The purpose of the assignment is to learn the meanings of new vocabulary. How to get there, AI or not, is secondary, and as a parent, I appreciate the teacher’s flexibility in embracing new technology.
Global AI policies
Standardization is a key enabler of AI trust.
Organizations like NIST and ITU are actively involved, and it’s noteworthy that over 150 AI standards are currently under development. ISO 42001, an international standard specifying requirements for establishing, implementing, maintaining, and continually improving AI management systems in organizations, was highlighted, though accredited auditors for this standard are still scarce.
As these standards are finalized, iterative refinement and regular auditing will be critical for market adoption. At Eightfold, we are closely monitoring this landscape and exploring ISO 42001 for our own AI trust program.
The participation of global entities and leaders from the African Union, UN, EU, and U.S. was particularly encouraging. I agree with those representatives that AI policy should go beyond legal compliance, which focuses on traditional justice and fairness issues like discrimination and liability. The benefits of AI should be equitably distributed across geographies and stakeholders, and responsible adoption is key.
Fear-based approaches, over-regulation, and piecemeal legislation are counterproductive. AI regulations often align with existing laws built on the same underlying principles: consumer protection, fostering competition, enhancing privacy, building transparency, and promoting intellectual property. Building upon these established frameworks is an effective path forward for many applications of AI in the field.
I commend ETH Zurich and Ayisha Piotti, Head of the AI Policy Summit, for organizing this thought-provoking event. Global discussions like these are invaluable for shaping how we will continue to collectively integrate AI into our work and lives.
Read more about Responsible AI at Eightfold online or in our white paper.
Roy Wang is the Chief Legal Officer for Eightfold.