Recently, Facebook’s parent company, Meta, along with IBM and more than 50 other founding members, announced an AI Alliance to “advance open, safe, responsible AI.” The group is committed to “open science and open technologies,” promoting standards and benchmarks to reduce the risk of harm that advanced models might cause.
These are critically important goals. Many tech companies, driven by the breakneck AI arms race, have released products that could upend the lives and livelihoods of many, and that may pose an existential threat to humanity as a whole. Given the near-absolute corporate dominance of the U.S. tech sector, federal support for alternative AI pipelines and nonproprietary forms of knowledge is key to diversifying that sector and to using that diversity as a democratic guardrail for a dangerous technology.
The lineup of the alliance is impressive: NASA and the National Science Foundation; CERN and the Cleveland Clinic; and a deliberately eclectic group of universities, including Yale, the University of California, Berkeley, the University of Texas at Austin and the University of Illinois, but also the University of Tokyo, the Indian Institute of Technology, the Hebrew University of Jerusalem and the Abdus Salam International Centre for Theoretical Physics. Given the range of institutions represented and their diversity of goals and methods, the alliance could begin by laying a shared foundation of AI literacy: initiating a public conversation about the different kinds of models that could be developed, the different uses to which they could be put, and the degree of openness needed to ensure that developers, and the people affected by their systems, have input into their design and operation.