OpenAI, a leading AI startup, has announced the formation of a Collective Alignment team to gather and incorporate public input on the behavior of its AI models. The move aims to ensure that OpenAI’s future models align with humanity’s values.
The team is an extension of OpenAI’s democratic inputs program, which awards grants for experiments in setting up a “democratic process” for deciding the rules AI systems should follow. The program has funded initiatives ranging from video chat interfaces to platforms for crowdsourced audits of AI models.
OpenAI’s efforts to solicit public input and establish ethical guardrails for AI come amid increasing scrutiny from policymakers and criticism from rivals, who accuse the company of seeking “regulatory capture of the AI industry” by lobbying against open AI R&D.