OpenAI announces team to build ‘crowdsourced’ governance ideas into its models

Key Points:

  • OpenAI forms the Collective Alignment team to gather public input on AI model behaviors.
  • The team is an expansion of OpenAI’s democratic inputs program, which funds experiments in setting up a “democratic process” for AI governance.
  • OpenAI faces increasing scrutiny and criticism from policymakers and rivals regarding its AI technology and business practices.

Summary:

OpenAI, a leading AI startup, has announced the formation of a Collective Alignment team to gather and incorporate public input on its AI models’ behaviors. This move aims to ensure that OpenAI’s future AI models align with the values of humanity.

The team is an extension of OpenAI’s democratic inputs program, which awards grants for experiments in setting up a “democratic process” to decide rules for AI systems. The program funds initiatives ranging from video chat interfaces to platforms for crowdsourced audits of AI models.

OpenAI’s push to involve the public and establish ethical guardrails for AI comes amid increasing scrutiny from policymakers and criticism from rivals, who accuse the company of seeking “regulatory capture of the AI industry” by lobbying against open AI research and development.
