The social media behemoth Meta has dissolved its Responsible AI (RAI) division, which was tasked with overseeing the safety of its AI initiatives as they were developed and deployed. Most members of the RAI team have moved to positions in the company’s Generative AI product division, while others joined the AI Infrastructure team. Meta’s Generative AI team was founded in February and is primarily focused on building tools that generate human-like language and images.
The RAI reorganisation comes as Meta approaches the conclusion of its “year of efficiency,” as CEO Mark Zuckerberg described it in a February earnings call, a period that has brought a series of layoffs, team mergers, and redistributions across the company. AI safety has become a stated priority for the industry’s top players, especially as officials and regulators grow more attentive to the possible risks of the emerging technology. In July, Google, Microsoft, Anthropic, and OpenAI established an industry body, the Frontier Model Forum, with the goal of setting safety guidelines for AI development.
Although members of the RAI team have been reassigned within the organisation, Meta says they remain committed to responsible AI research and applications, and that the company will continue to invest in the field. Meta also recently unveiled two generative AI models. The first, Emu Video, builds on the company’s earlier Emu model to create video clips from text and image inputs. The second, Emu Edit, focuses on image modification and promises greater precision in image editing.