WASHINGTON, June 5 – Several current and former employees of OpenAI have published an open letter voicing their concerns about the fast-paced development of the artificial intelligence industry and the absence of laws protecting whistleblowers.
“AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this,” the employees stated.
Signatories of the letter included former OpenAI employees Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright and Daniel Ziegler; former Google DeepMind employee Ramana Kumar; current DeepMind employee Neel Nanda; and several other anonymous former employees.
In the letter, the employees stated that they were worried about “the serious risks posed by these technologies”, most of which are unknown to outsiders as companies “currently have only weak obligations to share some of this information with governments, and none with civil society”, German news agency dpa reported.
“We do not think they can all be relied upon to share it voluntarily,” they added.
“It’s really hard to tell from the outside how seriously they’re taking their commitments for safety evaluations and figuring out societal harms, especially as there are such strong commercial pressures to move very quickly,” one of the employees argued.
On whistleblower laws, the employees stated that “ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated”.
The open letter also emphasised that the employees are blocked from sharing substantive information about AI capabilities by confidentiality agreements.
“It’s really important to have the right culture and processes so that employees can speak out in targeted ways when they have concerns,” one of the employees said.
In response to the letter, the Microsoft-backed company said it is proud of its “track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk”.
OpenAI also highlighted that it has an anonymous integrity hotline and a Safety and Security Committee to protect whistleblowers.