Posted 2024-9-27 06:09
Good questions!
Right now, OpenAI, Inc. (a California non-profit — let's say "the charity") is the sole controlling shareholder of OpenAI Global LLC (a Delaware for-profit — let's say "the company"). So, just to start with the big picture: the whole enterprise was ultimately under the sole control of the non-profit board, which in turn was obligated to operate in furtherance of "charitable public benefit". This is what the linked article means by "significant governance changes happening behind the scenes," which should hopefully convince you that I'm not making this part up.
To get really specific, this change would mean that they'd no longer be obligated to comply with these CA laws:
https://leginfo.legislature.ca.gov/faces/codes_displayText.x...
https://oag.ca.gov/system/files/media/registration-reporting...
And, a little less importantly, they'd no longer need to follow the guidelines for "Public Charities" under federal code 501(c)(3) (https://www.law.cornell.edu/uscode/text/26/501), covered by this set of articles: https://www.irs.gov/charities-non-profits/charitable-organiz... . The important bits are:
The term charitable is used in its generally accepted legal sense and includes relief of the poor, the distressed, or the underprivileged; advancement of religion; advancement of education or science; erecting or maintaining public buildings, monuments, or works; lessening the burdens of government; lessening neighborhood tensions; eliminating prejudice and discrimination; defending human and civil rights secured by law; and combating community deterioration and juvenile delinquency.
... The organization must not be organized or operated for the benefit of private interests, and no part of a section 501(c)(3) organization's net earnings may inure to the benefit of any private shareholder or individual.
I'm personally dubious about the specific claims you made about revenue, but that's hard to find information on, and it's not the core issue. The core issue is that they were obligated (not just, like, promising) to direct all of their actions toward the public good, and they're abandoning that to instead profit a few shareholders, taking the fruits of the non-profit's financial and social standing with them. They've been making some money for some investors (or losses...), but the non-profit was, legally speaking, only allowed to permit that as a means to an end.
Naturally, this makes it very hard to explain how the non-profit could give up basically all of its control without breaking those obligations.
All the above covers "why does it feel unfair for a non-profit entity to gift its assets to a for-profit", but I'll briefly cover the more specific issue of "why does it feel unfair for OpenAI in particular to abandon its founding mission". The answer is simple: they explicitly warned us that for-profit pursuit of AGI is dangerous, potentially leading to catastrophic tragedies involving unrelated members of the global public. We're talking "mass casualty event"-level stuff here, and it's really troubling to see the exact same organization change its mind now that it's in a dominant position. Here are the relevant quotes from their founding documents:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact...
It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
From their 2015 founding post: https://openai.com/index/introducing-openai/
We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity...
We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
From their 2018 charter: https://web.archive.org/web/20230714043611/https://openai.co...
Sorry for the long reply, and I appreciate the polite, well-researched question! As you can probably guess, this move leaves me a little offended and very anxious. For more, look at the posts from the leaders who quit in protest yesterday, notably their CTO.
(A reply from Hacker News.)