[D] What is OpenAI? I don’t know anymore.
A non-profit that leveraged goodwill whilst silently handing out equity for years to prep a shift to for-profit, and that is now seeking to license closed technology through a third party by segmenting it under a banner of pre- and post-"AGI" technology?
The non-profit/for-profit/investor partnership is held together by a set of legal documents that are entirely novel ("novel" is not a compliment in legal documents), are non-public and unclear, and have no case precedent, yet promise to wed operations to a vague (and already re-interpreted) OpenAI Charter.
The claim is that AGI needs to be carefully and collaboratively guided into existence, yet the output of almost every other existing commercial lab is more open. OpenAI runs a closed ecosystem where they don't, or won't, trust anyone outside a small bubble.
I say this knowing many of the people there, and with past and present love in my heart: I don't collaborate with OpenAI as I have no freaking clue what they're doing. Their primary form of communication is high-entropy blog posts that'd be shock pivots for any normal start-up.
Many of their blog posts and spoken positions end up influencing government policy and public opinion on the future of AI through amplified pseudo-credibility: the "Open" name, the Musk-founded origin story, repeatedly hyped statements, and a sheen from their now distant non-profit goodwill era.
I have mentioned this to friends there and say all of this with positive sum intentions: I understand they have lofty aims, I understand they need cash to shovel into the forever unfurling GPU forge, but if they want any community trust long term they need a better strategy.
The implicit OpenAI message heard over the years: “Think of how transformative and dangerous AGI may be. Terrifying. Trust us. Whether it’s black-boxing technology, legal risk, policy initiatives, investor risk, …—trust us with everything. We’re good. No questions, sorry.”
"We'll clarify our position in an upcoming blog post."