It has long been argued that techno-capital is a proto-AGI of sorts, pulling itself into being “from the future”.
It has also been argued that corporations are already a form of AGI, doing more than any one person can do.
AI safety has been a much-debated topic, one that probably requires a Wittgenstein-level genius to suss out without more empirical evidence¹. It’s clear that even assuming we can create aligned AGI, there will always be misuse risk: a completely obedient, aligned AGI can still be used for bad ends if wielded by a bad actor.
But wait a minute, don’t we have almost 500 years of history semi-successfully aligning corporations?
The grand vision of Mutable.ai is simple:
To create “AI twins” of corporations (e.g., by wikifying them, as our product Auto Wiki does) as an on-ramp to automating them, creating AI Organizations, and ultimately AGI; we align these the same way we align corporations.
More on this soon.
BTW, Mutable.ai is coming out with a major upgrade to our product, including a codebase chat that uses retrieval techniques far more effective than vector embeddings and that has infinite effective context length.
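For a concrete flavor of how retrieval without vector embeddings can scale to arbitrarily large codebases, here is a minimal sketch: recursively drilling down a tree of summaries (a “wiki” of the codebase), loading only one node at a time. The `WikiNode` structure and the `llm_select` step are illustrative assumptions, stubbed with a keyword heuristic; this is not Mutable.ai’s actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class WikiNode:
    """One page of a codebase wiki: a summary with optional children."""
    title: str
    summary: str
    source: str = ""                                # raw code, for leaf nodes
    children: list["WikiNode"] = field(default_factory=list)

def llm_select(query: str, node: WikiNode) -> list[WikiNode]:
    """Placeholder for an LLM call that picks which children of `node`
    are relevant to `query`. A trivial keyword match stands in here."""
    terms = query.lower().split()
    return [c for c in node.children
            if any(t in (c.title + " " + c.summary).lower() for t in terms)]

def retrieve(query: str, root: WikiNode, max_leaves: int = 5) -> list[str]:
    """Walk down from the wiki root, following only relevant branches,
    and return source snippets from matching leaves. Each step reads a
    single node plus its children's summaries, so total context per step
    stays bounded no matter how large the codebase grows."""
    frontier, results = [root], []
    while frontier and len(results) < max_leaves:
        node = frontier.pop()
        if not node.children:                        # leaf: actual code
            results.append(node.source)
        else:
            frontier.extend(llm_select(query, node))
    return results
```

The key design point is that relevance decisions happen over human-readable summaries at each level rather than over a flat embedding index, so context per decision is constant even as the tree deepens.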
One thing I’d like to see is a better empirical understanding of AGI behavior profiles, and I believe mechanistic interpretability is a (small) step in the right direction.