Many AI conferences concentrate on the challenges of the next five years: safety, regulation, and industrial applications. AGI30 therefore sets its sights on ultra-long-term objectives, looking beyond 2030 and beyond the arrival of ASI.
Superalignment
Refers to the technical problem of reliably aligning systems that are far more intelligent than humans with human intentions and values.
Hyperalignment
Refers to the adaptive alignment of the entire system in a post-AGI/ASI world, encompassing both humans and AI and extending to social institutions, the economy, and culture.