AI 2027

Link to the full text (Open Access)

Artificial Superintelligence in Sight: A Future Scenario

The report AI 2027 describes a realistic scenario for how artificial superintelligence (ASI) might develop within this decade. The authors stress that leading AI companies such as OpenAI, Google DeepMind, and Anthropic already expect a breakthrough in AGI (Artificial General Intelligence) within the next five years. The document does not aim to stoke panic but to prompt an informed debate: how might such a development unfold in concrete terms, and how can society prepare for it?

To that end, AI 2027 offers two possible paths into the future: a dramatic race for AI supremacy, and an alternative, more cooperative course of development. The goal is to spark a broad discussion about responsibility, risks, and opportunities in the future of AI.


We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

The CEOs of OpenAI, Google DeepMind, and Anthropic have all predicted that AGI will arrive within the next 5 years. Sam Altman has said OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.” It’s tempting to dismiss this as just hype. This would be a grave mistake—it is not just hype. We have no desire to hype AI ourselves, yet we also think it is strikingly plausible that superintelligence could arrive by the end of the decade.

If we’re on the cusp of superintelligence, society is nowhere near prepared. Very few people have even attempted to articulate any plausible path through the development of superintelligence. We wrote AI 2027 to fill that gap, providing much needed concrete detail. We would love to see more work like this in the world, especially from people who disagree with us. We hope that by doing this, we’ll spark a broad conversation about where we’re headed and how to steer toward positive futures.

We wrote this scenario by repeatedly asking ourselves “what would happen next”. We started at the present day, writing the first period (up to mid-2025), then the following period, until we reached the ending. We weren’t trying to reach any particular ending. We then scrapped it and started over again, many times, until we had a finished scenario that we thought was plausible. After we finished the first ending—the racing ending—we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises.