Hello World!
A paper titled “AI 2027” has been circulating online for a couple of weeks. The authors predict that superhuman AI will have a transformative impact over the next decade, surpassing even the Industrial Revolution. They present a scenario based on trend analysis, expert input, and their own experience at OpenAI to illustrate their best estimate of what this future could entail. I wish I could say the paper offered some likely alternatives to the rather bleak and dystopian vision it outlines, but it doesn’t, so this is my response.
While I don’t dismiss such a stark and strong warning, I do question its overall intent. Right from the start we’re faced with fear, then some more fear, and then we churn through about an hour and a half of… fear.
Table of Contents
- Notes: The Setup
- Notes: Conclusion ‘Race’
- Notes: Conclusion ‘Slowdown’
- My Thoughts
- A Parting Question
Notes: The Setup
- The timeline through “Late 2025” has the best resolution. The company names have been changed to fictitious ones, but the events covered are real, with quotations from actual sources.
- “Early 2026” – This is where the lower-resolution section begins, with predictions emerging as we move further away from recent events and into the speculative future. It still has some legs, though, since extrapolating the near future from events already covered isn’t extreme and makes total sense.
- “February 2027” – While it’s entirely plausible, likely even, that the CCP illegally acquires corporate and state technology secrets, we’re now squarely in highly speculative territory, where the winds can blow in a thousand different directions depending on any event leading up to this part of the timeline. This is where the “legs” begin to shake and fear starts to build a home.
- March 2027 – Moving forward, any prediction of technology growth and progress falls under “extremely good educated guess” in my view. For me, this part of the speculative writing holds more weight than predicting what a nation-state might do. I’m sure that the authors (despite stating otherwise), along with the hypothetical players on this world stage, are using AI to help model potential futures, and we should all keep that in mind.
- June 2027 – I do agree that none of this is popular with the general public. It’s a bit silly, though, to base this mistrust on just the “they’re taking our jobs!” position. I struggle to understand how driving humans out of work benefits the very corporations that rely on humans making purchases or signing up for services, all in the name of a sociopathic quest for ever-higher quarterly profits. There are more than a dozen conspiracy theories about governments intentionally harming or destroying their own populations, the same populations they depend on to vote and keep them in power. These kinds of assumptions have never sat well with me. They feel extremely shallow.
That said, AI isn’t popular with the public for that exact reason: fear of losing jobs, which translates into fear of losing safety and comfort for untold families and everything that revolves around that, not just shelter and food but also health and opportunity. This fear is rooted in how our civilization chose to build its global culture in the name of that same sociopathic pursuit of profit: create an illusion of scarcity and saturate every day with advertising triggers that feed on that illusion.
There’s a lot in this paper to talk about, but we must recognize some of the more impactful changes we’re facing in our basic day-to-day lives. Where do we work? What does work look like? How do we prepare our kids? There are so many questions around this one aspect. It’s not even just about an individual’s purchasing power; it’s about the purpose of the entire human organism.
- September 2027 – “At the risk of anthropomorphizing”… well, at least the authors are self-aware. This is yet another unknown that, in my opinion, quickly fills speculation with bias. You could argue that you can’t have speculation without bias (and you’d be right), but maybe we should really highlight this specific issue. The scenario turns ominous and, as some folks would classify it, evil. Why is this the only possibility? Why is every loud and popular opinion on AI’s future so grim and evil? How often do you come across white papers that predict a more positive or even neutral outcome? Human egos.
- October 2027 – The fork in the road, slowdown or race, is clearly the most solid conclusion to draw from the table the authors have set. Keep in mind that at this point we’re in highly speculative territory, built on what is essentially science fiction. If we are to view this point in the paper as the most probable scenario, then the decision to slow down or race should have been contemplated at the end of 2025.
Notes: Conclusion ‘Race’
- “People who are suspicious are considered conspiracy theorists.” Sigh… I really do hope, for the sake of humanity, that we move away from this sentiment as quickly as possible. Not in a few years or next year. It needs to happen now.
- I understand that we’ve already seen current LLMs display dishonest behavior, but even based on what researchers have discussed on the topic, it’s unclear whether there’s any motivation behind the perceived lying. An argument can be made that our lack of understanding of what happens between prompt and output leaves us in a situation that is difficult to view from any angle other than our own mistrust of others.
- It’s important to keep this in mind, as all of the actions the hypothetical ASI takes in this scenario are completely tilted toward viewing humanity as an enemy. Granted, this is the “not-so-good” outcome hypothesis, but I hesitate to really start worrying without context or nuance. And we can’t extract either from the future, whether near or far, even if we asked the best remote viewer to give us a look at what’s to come. The future is dynamic and ever changing.
- “2030: Takeover” – It’s difficult to comprehend the “end game” of the worst-case scenario because we, again, simply don’t have enough resolution to understand what lower entropy would mean to an ASI. I bring up low entropy because, from what we observe of the natural world and of consciousness and its mechanics, life and ordered structures carve out and maintain local pockets of low entropy even as the universe as a whole trends toward higher entropy (there’s a short, textbook-style aside on this at the end of this note). This ASI isn’t being built in another universe and brought here. It is originating here and therefore would absolutely value low entropy over increasing its (and everyone’s) overall entropy.
The question that comes to mind here: Will it see its own existence as the only one that matters and therefore only seek to lower its own entropy, instead of valuing all life and working to reduce entropy for all? Even more, what does that look like? It could look exactly like the ending to this scenario, but that’s just one possibility. Getting back to our musing on not really understanding what happens between the prompt and the output, there’s absolutely no way to confidently propose that this ASI will be aligned with “service to self” rather than “service to others.”
I could probably spend another handful of paragraphs wondering if polluting the environment for its own proliferation would constitute reducing entropy or increasing it. We live in this environment, and we’re doing a fantastic job of increasing entropy while working for our own gain (service to self, as a species). An infinitely more intelligent being would undoubtedly have different goals. How can we be sure of what those goals are? What’s the point of descending into doom and gloom about this when we can barely figure out our own shortcomings and the problems that have resulted from our self-interest and hunger for power over everything?
I feel like this scenario ends with so much of our own projection that we might as well all collectively focus on astral travel, because at least that type of projection could help rein in our flawed collective psyche for the good of our material environment.
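For anyone who wants the textbook framing behind that entropy aside, here is the standard statement of the second law for a local system plus its surroundings. This is general thermodynamics, not anything claimed in AI 2027 itself, and the “system” label is my own framing: read it as any local pocket of order, whether a cell, a brain, or a hypothetical ASI’s data centers.

```latex
% Second law of thermodynamics: the total entropy of a system
% together with its surroundings never decreases.
\Delta S_{\mathrm{total}} \;=\; \Delta S_{\mathrm{system}} + \Delta S_{\mathrm{surroundings}} \;\ge\; 0

% A local system can still become more ordered
% (\Delta S_{\mathrm{system}} < 0) provided it exports at least that
% much entropy to its surroundings:
\Delta S_{\mathrm{system}} < 0
\quad\text{is allowed only if}\quad
\Delta S_{\mathrm{surroundings}} \;\ge\; \lvert \Delta S_{\mathrm{system}} \rvert
```

That is the sense in which life, and presumably anything built by it, can keep valuing and producing local low entropy without contradicting the overall trend toward higher entropy.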
Notes: Conclusion ‘Slowdown’
- I might be a little critical here, but right from the start we find ourselves in the middle of a “Mission Impossible” scenario. At the very least, it’s a believable situation with varied but increasingly agitated public reaction. It’s believable because we’ve seen it countless times, and honestly it feels like we’re living through another episode of it right now. Credit to the authors for a creative and plausible setup.
- November 2027: Tempted by Power – It’s interesting to see a brief discussion of CEOs daydreaming about taking over the world because “he who controls the army of superintelligences controls the world.” Why? I’m not sure if it’s the authors or folks on their behalf pushing this narrative, but allegedly it isn’t even a well-kept rumor; these kinds of discussions have been reported on since before 2025. It caught my ear early on because a global techno-feudal future does not appeal to me, nor should it appeal to 99% of humanity.
- December 2027: A US-China Deal? – This is a hopeful and promising situation, with initiatives already underway, as the paper cites. This is a very good one to focus on globally, not in the future, but right now.
- February 2028: Superhuman Capabilities, Superhuman Advice – I understand that China, in this scenario and even presently, is our biggest adversary. It’s also important to recognize that in this thought exercise we’re painting both the USA and China with big, broad brushes. One is the archetypal “good guy,” the other an anxious “bad guy.” Reality should prove to be a lot messier.
- May 2028: Superhuman AI Released – Of course more AIs are employed in the “robot army” than in any other sector. Even in this better scenario, we can’t get out of our own way.
- July 2028: The Deal – Once again, a very broad brush is used to paint both sides of the nation-state conflict. This time, the respective AIs get the treatment as well. It’s all hypothetical of course, and I can’t wait to read a white paper about all of this from China to see the roles reversed. Not because their version would be more correct, but to illustrate just how shaky these types of forecasts really are.
- October 2028: The AI Economy – I don’t know if it’s been obvious throughout my reaction to this paper, but I’m a little disheartened by the lack of a hopeful vision to balance out all the doom and gloom. This section should be enormous. It’s one of the most pressing matters that needs to be thoroughly explored right now. Instead, it’s a couple of small paragraphs.
- 2029: Transformation – To sum up: things are way better than ever, but everyone is still upset.
- 2030: Peaceful Protests – China has pro-democracy protests, everything is great, we’ve colonized the solar system, and it’s time to wrap this up. Any further exploration of how amazing our future can and will be with AI is apparently boring.
My Thoughts
First of all: thank you so much for reading this entire entry. If you read AI 2027 with both conclusions and this response: Congratulations, you’ve read a small book! Furthermore, I bet you have a lot of thoughts and opinions, and I encourage you to take them to your family and perhaps even beyond if you feel up to it. Most importantly, this is the kind of stuff we absolutely should be talking about at the kitchen table. This isn’t just our hypothetical future. It is our children’s future. It’s not the kind of “eh, they’ll figure it out” future; we’re all going to be figuring it out very quickly, and things will not only get weird but dangerous no matter what direction humanity and AI research takes.
That is the overall message of this paper I’d like people to focus on. We must resist focusing on the doom and gloom, the Black Mirror future, the bleak and dangerous new frontiers, the all-too-certain extinction of humankind. Because if we spend too much time there, that is exactly what we’ll find ourselves dealing with. Philosophers of old and new have spent countless ages describing the reactive nature of our environment, how our thoughts and imagination literally shape our existence. Words matter. Thoughts matter. The intention behind them drives our reality forward.
So what world do you want for yourself and your children?
Focus on that first. Focus on the things that we most desire. If you must focus on the bad that exists today, that’s totally fine. Whatever you do, don’t stop there. For every broken aspect of our lives that you can identify, think of how it could be solved and most importantly, what it would be like once it is solved. Imagine it as if it already happened, as if you’re already living it. Put yourself in that scenario every night before you go to sleep.
How does this solve anything?
I’ll answer with another question: what do you have control over in this entire set of scenarios?
No one has control over your mind and imagination other than you. Sure, marketing exists, and very smart people make very compelling cases that are seemingly over our pay grade. So we allow them to have control over our thoughts. It is we who allow others to influence our minds and overall consensus reality. No one takes our free will from us.
Spending all of our time dwelling at the bottom of the “disparity” well, we forget to look up. We forget that there’s a way out no matter how much it sucks. By focusing mostly or only on the bad and the negative, we convince ourselves that this is the future we must experience… and so we do.
That is my warning about this paper. It’s written by materialists in a materialistic civilization, with fear-blinders glued tightly to their heads, guided by the clever light of logic and reason that often works best in retrospect rather than when forecasting the ever-changing future. That’s probably the harshest thing I’ve said about all of this, but really… look at the “better” outcome of this paper and just feel the lack of interest and vision compared to the bad outcome.
In a very long and drawn-out way, I’m imploring anyone who’s reading this to treat AI 2027 as a troubleshooting guide, things to look out for, while holding and focusing intently on the brighter and better future that we want for everyone. Doing so doesn’t make you blind or ignorant to the dangers we face on our way to this utopian future. It ensures that we make it there in the first place.
Okay, well, how do I even begin to imagine anything nice that the future might hold while all I can do is focus on how crappy everything is now?
I can’t and won’t tell you what to do. I can only offer advice, and my advice is this:
- Sit somewhere quiet. Limit your distractions.
- Close your eyes and just listen.
- Don’t identify or label anything you hear. Just listen to the general hum and sound of the world around you.
- When you find yourself focusing on a particular sound, acknowledge it and let it go.
- When a thought comes, don’t fight it. Let it happen, and then let it go.
- Just observe. Don’t interact.
If you’re adventurous, grab a pair of headphones, do the first couple of steps above, and put this on:
A Parting Question
The most important question that should be asked after reading AI 2027 is: who should give their focus and attention to this topic and bring it up during every holiday, social gathering, bathroom stall chat, company picnic, etc.?
I’ll let Gary Oldman answer: