The World of Yesterday, Today and Tomorrow

"Most of the girls in my class completely missed the moment when the world began to end." — Maddie Kim (Pantheon)

Now, I do not believe the world is ending. At least not in the literal sense. But the world as we had known it did end on November 30, 2022 — the day ChatGPT stepped into the public spotlight and into everyday life.

ChatGPT, then powered by GPT-3.5, was one of the first large language models made accessible to the broader public. Reaching one million users in just five days, it set a record at the time. But what exactly are large language models? Put simply, they are chatbots that, for the first time, could generate answers to almost any question in a way barely distinguishable from human responses. By now, they can create photorealistic images, generate videos, analyze and solve complex problems, write code better than most average programmers — myself included — and, with the help of software agents, even act autonomously to complete tasks in the real world, up to and including buying a car. More on the fundamentals of large language models, software agents, and the underlying ideas will follow in future articles.

In this piece, I want to explore how the world has already changed because of generative artificial intelligence, what is still likely to change, and why this is perhaps the most important moment to seriously engage with its consequences.

The World of Yesterday

Let us first take a step back — and not just six years, but roughly twenty-six. Back to a time when the internet was still taking shape, teenagers lived without smartphones, and we had to gather our knowledge from those strange old objects called books, newspapers, and magazines. Back then, acquiring knowledge was an active task. It required effort. And institutions such as schools and universities were meant to encourage exactly that. What is interesting is that, at least from today’s perspective, both institutions seemed to enjoy far more trust and respect than they do now.

Humanity may already have been beaten at chess — Deep Blue defeated Garry Kasparov in 1997 — but thanks to Go, we still felt intellectually superior to the machine. The world was becoming connected, yes, but getting online still required a modem and, for many of us, probably an AOL CD lying around somewhere ;-)

But what was the real difference compared to today’s world? In my view, the biggest one is this: back then, we actually had to learn skills and build competence ourselves. Charlatans and frauds have always existed, of course, but even they had to put in some work before pretending to be experts. It was not enough to open ChatGPT, read two paragraphs on a topic, and suddenly present yourself as an authority. Proper research had to be learned. Source criticism had to be learned. And both of those are inseparable from critical thinking — and from thinking in general. Thinking for ourselves, at least, has already been in decline since the rise of social media (“But influencer XYZ said so…”), and generative AI is now accelerating that decline even further. Why think for yourself when you can simply ask the machine?

With the rise of the internet, we also began to lose something else: frustration tolerance, and with it the curiosity to explore things on our own. Video games are a wonderful example of this. On the one hand, modern games have become easier and less demanding in many ways. On the other hand, help is now built directly into the experience. Tutorials are longer, hints appear everywhere, and if you still get stuck — well, there is always Google. Old urban legends, like the mythical ways to catch Mew in Pokémon Red and Blue, would never survive in a world like this. One of my first video games was Super Mario 64, and for weeks, I simply did not realize that King Bob-omb just had to be thrown to the ground three times in a row. I tried everything, including carrying him halfway across the map and attempting to feed him to Chain Chomp. Today, I would probably Google it after ten minutes and move on. That certainly saves time — but it also narrows the space for experimentation, curiosity, and actual independent thought.

The World of Today

So here we are, now in year four post-GenAI. And while these models were still being laughed at in the beginning, today’s systems can do far more than just the practical party tricks we have already grown used to — summarizing texts, translating, coding, generating images and videos, and so on. They are now capable of producing the occasional genuine wow moment. Claude Opus 4.6, for example, reportedly solved a graph theory problem within an hour. The author of the paper in question was Donald Knuth, one of the true pioneers of computer science — author of The Art of Computer Programming and recipient of the Turing Award in 1974. By the way, the Turing Award is basically the Nobel Prize of computer science. In fairness to humanity, it should be said that Knuth did not simply throw the problem into Claude and walk away. He had to guide it in the right direction through deliberate, complex prompting. Still, the broader point remains: generative AI systems increasingly look like emergent systems, meaning they are far more than the sum of their parts — or of their training data. AlphaFold had already made protein folding manageable with the help of AI years ago. “Solved” is perhaps too strong a word, but certainly more tractable. And yes, that problem was long considered one of those brutally difficult computational challenges that seemed almost out of reach. But I am drifting off here — there will be time for that in future articles.

AI changes everything. Yes, yes, we have heard that line many times over the last twenty years. But this time, at least from where I am standing, it actually feels different. Combined with software agents, many AI systems are now able to handle increasingly complex tasks on their own — from financial analysis to writing software to executing entire workflows. In the long run, that means a large share of current jobs may simply become obsolete. And not just in tech, but across much of the white-collar world. Office jobs, knowledge work, the whole polished middle-class professional layer. Not tomorrow, perhaps. But the direction of travel is pretty clear.

The people at Citrini Research go even further in one of their scenarios — and yes, it is a scenario, not a prophecy — arguing that by 2028 we could face something like a global intelligence crisis, with unemployment in the US rising above 10 percent. The jobs disappearing would not just be random low-wage positions, but well-paid knowledge work. In other words: the classic middle class, and even parts of the lower upper-middle class, would begin to crack. We are already seeing similar tendencies in Germany, although AI is certainly not the only driver here — our economic and energy policies deserve their own separate rant at some point. But let us not get lost there. The quiet erosion of decent jobs probably deserves an article of its own anyway.

So where does the Citrini scenario ultimately lead? To a world in which AI does not fail because it is not capable enough, but triggers an economic shock precisely because it is too capable — and improves too quickly. It is not that AI cannot do enough. It is that it can do too much, too fast. Large parts of the SaaS universe could suddenly become worthless. Platforms lose their margins. Differentiation between software products begins to disappear. And for the first time in a long time, we might actually experience strong and persistent deflation. Add to that massive job losses, which create an oversupply of workers competing for a shrinking number of roles — and the result is obvious: wages come under pressure, hard. Tada. The perfect storm.

What we should really be debating right now is the same kind of question that unions in the automotive industry had to confront after the Second World War. Back then, the focus was on complementary technologies — technologies that did not simply eliminate labor, but created new forms of work, ideally more meaningful ones. That was one of the reasons productivity gains were shared more broadly, and why the middle class expanded so significantly. For anyone interested in that dynamic, I highly recommend Power and Progress. But of course, that is not the debate we are having. Partly because there is a spectacular lack of foresight, partly because public priorities are often laughably misplaced, and partly because our far more connected and globalized world makes this debate much harder to conduct in the first place.

So what remains? Well, as already mentioned, the scenario above is only one scenario among many. And given Europe’s current speed of innovation, we will probably not have to worry about it until 2042 anyway 😉. But jokes aside: the labor market is going to get harsher. Jobs will change. And those who completely refuse to engage with AI technologies will eventually fall behind, simply because they will be too slow at what they do. At the same time, certain foundational skills still need to be developed first — and not instantly outsourced to AI the moment things become difficult. Simply teaching “more digital skills” is not the answer. If anything, Gen Z already suggests the opposite, being the first generation in modern times to appear weaker than its predecessors across a range of cognitive dimensions. One major reason seems to be excessive screen time. Another is the overuse of digital education technologies in schools. Jonathan Haidt arrives at similar conclusions in The Anxious Generation, although his focus is more on depression and mental health. So we are walking a very narrow line here: between using AI intelligently on the one hand, and continuing to train and use our own brains on the other. Use it or lose it — in a double sense. Use AI, yes. But do not forget to keep using your own brain, too 🧠

The World of Tomorrow

Well, no, I do not own a crystal ball, and simply asking Chatty, Gemini, or my buddy Claude to generate a neat little future forecast for me would be far too boring. But the problems ahead have already been outlined in the previous section, and perhaps that is the true bitter irony of our time: we are creating machines that appear increasingly intelligent, while at the same time risking becoming dumber, more dependent, and mentally lazier ourselves. If you prefer to look at that scenario with at least a little humor, I would simply refer you to Idiocracy 😅

To end on a slightly more optimistic note, let us at least consider a somewhat less apocalyptic scenario for industry and software development. Because, realistically speaking, creative destruction does not happen overnight — and in large companies especially, the wheels tend to turn much more slowly than people imagine. The shift will happen, yes, but not in one sudden collapse. There will likely be fewer programmers and software developers in the traditional sense, but perhaps more architects, testers, project managers, and other roles around the process itself. Tasks may even become more fulfilling, and work less stressful in some areas. Pure industrial software development, however, will probably decline in the long run, and programmers may increasingly become something closer to craftsmen. There will still be niches where that is valued — the gaming industry, for example, may one day proudly market titles developed entirely without AI support, much like people now market handmade goods or analog photography. Perhaps other industries will do something similar. But the market for that will be much smaller.

The Future Will Not Fail Because of AI, but Because of Us

We are the generations that still have it in our hands — especially Generation Y. Maybe that is the real answer to the question of why now: precisely because we still remember the world of yesterday, while the world of today is not foreign to us either. That puts us in a rare position, and we should use that experience and that opportunity to help shape the world of tomorrow. Artificial intelligence, digital education, automation, social media, and the broader question of how we want to work, learn, and think in the future are not forces of nature simply crashing down on us like a storm from a clear sky. They are choices. And to be honest: we are living through some of the most fascinating times imaginable. We are witnessing one of the greatest technological transformations in human history. So let us not just consume it, fear it, or passively comment on it. Let us actually use it — and shape it.