AI is going to take over the world! It’s going to take all the jobs, and life will cease to exist as we know it. That was the opening keynote at a conference I went to. I was in disbelief that this was the message being pushed out to a global group of AI professionals. Ironically, the speaker who followed informed us about the new agentic architecture for AI, which will allow us to build more efficient and capable AI. …perhaps so that it can take over the world faster? Truly, that made me chuckle. It’s one of those moments where I had to wonder if the organizers actually knew what the first speaker was going to say, or if this was some rogue AI doomsday evangelist who misrepresented himself to land the opening speaker spot … or maybe he was from the future.
Ever since ChatGPT disrupted the market a couple of years ago, there has been little talk of any technology that didn’t include AI to some degree. The aptly named LLM sparked plenty of chatter, and not without reason. It is a historic breakthrough with a notable before and after. A computer that can respond in natural language and carry on a conversation has never been seen before. This has already transformed the way people interact with technology and how information is transferred, and in some cases things now get done in a fraction of the time they would have taken six months earlier. This is a shift that will separate those who use AI from those who don’t, but will it separate human from machine and ultimately lead to the demise of civilization and a war between man and machine that only John Connor can win? Before I get further down that road, I’m going to take a couple of steps back.
AI is talked about like it is a brand new technology, but AI efforts go all the way back to the 1950s. Even into the 1980s, AI was largely powered by manually created lists: the system would match patterns in those lists and arrive at a result based on the matches. Back then, AI was a pattern matching technology, and things haven’t changed as much as one might think. AI is still about pattern matching; it’s just a lot more complex. Pattern matching is layered on top of pattern matching and compared against other matched results, sometimes influenced by human intervention so that the patterns AI finds are also patterns we recognize as humans. These stacked layers of pattern matching systems evolved into neural networks, which in turn developed into deep learning. AI has a lot of buzzwords, but it’s still about pattern matching.

GenAI comes out of deep learning, and ChatGPT was the real game changer, with other LLMs to follow. GenAI generates new content, as opposed to finding a pattern and predicting a future pattern from historical ones. LLMs use tokens, which are words or parts of words, and when an LLM generates new content it is matching tokens that are commonly used together. It knows they are commonly used together because it learned from data that humans created. AI mirrors our own behaviors, mannerisms, and thoughts, which can lead to the appearance of sentience. However, there is a big difference between mimicking and understanding; the Chinese room thought experiment is a great illustration of this. There are many examples, especially in image generation, that show a notable lack of understanding of what is actually being generated. As humans we tend to attach emotion to words, which can create strong vibes of sentience, but words that sound like sentience and actual sentience are not the same.
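To make the “tokens that are commonly used together” idea concrete, here is a deliberately tiny sketch. This is a toy bigram counter, not how a real LLM works internally (real models learn neural weights over subword tokens), but it shows the core intuition: generation is just predicting what usually comes next in human-written text. The training sentence and token names are made up for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" written by a human.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count how often each token follows each other token (a bigram table).
follows = defaultdict(Counter)
tokens = training_text.split()
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1

def next_token(token: str) -> str:
    """Pick the token most often seen after `token` in the training data."""
    return follows[token].most_common(1)[0][0]

# "Generate" a continuation by repeatedly choosing the most common follower.
out = ["the"]
for _ in range(4):
    out.append(next_token(out[-1]))

print(" ".join(out))  # prints: the cat sat on the
```

The toy model produces fluent-looking output purely from co-occurrence counts, with no understanding of cats, mats, or anything else, which is exactly the mimicking-versus-understanding distinction above, just at a vastly smaller scale.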
When we compare today’s AI to the human brain, which it is designed to mimic, it falls notably short. So, I’m sorry to disappoint those of you prepping for the Skynet takeover, but you’re going to have to wait a bit longer.
GenAI is really impressive right now, and part of the reason is that it has decades of human-created training data behind it. The net new incremental training data sets are going to be dramatically smaller, which perhaps explains part of why OpenAI struggled to make the same strides from GPT-4.0 to 4.1 as it did from 3.5 to 4.0. AI models will train AI models, and that does lead to improvement, but if we are looking down the road at the possibility of AI creating truly novel content and evolving itself into general intelligence, I see that possibility dropping significantly as time goes on. AI will not take all the jobs, but the jobs will change. Humans are historically averse to change, probably because change leads to some fallout and some people get left behind. I don’t want to minimize that, but I do want to emphasize that there is plenty of opportunity in embracing and evolving with change. AI is being implemented in ways that minimize repetitive tasks, so humans will be needed to do the things AI is not good at. In addition to the jobs that support the upkeep and evolution of AI itself, I think we are going to see increased demand for soft skills like creativity, empathy, and strategy. AI will reduce the headcount for some positions, but the humans in the loop will remain critical. As AI continues to evolve, the need for some jobs will shrink, but other opportunities that we can’t imagine today will appear. Imagine describing a mobile app developer job to someone in the 90s. The closest thing I can remember is Super Mario on a Texas Instruments calculator, certainly not the ecosystem of opportunity that it is today.