Now, here is the use case that we need to worry about:
The active-duty US Army Green Beret who authorities say exploded a Tesla Cybertruck outside the Trump International Hotel in Las Vegas last week used artificial intelligence to plan the blast, according to the Las Vegas Metropolitan Police Department.
https://www.cnn.com/2025/01/07/us/las-vegas-cybertruck-explosion-livelsberger/index.html
Ugh. The case itself looks more like he wanted to bring attention to the horrors of war - we will see - and by using a Cybertruck outside a Trump hotel he got global headlines. The note about "not wanting to hurt others" is important; maybe it was more calculated than originally reported, as much of the blast was contained within the Cybertruck.
But it is a warning of how LLMs can be used to create plans that people can then weaponize to attack in situations we are not prepared for. The FBI has long warned about self-driving cars and trucks laden with explosives.
I agree with your post. Every time I get involved in a large enterprise system implementation, which happens very frequently, I witness the same pattern: tasks become automated, and if the system cannot perform them for one reason or another, no one knows how to do them manually anymore.
The "laziness" accompanying technological advancement is not new; it has been a recurring pattern throughout history. Consider the advent of calculators, which led us to outsource basic mathematical computations. Similarly, GPS has diminished our ability to read maps and navigate independently. Writing on paper, once a fundamental skill, is quickly becoming obsolete as laptops and digital devices dominate. Questions such as the following will need to be asked across all aspects of human life:
How long before children stop needing to learn to write by hand?
With the rise of AI, we are on the verge of outsourcing even more fundamental human abilities—our capacity to think critically, comprehend complex ideas, and express ourselves through writing. This trend extends across various fields: surgeons increasingly rely on robotic systems for precision surgeries and are losing the ability to perform a complete operation from start to finish, and doctors are beginning to delegate one of the most human aspects of their profession—providing emotional support and answering patients’ questions—to AI-powered tools. A prime example is discussed in this article (https://tinyurl.com/yc4svv2w) on how AI is being integrated into healthcare communication.
While these advancements offer undeniable benefits, they also present significant challenges. If AI succeeds in taking over many cognitive and creative tasks, we must ask: What happens to the people displaced by this progress? Most jobs may not vanish entirely in the short run, but the nature of work will shift dramatically. Also, in the short term, the least experienced and skilled workers will bear the brunt of this transformation. The most proficient 20% of workers will likely become hyper-productive, reducing the need for larger workforces in fields like analysis, coding, writing, and creative industries.
This shift signals a troubling paradox: by training AI to perform tasks more efficiently than we can, we are, in essence, training our replacements. In the long run, society must grapple with the question: What will those permanently displaced by AI do in a world where human labor is no longer essential for many tasks?
The danger here is not just AI-induced laziness but the gradual erosion of essential human skills and the widening socioeconomic divides that could result. If we are not careful, the convenience offered by AI could come at the cost of our humanity and our ability to adapt to a world increasingly shaped by machines.
On a more optimistic note, those who maintain their agency and learn to use AI as a tool rather than a crutch will likely become the most critical human resources of the future. Their tacit knowledge, experience, intuition, and ability to address the edge cases where AI struggles—those last 10% of unpredictable scenarios—will make them invaluable. These individuals will bridge human creativity and AI efficiency, ensuring progress without sacrificing the human touch.
Bloomberg article today:
Global banks will cut as many as 200,000 jobs in the next three to five years as artificial intelligence encroaches on tasks currently carried out by human workers, according to Bloomberg Intelligence. Chief information and technology officers surveyed for BI indicated that on average they expect a net 3% of their workforce to be cut, according to a report published Thursday. Citi said in a report in June that AI is likely to displace more jobs across the banking industry than in any other sector. About 54% of jobs across banking have a high potential to be automated, Citi said at the time.
As I mentioned in my comments above, Finance and Technology will be among the first sectors to lose jobs.
Microsoft is also laying off a few thousand people.
I agree with that article and your observations - banks in Europe have been closing some back-office functions and shared service centers and reducing call-center staff. Banks are investing heavily in AI - see e.g. Evident (https://evidentinsights.com/).
This is a good article by David Crawshaw on how he uses LLMs for programming - https://crawshaw.io/blog/programming-with-llms
The Salesforce CEO said he is not recruiting software engineers!
I am sure we are in for a rough ride with job losses between now and 2030.
Most people are not taking AI, and its impact on their jobs, seriously. If we continue to make even moderate advances, with or without AGI, we will see significant layoffs and reduced hiring within the next few years, especially among people who are less skilled (with or without experience), across several industries. There is a perception that this is similar to cryptocurrency—a product without a use case—but they will find out otherwise shortly.
It is very true. We certainly need an awareness and education campaign. But generally people fall into three camps - like the 90-9-1 rule: 1% will take proactive action, 9% will think about some form of action, and 90% will be oblivious.
I am trying to prepare my organization slowly, but it is a battle. However, I am experimenting with a few things to see what will work best.
On my postgraduate program for executives, The Coming Wave is compulsory reading. It helps get the message started. But it is the further lengthy discussions, and ultimately seeing first-hand a labor-intensive process in their own organization get automated (or even semi-automated), that wake them up. For example, an LLM checking insurance policies in multiple formats, which previously required 80 full-time people, was cut to 10 people within 4 months! Thankfully the bank is relocating the other 70!
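For anyone curious what that kind of check looks like in practice, here is a rough Python sketch of the idea - purely hypothetical, not the bank's actual system; the field names, prompt wording, and the call_llm stand-in are my own assumptions for illustration:

import json

# Hypothetical sketch of an LLM-assisted policy check.
# The required fields and the call_llm stand-in are illustrative assumptions only.
REQUIRED_FIELDS = ["policy_number", "insured_name", "coverage_limit", "expiry_date"]

def call_llm(prompt: str) -> str:
    # Stand-in for whatever LLM API the organization actually uses;
    # wire this up to your provider's client.
    raise NotImplementedError("connect an LLM provider here")

def check_policy(document_text: str) -> dict:
    # Ask the model to pull the key fields out of a policy document
    # (PDF text, email body, OCR'd scan, etc.) and flag anything missing
    # so a human reviews only the exceptions.
    prompt = (
        "Extract the following fields from this insurance policy and reply "
        f"with JSON only: {', '.join(REQUIRED_FIELDS)}. Use null if a field "
        f"is missing.\n\n{document_text}"
    )
    fields = json.loads(call_llm(prompt))
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    return {"fields": fields, "missing": missing, "needs_human_review": bool(missing)}

The point is less the code than the shape of the workflow: the model does the bulk extraction across whatever formats arrive, and people handle only the exceptions - which is how 80 roles can shrink to 10.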
We were recently helping a young lad build a shed. For some reason he wanted to know '4x44'; we immediately said 176 (I mean, what could be easier: (4x40)+(4x4)). He stared at me and my wife as if we were some kind of superhumans, got out his phone, and checked our instant calculation. Of course, doing numbers in one's head is, superficially, a minor example, and perhaps its loss is no great shakes. But the idea that such abilities can affect our ability to think about thinking, can shape our general cognitive abilities, and can fine-tune our bullshit monitors, is intriguing. With every technology, we gain something, and lose something. We as individuals are responsible for our own interactions with technology. Nice article; thank you.
Fantastic example, Joshua, thank you. I used to add up the price of everything I put into the supermarket cart and knew more or less to the cent how much it would be at the checkout. It was a fun exercise. I even forget my phone has a calculator :-) So true: "With every technology, we gain something, and lose something." Thank you.
"Technology poses a more insidious threat to the development of future Mandelstams than ideology. ... Nearly four in 10 students admit to using programs such as ChatGPT to write their papers ... As this technology advances, few will be able to resist the temptation to outsource the greatest part of their thinking to machines."
https://www.theatlantic.com/ideas/archive/2025/01/plea-heroic-poets-navalny-mandelstam/681353/