The Banach-Tarski Paradox

The Paradox of AI's Imperfections

There's a classic math puzzle from the 1930s called the Halting Problem. It asks whether any algorithm can guarantee, ahead of time, that a computer program will ever finish. To clarify: given a specific program and input, you can often check whether it halts just by running it (if it halts, you'll eventually see it finish). The catch comes when we want a single, general algorithm that can determine, for any program and any input, whether that program halts on that input. Alan Turing proved that no such algorithm can exist, i.e., it's impossible to create an algorithm that correctly decides whether arbitrary programs halt on a given input. So, what this means for us everyday folks is that no matter how much info we throw at AI during training, there's stuff it just can't solve.
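
To see why, here's a minimal Python sketch of Turing's diagonal argument. The halts function is hypothetical (that's the whole point), and troublemaker is just an illustrative name for the program that breaks it:

```python
def halts(program, data):
    """Hypothetical oracle: returns True if program(data) eventually finishes.
    Turing proved no such function can exist for arbitrary programs."""
    raise NotImplementedError("No general implementation is possible.")


def troublemaker(program):
    # Ask the oracle what the program would do when fed its own source...
    if halts(program, program):
        # ...and do the opposite: loop forever if the oracle says it halts.
        while True:
            pass
    else:
        # ...or halt immediately if the oracle says it loops forever.
        return


# Now ask: does troublemaker(troublemaker) halt?
# If halts() answers "yes", troublemaker loops forever; if it answers "no",
# troublemaker halts. Either way the oracle is wrong, so it cannot exist.
```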

Researchers from Cambridge and Oslo made this concrete in 2022: even neural networks have fundamental limits. When developers aim to make AI models smarter and more precise, there's a catch. For many problems, a neural network that solves the task accurately can be shown to exist, yet there is no foolproof algorithm guaranteed to actually find it. The hard truth for us is that even after rigorous training with vast amounts of relevant data, these models might still trip up.

As our reliance on AI grows, acknowledging and accepting its fallibility becomes increasingly crucial. Unlike AI, we humans can realize when we f-up, learn from our mistakes, and course-correct accordingly. So as AI becomes an everyday part of our lives, we need to stay cognizant and not evolve into a species waiting for that next AI-generated pellet in a numb-minded trance. Curiosity, in this case, should not kill the cat, even if AI attempts to do so…

For kicks, I asked Midjourney to visualize the Banach-Tarski paradox. The image in this write-up is the result. Simply put, the paradox states that you could mathematically split, say, a chocolate ball into pieces and then reassemble them into not just one, but two chocolate balls of the same size. The catch is that the Banach-Tarski paradox splits the ball into a finite number of infinitely intricate, non-measurable pieces (scattered clouds of points rather than physical chunks), which makes it physically impossible yet mathematically correct. This tells us that sometimes our perceptions (in this case, our perception of volume) can mislead us, and, as this quick experiment shows, so can AI.
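
For the curious, the formal statement can be sketched roughly like this (standard notation, not the full construction):

```latex
% Banach-Tarski, roughly: the solid ball B in R^3 can be cut into finitely
% many disjoint pieces A_1, ..., A_n and reassembled, using only rotations
% and translations g_i, into two balls congruent to the original.
\[
B = A_1 \cup A_2 \cup \cdots \cup A_n \quad \text{(pairwise disjoint)},
\]
\[
g_1 A_1 \cup \cdots \cup g_k A_k \cong B
\quad\text{and}\quad
g_{k+1} A_{k+1} \cup \cdots \cup g_n A_n \cong B.
\]
% The pieces A_i are non-measurable sets, which is why no notion of volume
% is actually violated and why the cutting cannot be carried out physically.
```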

