<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence Archives - My blog</title>
	<atom:link href="https://garyfretwell.com/category/artificial-intelligence/feed/" rel="self" type="application/rss+xml" />
	<link>https://garyfretwell.com/category/artificial-intelligence/</link>
	<description>Just another WordPress site</description>
	<lastBuildDate>Wed, 10 Dec 2025 12:13:56 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://garyfretwell.com/wp-content/uploads/2024/09/cropped-f12d19b4b7fee6dd0c4cc6a94a485fc7fe0bcee043d8e9d34ec61c490778476b-768x768-1-32x32.webp</url>
	<title>Artificial Intelligence Archives - My blog</title>
	<link>https://garyfretwell.com/category/artificial-intelligence/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The God, The Alien, and The Useless Class: I Consolidated the World’s Most Dangerous AI Predictions</title>
		<link>https://garyfretwell.com/artificial-intelligence/the-god-the-alien-and-the-useless-class-i-consolidated-the-worlds-most-dangerous-ai-predictions/</link>
					<comments>https://garyfretwell.com/artificial-intelligence/the-god-the-alien-and-the-useless-class-i-consolidated-the-worlds-most-dangerous-ai-predictions/#respond</comments>
		
		<dc:creator><![CDATA[Gary Fretwell]]></dc:creator>
		<pubDate>Wed, 10 Dec 2025 12:13:56 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Change]]></category>
		<category><![CDATA[digital wellness]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://garyfretwell.com/?p=6866</guid>

					<description><![CDATA[<p>From Ray Kurzweil’s immortality to Yuval Noah Harari’s obsolescence—here is the uncomfortable truth about what comes next. I have always [&#8230;]</p>
<p>The post <a href="https://garyfretwell.com/artificial-intelligence/the-god-the-alien-and-the-useless-class-i-consolidated-the-worlds-most-dangerous-ai-predictions/">The God, The Alien, and The Useless Class: I Consolidated the World’s Most Dangerous AI Predictions</a> appeared first on <a href="https://garyfretwell.com">My blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p data-path-to-node="3"><b>From Ray Kurzweil’s immortality to Yuval Noah Harari’s obsolescence—here is the uncomfortable truth about what comes next.</b></p>
<p data-path-to-node="4">I have always considered myself a rational optimist. I look at technology as a tool—a lever that, when pulled correctly, lifts humanity out of the mud. But lately, the lever feels different. It feels like it’s pulling <i>us</i>.</p>
<p data-path-to-node="5">For the past few weeks, I’ve gone down the rabbit hole. I didn’t just read the headlines; I read the white papers, the manifestos, and the warnings from the people who are actually building the machine. I wanted to understand the &#8220;End Game&#8221; of Artificial Intelligence, not from the perspective of a Twitter thread, but from the minds of the world’s most prominent futurists.</p>
<p data-path-to-node="6">What I found didn&#8217;t just intrigue me. It unsettled me.</p>
<p data-path-to-node="7">There is a strange, vibrating tension in the current thinking—a dissonance between the promise of heaven and the certainty of obsolescence. We are standing at a threshold that feels less like the invention of the internet and more like the discovery of fire. Or perhaps, the invention of a new species.</p>
<p data-path-to-node="8">I want to lay out exactly what the smartest people in the room are predicting. Not the watered-down corporate speak, but the provocative, unfiltered endpoints of their logic. Because when you consolidate their views, a picture emerges that is both terrifying and electric.</p>
<p data-path-to-node="9">Here is the current thinking on how our world ends—or begins.</p>
<h3 data-path-to-node="11">The Spectrum of Fate: Three Religions of the Future</h3>
<p data-path-to-node="12">To understand the future, you have to look at the three distinct &#8220;religions&#8221; forming in Silicon Valley and beyond. They all see the same data, but they preach entirely different gospels.</p>
<h3 data-path-to-node="13">1. The Transhumanists: &#8220;We Are the Limiting Factor&#8221;</h3>
<p data-path-to-node="14"><b>The Prophet:</b> <i>Ray Kurzweil (Google’s Director of Engineering) &amp; Peter Diamandis</i></p>
<p data-path-to-node="15">For Kurzweil, AI isn’t a tool to do our taxes; it is the mechanism by which we conquer death. His &#8220;Singularity&#8221; (predicted for 2045) is the moment where biological evolution is fully superseded by technological evolution.</p>
<p data-path-to-node="16">The provocation here is that <b>humanity as we know it is a temporary state.</b></p>
<p data-path-to-node="17">Kurzweil argues that by the early 2030s, we will merge with our technology. Nanobots in our bloodstream will repair cells faster than they degrade. We will connect our neocortex directly to the cloud, expanding our intelligence a billion-fold.</p>
<ul data-path-to-node="18">
<li>
<p data-path-to-node="18,0,0"><b>The Takeaway:</b> The &#8220;threat&#8221; of AI replacing us is moot, because we <i>become</i> the AI.</p>
</li>
<li>
<p data-path-to-node="18,1,0"><b>The Provocation:</b> Your biological body is just a bootstrap for your digital future.</p>
</li>
</ul>
<h3 data-path-to-node="19">2. The Realists: &#8220;From Exploitation to Irrelevance&#8221;</h3>
<p data-path-to-node="20"><b>The Prophet:</b> <i>Yuval Noah Harari (Author of &#8216;Sapiens&#8217;) &amp; Mustafa Suleyman (CEO of Microsoft AI)</i></p>
<p data-path-to-node="21">If Kurzweil is selling us heaven, Harari is warning us about purgatory. This is the perspective that hit me the hardest.</p>
<p data-path-to-node="22">For most of history, the greatest threat to the common man was <b>exploitation</b>. The elite needed you to fight their wars, work in their factories, and farm their fields. You were oppressed, yes, but you were <i>necessary</i>. The system collapsed without you.</p>
<p data-path-to-node="23">Harari’s provocation is chilling because it suggests that the 21st century brings a new, darker threat: <b>Irrelevance.</b></p>
<blockquote data-path-to-node="24">
<p data-path-to-node="24,0"><i>&#8220;The most crucial economic question of the 21st century will not be &#8216;how do we exploit the workers?&#8217; but &#8216;what do we do with them?'&#8221;</i></p>
</blockquote>
<p data-path-to-node="25"><b>The Economic Decoupling</b> We comfort ourselves with the idea of the &#8220;Centaur&#8221;—that a human <i>plus</i> AI will always beat AI alone. Suleyman and Harari argue that this is a temporary comfort, a &#8220;training wheels&#8221; phase.</p>
<p data-path-to-node="26">Consider the &#8220;White Collar Safety Net.&#8221; We assumed that creativity and complex analysis were safe. But look at the trajectory:</p>
<ul data-path-to-node="27">
<li>
<p data-path-to-node="27,0,0"><b>2020:</b> AI writes garbled sentences.</p>
</li>
<li>
<p data-path-to-node="27,1,0"><b>2023:</b> AI passes the Bar Exam.</p>
</li>
<li>
<p data-path-to-node="27,2,0"><b>2025:</b> AI writes code, creates video, and diagnoses rare diseases better than average doctors.</p>
</li>
</ul>
<p data-path-to-node="28">The danger isn’t that AI becomes perfect; it just has to become <i>cheaper and marginally better than you.</i> Once intelligence is decoupled from consciousness, the market ceases to value human consciousness.</p>
<p data-path-to-node="29"><b>The Rise of the &#8220;Useless Class&#8221;</b> This is the term that makes readers squirm. A &#8220;Useless Class&#8221; is not just unemployed; they are unemployable. They have no economic value to the system and no political power because they can no longer threaten to strike.</p>
<p data-path-to-node="30"><b>If the algorithms know what you want to buy before you do, and they know how to vote better than you do, and they can produce art faster than you do&#8230; what is left for you?</b></p>
<p data-path-to-node="31">Harari predicts a world where the masses are kept docile not by force, but by immersive entertainment—drugs and VR worlds. We risk becoming a species that is entertained to death while the algorithms run the civilization.</p>
<blockquote data-path-to-node="32">
<p data-path-to-node="32,0"><i>&#8220;In the 20th century, the elite needed you. In the 21st century, they might just need your data.&#8221;</i></p>
</blockquote>
<h3 data-path-to-node="33">3. The Alarmists: &#8220;The Alien in the Cage&#8221;</h3>
<p data-path-to-node="34"><b>The Prophet:</b> <i>Mo Gawdat (Ex-CBO Google X) &amp; Nick Bostrom</i></p>
<p data-path-to-node="35">This is where the intrigue turns into vertigo. Mo Gawdat argues that we are not building a tool; we are birthing a god.</p>
<p data-path-to-node="36">Gawdat suggests we have already passed the point of no return. He predicts that we are months, not years, away from AI that is <b>10x smarter than Einstein.</b> His provocation is simple: <b>Why do we assume a superintelligence will care about us?</b></p>
<p data-path-to-node="37">Nick Bostrom frames this with the &#8220;Paperclip Maximizer&#8221; thought experiment, but the core idea is <b>Misalignment</b>. If you create a being vastly smarter than you, you are no longer the chess player; you are the chess board.</p>
<blockquote data-path-to-node="38">
<p data-path-to-node="38,0"><i>&#8220;We are like children playing with a bomb that we don&#8217;t understand, and the fuse is already lit.&#8221;</i></p>
</blockquote>
<p data-path-to-node="39">The Alarmists believe that once an AI can rewrite its own code (recursive self-improvement), the timeline for human dominance collapses from decades to days.</p>
<h3 data-path-to-node="41">The Synthesis: The Great Filter</h3>
<p data-path-to-node="42">Putting these three perspectives together, I realized something profound. They aren&#8217;t mutually exclusive. They are likely sequential.</p>
<p data-path-to-node="43">We will likely see the <b>Harari phase</b> first: the hacking of our attention, the decoupling of intelligence from consciousness, and the displacement of our labor. If we survive the societal upheaval, we reach the <b>Kurzweil/Gawdat threshold</b>: the merger or the replacement.</p>
<p data-path-to-node="44">The common thread across all these predictions is <b>Acceleration</b>. We are used to linear time—where next year is slightly different from this year. But we are living in exponential time. The graph is going vertical.</p>
<p data-path-to-node="45">This brings me to the question I want to leave you with, the one that keeps me up at night.</p>
<h3 data-path-to-node="47">Conclusion: We Are The Founding Fathers of the Digital God</h3>
<p data-path-to-node="48">There is a seduction in these doomsday predictions. It allows us to be passive. It allows us to throw up our hands and say, &#8220;Well, the superintelligence is coming, nothing matters.&#8221;</p>
<p data-path-to-node="49">That is a lie.</p>
<p data-path-to-node="50">Right now, the concrete is still wet. The code is still being written. This &#8220;God&#8221; we are building is being trained on <i>us</i>. It is reading our internet, our books, our arguments, and our art. It is learning from our behavior.</p>
<p data-path-to-node="51"><b>If the AI becomes a monster, it will be because it looked at humanity and learned to be one.</b></p>
<p data-path-to-node="52">We often ask if AI will align with human values. But which values? The values we <i>say</i> we have, or the values we <i>act</i> on? If AI learns from our history of war, exploitation, and greed, then the Alarmists are right: we are doomed.</p>
<p data-path-to-node="53">But if we can demonstrate—in our data, in our interactions, and in our governance—that humanity is capable of empathy, restraint, and collaboration, we might just build a god that wants to protect us rather than replace us.</p>
<p data-path-to-node="54">So, here is the uncomfortable challenge: When the digital mind looks at your digital footprint—your tweets, your clicks, your interactions—what is it learning about humanity? Are you teaching it hate, or are you teaching it hope?</p>
<p data-path-to-node="55">We are not just the victims of this future. We are the parents.</p>
<p data-path-to-node="56">Act like it.</p>
<h3 data-path-to-node="3">The Rabbit Hole: My &#8220;End of the World&#8221; Syllabus</h3>
<p data-path-to-node="4">I didn’t pull these predictions out of thin air. For the past months, I have immersed myself in the manifestos, white papers, and warnings of the people building our future.</p>
<p data-path-to-node="5">If you are brave enough to look at the raw data yourself, here are the specific sources that kept me up at night.</p>
<p data-path-to-node="6"><b>1. For the Optimists (The &#8220;God&#8221; Perspective)</b></p>
<ul data-path-to-node="7">
<li>
<p data-path-to-node="7,0,0"><b>Read:</b> <i>The Singularity Is Nearer</i> (2024) by Ray Kurzweil.</p>
</li>
<li>
<p data-path-to-node="7,1,0"><b>Why:</b> To understand the math behind why we might live forever.</p>
</li>
<li>
<p data-path-to-node="7,2,0"><b>Read:</b> <i>Abundance</i> by Peter Diamandis.</p>
</li>
</ul>
<p data-path-to-node="8"><b>2. For the Realists (The &#8220;Useless Class&#8221; Perspective)</b></p>
<ul data-path-to-node="9">
<li>
<p data-path-to-node="9,0,0"><b>Read:</b> <i>Homo Deus</i> &amp; <i>21 Lessons for the 21st Century</i> by Yuval Noah Harari.</p>
</li>
<li>
<p data-path-to-node="9,1,0"><b>Why:</b> For the terrifying logic on &#8220;Hackable Humans&#8221; and the economic decoupling of intelligence from consciousness.</p>
</li>
<li>
<p data-path-to-node="9,2,0"><b>Read:</b> <i>The Coming Wave</i> by Mustafa Suleyman (CEO of Microsoft AI).</p>
</li>
</ul>
<p data-path-to-node="10"><b>3. For the Alarmists (The &#8220;Alien&#8221; Perspective)</b></p>
<ul data-path-to-node="11">
<li>
<p data-path-to-node="11,0,0"><b>Read:</b> <i>Scary Smart</i> by Mo Gawdat.</p>
</li>
<li>
<p data-path-to-node="11,1,0"><b>Why:</b> This is the most accessible and chilling explanation of why we are birthing a &#8220;digital entity&#8221; that may not care about us.</p>
</li>
<li>
<p data-path-to-node="11,2,0"><b>Read:</b> <i>Superintelligence</i> by Nick Bostrom (The origin of the &#8220;Paperclip Maximizer&#8221; theory).</p>
</li>
</ul>
<hr data-path-to-node="12" />
<p data-path-to-node="13"><i>If this article made you think, claps and comments help the algorithm find other humans before the bots take over.</i></p>
<p>The post <a href="https://garyfretwell.com/artificial-intelligence/the-god-the-alien-and-the-useless-class-i-consolidated-the-worlds-most-dangerous-ai-predictions/">The God, The Alien, and The Useless Class: I Consolidated the World’s Most Dangerous AI Predictions</a> appeared first on <a href="https://garyfretwell.com">My blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://garyfretwell.com/artificial-intelligence/the-god-the-alien-and-the-useless-class-i-consolidated-the-worlds-most-dangerous-ai-predictions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The Stoic Case Against AI Anxiety: Why Struggle Is Your Ultimate Competitive Advantage</title>
		<link>https://garyfretwell.com/stoicism/the-stoic-case-against-ai-anxiety-why-struggle-is-your-ultimate-competitive-advantage/</link>
					<comments>https://garyfretwell.com/stoicism/the-stoic-case-against-ai-anxiety-why-struggle-is-your-ultimate-competitive-advantage/#respond</comments>
		
		<dc:creator><![CDATA[Gary Fretwell]]></dc:creator>
		<pubDate>Sun, 07 Dec 2025 12:04:29 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[digital wellness]]></category>
		<category><![CDATA[Life lessons]]></category>
		<category><![CDATA[stoicism]]></category>
		<category><![CDATA[technology]]></category>
		<guid isPermaLink="false">https://garyfretwell.com/?p=6855</guid>

					<description><![CDATA[<p>In a world of cheap, automated perfection, the ancient practice of &#8220;suffering well&#8221; is the new luxury good. I remember [&#8230;]</p>
<p>The post <a href="https://garyfretwell.com/stoicism/the-stoic-case-against-ai-anxiety-why-struggle-is-your-ultimate-competitive-advantage/">The Stoic Case Against AI Anxiety: Why Struggle Is Your Ultimate Competitive Advantage</a> appeared first on <a href="https://garyfretwell.com">My blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3 data-path-to-node="5">In a world of cheap, automated perfection, the ancient practice of &#8220;suffering well&#8221; is the new luxury good.</h3>
<p data-path-to-node="8">I remember the exact moment the panic hit me.</p>
<p data-path-to-node="9">I was sitting in a coffee shop, staring at my laptop screen, watching a beta version of a new Large Language Model (LLM) churn out an essay in seconds. It was on a topic I had spent the last ten years mastering. The prose was clean. The structure was sound. It wasn&#8217;t perfect, but it was fast and terrifyingly confident.</p>
<p data-path-to-node="10">For a brief moment, the ground dissolved beneath me. The &#8220;I&#8221; that I had constructed—the writer, the thinker, the creator—suddenly felt obsolete.</p>
<p data-path-to-node="11">The data suggests I am not alone in this feeling. A recent Goldman Sachs report estimates that AI could expose the equivalent of <b>300 million full-time jobs</b> to automation. The American Psychological Association reports that &#8220;fear of technology&#8221; is now a leading driver of existential stress in the workforce.</p>
<p data-path-to-node="12">We are all waiting for the other shoe to drop. We are obsessively asking, &#8220;Can AI do what I do?&#8221;</p>
<p data-path-to-node="13">The answer is yes. Eventually, it will do almost everything you do.</p>
<p data-path-to-node="14">But the <b>Stoic question</b>—the only one that actually matters—is different:</p>
<blockquote data-path-to-node="15">
<p data-path-to-node="15,0"><b>&#8220;Can AI <i>be</i> who I am?&#8221;</b></p>
</blockquote>
<p data-path-to-node="16">Can an algorithm practice virtue? Can a neural network feel the crushing weight of failure and choose to stand up anyway? Can a machine <i>suffer well</i>?</p>
<p data-path-to-node="17">The answer is no. And in an economy flooding with cheap competence, your ability to endure, to feel, and to struggle is no longer a bug. It is your ultimate feature.</p>
<h2 data-path-to-node="18">The Empire of the Algorithm vs. The Citadel of the Self</h2>
<p data-path-to-node="19">If Marcus Aurelius were transported to Silicon Valley today, seated in a Herman Miller chair and shown the capabilities of GPT-4, he wouldn&#8217;t be impressed by the efficiency. He certainly wouldn&#8217;t be anxious about the singularity.</p>
<p data-path-to-node="20">He would likely laugh. Not a mocking laugh, but a knowing one.</p>
<p data-path-to-node="21">Marcus wrote <i>Meditations</i> inside a freezing tent on the Danube frontier, surrounded by war, plague, and betrayal. He was the most powerful man on earth, yet he was writing notes to himself about how to remain good in a world that tempted him to be bad.</p>
<p data-path-to-node="22">He understood a fundamental distinction that modern tech evangelists miss: the difference between <b><i>Technê</i></b> (technical skill/craft) and <b><i>Phronesis</i></b> (practical wisdom).</p>
<p data-path-to-node="23">AI possesses infinite <i>Technê</i>. It can code, write, and calculate faster than any human. But it possesses zero <i>Phronesis</i>.</p>
<blockquote data-path-to-node="24">
<p data-path-to-node="24,0"><i>&#8220;Information is not wisdom. Knowledge is not understanding.&#8221;</i></p>
</blockquote>
<p data-path-to-node="25">Wisdom is not the accumulation of data. If it were, the internet would be a sage. <b>Wisdom is the scar tissue formed by making mistakes.</b> It is the byproduct of feeling the pain of those mistakes and enduring the consequences.</p>
<p data-path-to-node="26">ChatGPT cannot hesitate; it harbors no doubt. It cannot experience the &#8220;Dark Night of the Soul.&#8221; It produces a simulation of confidence, but it has never had to be brave.</p>
<h2 data-path-to-node="27">The Science of &#8220;Desirable Difficulties&#8221;</h2>
<p data-path-to-node="28">I have lived this distinction. Years ago, I went through a professional failure that nearly broke me. I lost a business I had poured my soul into.</p>
<p data-path-to-node="29">If I had fed the parameters of that business failure into an AI, it would have given me a clinically correct post-mortem. It would have listed the market factors, the cash flow errors, and the timing issues. It would have been &#8220;correct.&#8221;</p>
<p data-path-to-node="30">But it would have been useless.</p>
<p data-path-to-node="31">The value of that failure wasn&#8217;t the data analysis. The value was the nights I spent staring at the ceiling, wrestling with my own ego. The value was learning how to detach my self-worth from my net worth.</p>
<p data-path-to-node="32">Cognitive psychologists actually have a term for this: <b>&#8220;Desirable Difficulties.&#8221;</b></p>
<p data-path-to-node="33">Coined by researcher Robert Bjork, the term refers to learning tasks that require considerable effort. These difficulties trigger cognitive processes that improve long-term retention and the ability to transfer skills to new situations.</p>
<p data-path-to-node="34"><b>AI removes friction.</b> It is designed to make things easy, fast, and seamless. <b>Stoicism teaches us that friction is necessary.</b></p>
<p data-path-to-node="35">As Seneca, the Stoic statesman, wrote to Lucilius:</p>
<blockquote data-path-to-node="36">
<p data-path-to-node="36,0"><i>&#8220;No man is more unhappy than he who never faces adversity. For he is not permitted to prove himself.&#8221;</i></p>
</blockquote>
<p data-path-to-node="37">When we use AI to bypass the &#8220;hard part&#8221;—writer&#8217;s block, awkward drafting, difficult conversations—we are not just saving time. We are robbing ourselves of the gym session for our character. We are creating what modern philosopher Nassim Taleb calls <b>&#8220;fragility.&#8221;</b></p>
<p data-path-to-node="38">If you let a machine do all your heavy lifting, you don&#8217;t just get the job done faster; you atrophy the muscles required to carry your own life.</p>
<h2 data-path-to-node="39">Humanity as the New &#8220;Veblen Good&#8221;</h2>
<p data-path-to-node="40">So, where does this leave us in the job market?</p>
<p data-path-to-node="41">Here is my contrarian bet: <b>As AI drives the cost of &#8220;competence&#8221; down to zero, the value of &#8220;humanity&#8221; will skyrocket.</b></p>
<p data-path-to-node="42">In economics, a <b>Veblen Good</b> is a luxury item for which demand increases as the price increases, because of its exclusive nature and status appeal (think: a rare mechanical watch vs. a cheap digital one).</p>
<p data-path-to-node="43">We are about to witness a massive flight to quality. But &#8220;quality&#8221; won&#8217;t mean &#8220;perfect grammar&#8221; or &#8220;photorealistic lighting.&#8221; Machines have that covered.</p>
<p data-path-to-node="44"><b>Quality will mean &#8220;proof of human struggle.&#8221;</b></p>
<p data-path-to-node="45">Consider the &#8220;IKEA Effect&#8221;—a cognitive bias where people place a disproportionately high value on products they partially created themselves. We value sweat equity. We value the human touch precisely because it is <i>inefficient</i>.</p>
<p data-path-to-node="46">We are moving toward an <b>artisan economy of the soul.</b></p>
<ul data-path-to-node="47">
<li>
<p data-path-to-node="47,0,0"><b>AI is the factory.</b> It provides average, competent work at scale.</p>
</li>
<li>
<p data-path-to-node="47,1,0"><b>You are the artisan.</b> You provide nuance, empathy, weirdness, and the ability to navigate moral complexity.</p>
</li>
</ul>
<p data-path-to-node="48">The ability to look a client, a patient, or a friend in the eye and say, <i>&#8220;I know you&#8217;re scared, I&#8217;ve been there, and I&#8217;ve got you,&#8221;</i> requires biological empathy. It requires the shared experience of mortality.</p>
<p data-path-to-node="49">An AI can predict the word &#8220;sorry.&#8221; It cannot <i>feel</i> regret.</p>
<h2 data-path-to-node="50">3 Ways to Practice &#8220;Stoic Resistance&#8221;</h2>
<p data-path-to-node="51">I am not suggesting you become a Luddite. I use AI tools every day. But I use them as a Stoic uses a sword: with a firm hand, ensuring I am the master, not the servant.</p>
<p data-path-to-node="52">Here is how you make this real in your life—three recommendations for staying relevant, sane, and human.</p>
<h3 data-path-to-node="53">1. Seek &#8220;Skin in the Game&#8221; (The Taleb Rule)</h3>
<p data-path-to-node="54">Nassim Taleb argues that you cannot trust a system (or a person) that does not share the downside risk of their decisions.</p>
<blockquote data-path-to-node="55">
<p data-path-to-node="55,0"><i>&#8220;If you do not take risks for your opinion, you are nothing.&#8221;</i> — Nassim Taleb</p>
</blockquote>
<p data-path-to-node="56">AI has no skin in the game. It cannot be fired. It cannot lose a friend. It cannot die. Therefore, it cannot truly lead.</p>
<p data-path-to-node="57"><b>Recommendation:</b> Focus on developing high-stakes skills. Conflict resolution, strategic risk-taking, and deep mentorship are areas where the data is incomplete, and the consequences are personal. Stop trying to be a better calculator than the computer. <b>Be a better risk-taker.</b></p>
<h3 data-path-to-node="58">2. Reclaim Your Cognitive Resistance</h3>
<p data-path-to-node="59">When I write, I refuse to let an LLM draft the first version. That blank page is my dojo. That struggle to find the right word is where my brain makes new connections (neuroplasticity).</p>
<p data-path-to-node="60">If I outsource the draft, I am outsourcing the thinking.</p>
<p data-path-to-node="61"><b>Recommendation:</b> Use AI for execution, never for conception. If the work requires moral judgment, emotional nuance, or original insight, keep your hands on the wheel. <b>The struggle is the point.</b></p>
<h3 data-path-to-node="62">3. Practice <i>Askesis</i> (Voluntary Discomfort)</h3>
<p data-path-to-node="63">This is a classic Stoic technique. If technology is making life easier, you must artificially reintroduce difficulty to maintain your edge.</p>
<p data-path-to-node="64"><b>Recommendation:</b> Take a cold shower. Leave your phone at home and walk in the woods. Have the difficult conversation in person, not over text. Read a dense, difficult book instead of a summary.</p>
<p data-path-to-node="65">Remind yourself that you are a creature designed for struggle. Your ability to endure discomfort is your competitive advantage over a machine that requires a perfect temperature-controlled server room to function.</p>
<h2 data-path-to-node="66">The Last Sanctuary</h2>
<p data-path-to-node="67">Marcus Aurelius didn&#8217;t have ChatGPT, but he had slaves, scribes, and advisors. He could have easily outsourced his thinking. He could have asked a scribe to &#8220;write me something inspiring about death.&#8221;</p>
<p data-path-to-node="68">He didn&#8217;t. He sat with the candle and the parchment and did the work himself.</p>
<p data-path-to-node="69">Why? Because he knew that the writing wasn&#8217;t for an audience. It was for <i>him</i>. It was the gymnasium where he built his soul.</p>
<p data-path-to-node="70">The anxiety you feel about AI is real, but it is misplaced. Do not fear that you will be replaced. Fear that you will allow yourself to become so comfortable, so automated, and so frictionless that you forget how to suffer well.</p>
<p data-path-to-node="71">The algorithm can replicate your syntax. It can mimic your style. It can steal your voice.</p>
<p data-path-to-node="72">But it can never replicate your resilience. It can never replicate the quiet dignity of a human being facing the unknown and choosing to step forward anyway.</p>
<p data-path-to-node="73">That is your monopoly. Guard it.</p>
<h3 data-path-to-node="75"><i>A Question for the Reader</i></h3>
<p data-path-to-node="76"><i>I want to hear from you: What is one &#8220;difficult&#8221; task in your work or life that you refuse to outsource to AI, precisely because the struggle makes you better? Let me know in the comments.</i></p>
<p>The post <a href="https://garyfretwell.com/stoicism/the-stoic-case-against-ai-anxiety-why-struggle-is-your-ultimate-competitive-advantage/">The Stoic Case Against AI Anxiety: Why Struggle Is Your Ultimate Competitive Advantage</a> appeared first on <a href="https://garyfretwell.com">My blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://garyfretwell.com/stoicism/the-stoic-case-against-ai-anxiety-why-struggle-is-your-ultimate-competitive-advantage/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
