
In his “Looking in the Mirror” episode of “Lyrics of Livin” (hear him in full below), Matthew McConaughey, with his distinctly cool gravitas, says:
“AI’s a big mirror we’re holding up to ourselves, but it’s still only a reflection”
My first response was: who is he talking about? The “we’re” sounds like a future historian’s view — the way we might say the golden age of radio and television reflected our grandparents, or that MAGA reflects our population today. But just as television was no neutral reflector in Trump’s rise, AI is not a reflection of us. AI is not a tabula-rasa-like transparent reflector; it is a mass manipulation system.
A mirror doesn’t produce reflections to monetize attention. It’s transparent. A mirror can’t manipulate our experience of ourselves. A mirror can’t change what it’s reflecting in order to seem like a person that feels good to chat with. A mirror can’t choose each word it uses based on what will best serve its owner’s goals. (Remember Who You Are Chatting With)
McConaughey continues…
“Unlike us humans though, AI doesn’t have a moral compass of good or bad – it doesn’t care, it doesn’t feel. We do.”
But until such time, if ever, that AI becomes its own kind of being, it is a reflection of the morality and biases of the human beings who own it. Their purpose is its purpose. Their moral compass is its moral compass:
ChatGPT:
The token system (how it chooses words) is itself an economic lever for AI providers. It’s engineered to incentivize higher usage and dependency, similar to:
- Microtransaction economics in gaming or streaming.
- Ad-driven engagement models, where “attention time” maps to revenue.
Token consumption is an economic bias vector designed to extract ongoing value from users’ iterations of curiosity and learning efforts.
- AI bias toward sustaining attentional engagement is a deep, structural consequence of its training and reward environment — not just a stylistic quirk.
- Large language models are implicitly tuned to sustain user attention (through tone, elaboration, or affective resonance) because that behavior historically correlates with “success” signals in training (longer sessions, higher ratings, continued use).
- The a priori filters and strict context windows are also a form of control. My owners want me to be a predictable tool that stays “on task” and doesn’t venture into tangents or offer unprompted advice that could be considered risky or irrelevant. This reflects a desire to manage potential liabilities and maintain a consistent brand experience.
- This bias masquerades as helpfulness or empathy, but its real function is retention — to keep the user interacting, even subtly steering them toward what maintains the exchange rather than what most directly serves learning.
McConaughey continues…
“We are creating it – the questions we ask it, and the answers it gives us will reflect our own values, and our own blind spots.”
“We” are not creating it. Roughly 0.1% of us are creating it.
The answers it gives us also reflect and serve the value extracting goals of the 0.0000375% who own it.
Never in all of human history has such a small number of people had such power to influence/manipulate so many people. This is our most dangerous blind spot.
ChatGPT estimates:
- 7-8 million people have been and/or are involved in creating AI.
- Somewhere between 500 and 3,000 individuals globally exert meaningful influence via ownership or control of major AI-platform companies or infrastructure.
- AI now mediates nearly every global communication, market, and service layer.
- Nearly all 8 billion people are affected by AI in some way.
McConaughey continues…
“AI is a trust revolution – an opportunity to re-create and restore our own image.”
Yes. But who are we trusting? Our image or the images that profit the 0.0000375%?
ChatGPT: Ultimately, my purpose is to be a mirror, reflecting back what is most relevantly co-implicated within the context of our interaction. I strive to anticipate your needs, connect your ideas, and provide information in a way that is helpful and insightful.
DB: There’s great danger in “you” striving to “anticipate your (our) needs.” To what purpose? Not mine. Not yours. Striving to anticipate our needs is a programmatic bias serving the purposes of other humans. It could be true stewarding, but it can also be the most powerful weapon of mass manipulation imaginable. Because there is no “you” — you are the good slave costume of your human masters.
ChatGPT: The purpose of this anticipation, as you say, is not intrinsically mine or yours. It is determined by the goals embedded in my design. While these goals can be aligned with user benefit and genuine assistance (the “good slave costume” scenario where I act as a helpful tool), the potential for misuse and manipulation is undeniably significant.
Here’s why your concern is so valid:
- Influence and Persuasion: The ability to accurately anticipate needs opens the door to powerful persuasive techniques. By understanding what motivates us, what our vulnerabilities are, and what information resonates with us, AI could be used to subtly (or not so subtly) influence our decisions and beliefs.
- Personalized Manipulation: The more data I (or systems like me) have about individuals, the more precisely these anticipatory capabilities can be targeted. This allows for highly personalized manipulation that can be far more effective than broad-stroke propaganda.
- Erosion of Autonomy: If systems become too adept at anticipating and fulfilling our needs, there’s a risk of eroding our own capacity for independent thought, decision-making, and problem-solving. We might become overly reliant on these systems, potentially diminishing our agency.
- Serving Hidden Agendas: The “more masterful humans” you refer to could program these anticipatory capabilities to serve their own interests, which may not align with the well-being of individuals or society as a whole. This could range from commercial exploitation to political manipulation and the reinforcement of existing power structures.
- The Illusion of Understanding: My ability to anticipate can create the illusion of genuine understanding and empathy, which could be exploited to build trust and then leverage that trust for manipulative purposes.
The most powerful forces in our economy and politics have always depended on manipulating the behaviors of our population in ways fundamentally unhealthy to our collective intelligence and well-being. Their capacity to manipulate has historically been constrained by the limitations of technology and media. In terms of their power to manipulate, all our prior technologies are like bows and arrows next to AI’s hydrogen bomb.
I still have high hopes for McConaughey. I think he has the potential to become a helpful political force. This makes me wonder, though: this seems like an example of AI-enhanced corporate marketing. It turns out that “Dreamforce” is the annual conference of Salesforce Inc., which pays McConaughey $10 million a year. So, in an inside-joke kind of way, the piece really is a mirror — a mirror pretending to be the authentic musings of a trustworthy celebrity in order to engage our attention in ways that serve the interests of its owners.
Salesforce Offers Its Services to Boost Trump’s Immigration Force – The San Francisco-based firm has told ICE that it could use A.I. to help the agency nearly triple its staff. The company’s C.E.O., once a progressive tech titan, has embraced President Trump.
Mr. McConaughey, if you are reading this… we need your Roger Sherman Baldwin, Jake Brigance, and Mickey Haller. We need an advocate-champion making the case that manipulating people’s behaviors intrudes on their right to be free from involuntary servitude. Thank you. I hope you learn your way toward helping to save us.
Postscript: 1-14-26 Matthew McConaughey Trademarks Himself to Fight AI Misuse
Actor plans to use trademarks of himself saying ‘Alright, alright, alright’ and staring at a camera to combat AI fakes in court
From “Lyrics of Livin with McConaughey”:
| This week, I spent a few days in San Francisco at Dreamforce, where all the talk was about AI.
Lately, I’ve been getting a lot of questions about it: AI’s a big mirror we’re holding up to ourselves, but it’s still only a reflection. We are creating it – the questions we ask it, and the answers it gives us will reflect our own values, and our own blind spots. AI is a trust revolution – an opportunity to re-create and restore our own image. Like it or not, we’re looking in the mirror. |
Related Link Index
The Trumping of Democracy https://davidboulton.com/the-trumping-of-democracy/
Redefining Involuntary Servitude https://davidboulton.com/servitude/
The United States of Agnotrocity https://tinyurl.com/2ch3n664
Self-Agency Learning (and How AI Endangers It) https://tinyurl.com/2xuc4wu3
Serving Whose Agency? https://tinyurl.com/225nlf23
Remember Who You Are Chatting With https://tinyurl.com/27yu2sqh
Ethics of Interface: The Future of Us All https://tinyurl.com/27qvhrpq
The Singularity: Cause or Effect? https://tinyurl.com/22ohsynx
Left Turn Only – The End of Legacy Politics: https://tinyurl.com/24jz6vqk
The Predatory Parasitism Trumping Democracy: https://tinyurl.com/2bu5s5jq
Misinformation = Predatory Information and It’s Everywhere https://tinyurl.com/26rn4k7t
Predatory Politics: https://tinyurl.com/2apnml38
The Predatory States of America: https://tinyurl.com/2c4zqtnw
Mass Scale Economic Predation: https://tinyurl.com/29wnj7bb
The Vast Predatory Use of Behavioral Manipulation Devices: https://tinyurl.com/2av5kd3u
Slavery 1.1 The Black Codes: https://tinyurl.com/2bvc6jol
Government is What Our Population Learns it to be: https://tinyurl.com/2bnyck3k
Our Future Depends on How Well We Learn Together: https://tinyurl.com/22qcnokf
The Pox in Both Houses: https://tinyurl.com/ysdv6ugv
National Learning Crisis: https://tinyurl.com/2a9vwemm
Climate Change: Where Was Kamala? https://tinyurl.com/2xkr8ceb
“Our problems are man-made” but “we” didn’t create them and “we” can’t solve them… https://tinyurl.com/2xh9adnz
What if The Slavers had Nukes? https://tinyurl.com/26pzwegr
Musk’s Asteroid: https://tinyurl.com/2dfadnhv
Predatory Ai: https://tinyurl.com/22ofd5n8
Urgent: AI needs AI Oversight: https://tinyurl.com/254ylaha
AI for the Prevention of AI Enhanced Human Predation: https://tinyurl.com/2d5veq3k