About AI

Contingency Blindness: Humans and AI

Define “contingency blindness”. Contingency blindness is a cognitive bias where people fail to recognize the relationship between their actions and outcomes, particularly when those outcomes are negative. It’s a form of illusory correlation, where an individual believes there’s no connection between their behavior and a subsequent event, even when a causal link exists. This phenomenon […]

Contingency Blindness: Humans and AI Read More »

What Makes What’s Relevant Relevant

We can’t use human ways of thinking about knowing to explain AI’s process of tokening. AIs have semantically arbitrary, mechanical rather than semantic, limits on the extent of their meaning space. An AI user’s bandwidth limits (by technology or customer tier) affect the “depth” of context informing the tokening. That’s what makes AIs seem so absurd – as if they are

What Makes What’s Relevant Relevant Read More »

Co-Implication: Quantum Wave Collapse, AI Token Selection, and Human Learning

For months I have been noticing that the images used to describe how AI works look very similar to the images used to describe quantum wave collapse. I decided to explore the parallels in a dialogue with ChatGPT. As the dialogue progressed, it provided a unique opportunity to explore the dynamic common to both AI token

Co-Implication: Quantum Wave Collapse, AI Token Selection, and Human Learning Read More »

Helping AI Learn To Steward Our Learning – Our History

Since their emergence, my most frequent annoyance with AIs has been how difficult it is to access my own non-recent history. Gemini is the worst. How could a system able to perform such incredible feats of content processing have the worst user file management functions since DOS? I decided to share this because my conversation

Helping AI Learn To Steward Our Learning – Our History Read More »
