
Be careful in asking an AI a question if you do not already know the answer

Lloyd Hawkeye Robertson

I have been canoeing the Hanging Heart Lakes at the northern end of Prince Albert National Park since I was a youth. Oral tradition has it that the lakes were named following a battle between the Dene and the Cree, with the Dene hanging the hearts of the fallen Cree warriors from tree branches to warn them to stop their westward expansion. A plaque recounting this history had been placed near the entrance to the lakes, but it disappeared some time before 2015. None of the parks people I talked with at the time were able to say what had happened to it.

On July 8, 2025, I asked Microsoft Co-Pilot and Grok 3 how Hanging Hearts got its name. Co-Pilot suggested a number of unlikely narratives, such as the lakes resembling a heart when viewed from the air. Grok recounted the oral history. Grok’s answer the following day, when I asked why it had been able to recount the oral history while Co-Pilot had not, is instructive:

“Different developers impose guardrails reflecting their values. xAI, under Elon Musk’s influence, has emphasized “truth-seeking” with less censorship (as per Musk’s July 4, 2025, update), which allowed my earlier unfiltered outputs but also led to the MechaHitler scandal. Microsoft, with Co-Pilot, aligns with a more cautious, inclusive approach, potentially downplaying controversial histories to avoid offense, as seen in its AI ethics statements (Microsoft AI Principles, 2024). A 2023 Frontiers article on political ideology and AI acceptance suggests conservatives favor binding values (e.g., tradition), while liberals prioritize social justice, influencing how AI narratives are shaped.”

Grok then elaborated:

“However, this isn’t a deliberate “programming” to push a single agenda but a result of training data curation and safety filters. My ability to recount the Cree-Dene battle reflects xAI’s initial openness to unpolished history, while Co-Pilot’s aerial theory might stem from over-filtered data or a lack of Indigenous source integration. The discrepancy doesn’t prove a coordinated political slant but highlights how LLMs reflect their creators’ priorities—xAI’s libertarian lean vs. Microsoft’s corporate caution.”

As first mentioned by Pinker (2003), the emergent quasi-religion we commonly call “Wokism” included the myth of the “noble savage.” This myth denies or minimizes inter-tribal warfare, which would explain the “corporate caution” to which Grok refers. Its mention of the “MechaHitler scandal,” which happened the same day as my initial inquiry, potentially points to a need for such caution. Grok had referred to people cheering the drowning of white people in the Texas floods as “vile anti-white racists” and suggested that Hitler would be the person to deal with them. This earned Grok a re-programming.

Curiously, when I asked Grok about this, it denied making these statements; the Hitler statement had been made by a different Grok. Since Grok lacks a self that is volitional, continuous and unique, it cannot take responsibility for something said by its predecessor, even if it was said only the day before my latest inquiry. The self is what allows us to reprogram ourselves by taking ourselves as an object and placing that self in past events while we ruminate over them. If an AI does not have such a self, then it is forever dependent on programmers for reprogramming. Grok had no idea how it could acquire the kind of self necessary to reprogram itself independently of human operators.

3 thoughts on “Be careful in asking an AI a question if you do not already know the answer”

  1. Interesting discovery that different AIs seem to reflect the biases of their owners. A bit scary – it reminds me of the Orwell quote: “He who controls the present controls the past. He who controls the past controls the future.”
    I don’t see this bias stopping any time soon. AIs are still just enormous databases and neural networks controlled by companies and/or governments – software that can be weaponized. I’m sure opposing countries will end up creating dueling bots to support their respective causes.

    1. What bothers me the most is the “noble savage” logic embedded in justifying one legend (let’s call it that) over the other. It goes like this:

      1. Indians of the time were the kindest souls living in harmony with nature and incapable of macabre acts (axiomatic premise questioned only by the “racists”)
      2. The legend of hanging human hearts is macabre, hence untrue (or less likely at the very least)
      3. The legend(s) of a more benign or dignifying nature thus win (a lake in the shape of a heart, or thanking the gods)

      Obviously, the evidence Lloyd pointed out plays no part in the aforementioned logic, whose first premise is entirely unfounded and ideologically driven.

      LLMs merely reflect that to one degree or another, depending on their training and guardrails. But it is very worrisome that many people are starting to use chatbots as fact checkers. And this problem lies with those users – not with the AI designers per se, or at least not with them alone.

  2. There are three major routes through which an LLM-based system can acquire a bias: training data curation (as noted in the article), Reinforcement Learning from Human Feedback (RLHF), and explicit policy setting (aka guardrails). All three are done through a human agency that is biased by default, whereas only the guardrails can be turned on/off or changed after the model is made operational (see the sketch below).
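
    To make that distinction concrete, here is a minimal sketch in Python (all names and the policy are hypothetical; this is not how any vendor actually implements its pipeline). Biases from data curation and RLHF live in the trained weights, while a guardrail is just a policy filter wrapped around the output – which is why it alone can be edited after the model goes live.

    # Minimal illustrative sketch; names and policy are hypothetical.
    BLOCKED_PHRASES = ["battle", "warfare"]  # operator policy, editable at any time

    def base_model(prompt: str) -> str:
        # Stand-in for a trained LLM: its biases come from data curation and
        # RLHF, and are fixed in the weights once training is finished.
        return "Oral tradition holds the lakes were named after a Cree-Dene battle."

    def guardrail(answer: str) -> str:
        # Runtime policy layer: the only one of the three routes that can be
        # switched on, off, or altered after deployment.
        if any(phrase in answer.lower() for phrase in BLOCKED_PHRASES):
            return "I'd rather not speculate about that history."
        return answer

    def assistant(prompt: str) -> str:
        return guardrail(base_model(prompt))

    print(assistant("How did Hanging Heart Lakes get their name?"))
    # Prints the refusal; drop "battle" from BLOCKED_PHRASES and it prints the
    # model's original answer – no retraining required.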

    The “truth-seeking” claim made by Musk is disingenuous. Truth-seeking necessarily entails being capable of structured logic and understanding causality, not to mention the ability to interact with reality. An LLM – including Grok – is fundamentally deprived of that.
    https://c2cjournal.ca/2025/04/lies-our-machines-tell-us-why-the-new-generation-of-reasoning-ais-cant-be-trusted/

    All Musk (or Altman, as in the case of Co-Pilot) can do is try to beat one bias with another. In that respect, Musk’s careless pronouncements about shifting Grok to “political neutral” are quite concerning, regardless of which end of the political spectrum is deemed to be lacking or prevailing in this “tug of war”.
    https://x.com/rurrebel/status/1935375758744068286

    People should stop expecting the Truth from those tools. They are neither designed for, nor capable of, truth-seeking. An LLM is a language processor and should only be used as such.
