Why are we talking about AI on the Selene River Press blog? What does this have to do with holistic nutrition resources?
Soon enough AI tools will be built into most search engines, browsers, phones, you name it— and being able to discern accurate information in those circumstances requires AI literacy so you know what you’re looking at. In the same way that many of us can discern “good” search engine results from “spam” search engine results, and know the difference between paid ad content and organic search content, we also now have to learn how to read between the lines when it comes to content delivered to us from an AI source.
Let me rephrase that: because of how AI tools currently work, GOOD, accurate holistic health information may be harder to find because these tools favor more conventional "health" information.
Artificial intelligence in the form of generative chatbots like ChatGPT, Microsoft's CoPilot, and Google's Gemini has become ubiquitous on the Internet and is only getting more so. And as with most new technology, its use is celebrated first by early adopters. These early adopters get a chance to interact with the technology and learn about its shortcomings, quirks, and problem areas.
Another way to put this is: Early adopters gain early literacy in new tools— and make no mistake, learning how generative AI works is very much a literacy skill in the same way that learning how to use Google well is a literacy skill. Everybody has that friend who can bend Google’s search engine to their will and turn up results on things that other mere mortals just can’t.
AI is exactly the same. For what it's worth, so was Facebook, Instagram, Twitter, YouTube, blogging, laptop computers, the typewriter, the telephone, the lightbulb… and also, so is holistic health. So is mental health. So is financial fortitude. Viewed through the lens of skill gaps that can be closed through learning, anything and everything is a literacy skill.
So, a quick primer: Generative AI text tools utilize something called a "large language model," or basically a huge library of tons and tons of content: newspapers, books, encyclopedias, you name it.
The AI software looks at statistics rather than the subject matter: How likely is it that the word “pickles” appears in the same sentence as “hamburger,” “dirt,” “bun,” and “baseball”?
Over a large enough sample size, and depending on how you slice and dice the content, you end up with a guide (the AI term for this is a matrix) that dictates how particular types of sentences are put together and how particular types of information are presented.
Put enough of these matrices together and you end up with a model that can formulate anything from a blog post about nutrition to a recipe for perfect burgers to a poem about organic gardening. And as you ask more questions or provide more information, that gets folded into the context, filtering and narrowing the responses it generates.
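If you like to see the gears turn, here is a deliberately tiny Python sketch of that "statistics, not subject matter" idea. The corpus, the word-pair counting, and the always-pick-the-most-common rule are my own toy assumptions for illustration; real large language models use neural networks trained on vastly more text, but the spirit is the same: tally what usually comes next, then guess the most likely word.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word most often follows each word in a
# tiny corpus, then always pick the most common follower. Real LLMs are far
# more sophisticated, but this is the statistical spirit of the thing.
corpus = (
    "the burger sits on a toasted bun with pickles . "
    "the burger needs pickles and a soft bun . "
    "the garden soil is rich dark dirt ."
).split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def most_likely_next(word):
    # "Shoot for average": return whichever word most frequently came next.
    return followers[word].most_common(1)[0][0]

word = "the"
sentence = [word]
for _ in range(6):
    word = most_likely_next(word)
    sentence.append(word)

print(" ".join(sentence))  # e.g. "the burger sits on a toasted bun"
```

Scale that word-by-word guessing up to billions of parameters and mountains of text, and you get the fluent answers ChatGPT and friends produce.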
This is also why AI has a very hard time generating humor, which is all about violating expectations. Knowing which word to use based on statistical probability means that violating expectations is a bug, not a feature.
Different AI tools behave differently because each model is trained on different content, each model is built differently, and each uses a different methodology to generate its matrix of meaning.
Think of an AI tool like a Wheel of Fortune contestant. There’s a particular phrase up on the board and, through a process of elimination and context clues, you end up with the right word. AI does this over and over and over again billions of times a second to piece together text. There’s no intelligence behind it, just a lot of really fancy math and filters provided by context clues.
The basic premise is to shoot for average. But, because intelligence is in the name, folks assume there’s a crystal ball of knowledge behind the scenes. Without context, if you ask ChatGPT for a holistic remedy for a headache, it’s just as likely to suggest essential oils as it is interpretive dance.
In fact, please enjoy this brief statement generated by an AI on why interpretive dance is a great headache remedy: “Holistic health emphasizes the connection between physical and mental well-being. Interpretive dance can be a form of mindfulness and stress release, which might indirectly help manage headaches, especially tension headaches. The movement and artistic expression of interpretive dance could distract you from your headache pain, offering temporary relief.”
This particular tool did offer the following disclaimer: “AI tools are not medical professionals. Always consult with a qualified healthcare practitioner before relying on AI for health advice.”
With all of that in hand, we can arm ourselves with the following knowledge:
- Given no other context clues when asked a question, AI will generate the most COMMON (not necessarily most CORRECT) form of content knowledge. It will aim very squarely for the middle, the average, the median.
- This pattern holds true until additional context is loaded into the current session by you, through what you ask and tell the AI tool.
- The best way to arrive at an answer that is neither common nor average is to use the Socratic method: shift the context and refine the focus as you go. Ask follow-up questions to give the tool more context. More context means more refined answers. (There's a short sketch of what this looks like just below this list.)
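Here is a rough sketch of what that Socratic loop might look like if you scripted it in Python against a chat API instead of typing into a chat window. It assumes the openai Python package and an API key in your environment, and the model name and follow-up questions are placeholders I made up for illustration; the exact same refinement works in any chatbot's regular interface with no code at all.

```python
# A minimal sketch of "loading context into the session." It assumes the
# openai Python package and an OPENAI_API_KEY in your environment; the model
# name and the questions are illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

# The conversation history IS the context. Each follow-up narrows the focus.
messages = [
    {"role": "user", "content": "What are holistic approaches to tension headaches?"}
]
follow_ups = [
    "Which of those approaches have published studies behind them?",
    "Focus only on nutrition-related approaches. What do those studies say?",
]

while True:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content
    print(answer, "\n---")
    messages.append({"role": "assistant", "content": answer})
    if not follow_ups:
        break
    messages.append({"role": "user", "content": follow_ups.pop(0)})
```

The important part is the growing messages list: every answer and every follow-up question stays in the session, so each new response is filtered through everything that came before.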
Here's where the problem comes in: Some AI tools can sound REALLY confident when delivering perfectly written information, even if that information is made up or false. When an AI tool manufactures a fact, it's called a hallucination. Some tools, when out of their depth, tend to hallucinate more than others while still sounding perfectly confident or blending fiction with reality.
The better AI tools cite their sources in their answers, but as a human you still have an obligation to fact-check. But here's the biggest problem of all: As humans, we are HARD-WIRED to love any information that confirms our pre-existing notions and view of the world. This is known as confirmation bias. It's why all the people you think are wrong are so often found grouped together (ha!).
Health practitioners know the "WebMD" effect all too well. Their well-meaning, intelligent clients are not immune to finding confirming-but-bad-for-them-in-context health information. Throw in AI tools built on models that may not grasp the nuances of nutritional supplements or the mind-body connection, and you've got a client sitting on incomplete, incorrect, or possibly dangerous answers who never bothered to call the practitioner.
Realistically, this happens a lot. It's why habit loops are so hard to break. And this is just the mostly benign single-player version of the game. Consider what happens when you add malicious humans to the mix: folks who want to do harm for the fun of it, or folks who want to profit from your confirmation bias.
There was a recent news story about how an extortionist used AI to replicate a woman’s voice on a phone call to her parents. The extortionist had used clips from the woman’s social media accounts and called the parents to demand money. They paid. It’s easy to be an alarmist about malicious applications like this, and any time there’s a new way to get “free” money from folks, bad actors will show up.
On the other hand, I used Photoshop in high school to superimpose my Hawaiian-shirt-wearing self into historical photos with FDR. I used audio editing tools to create audio drama adventures and make myself sound like an elf or a giant or a wizard. Could I have used these things for malicious purposes? Sure. The only difference now is that the learning curve is substantially gentler.
Recognizing that the dangers exist mostly from ignorance of the possibilities, I turned to my family and showed them how easy it was to replicate my own voice, my own writing, and even my face. Aside from the possibility that the kiddos now had the knowledge and tools to impersonate me to get a day off school, they’re now more equipped to understand that they can’t always believe what they see, even if it looks and sounds very convincing.
I taught them to look for the edges and how they might identify fake sources of information. Now we have special key phrases that each of us knows. I taught them how to use ChatGPT and CoPilot and Gemini not only to ask questions but to have these tools play "3 truths and a lie," reinforcing the idea that even a computer can lie to you.
Beyond the basic safety talks, I also cautioned them not to share too much, to understand these tools can be used for evil, and to remember they can lie to you.
They are also learning how to get the most out of these tools. Specifically:
- Search the same topic on multiple AI tools & traditional websites (see the sketch after this list).
- Prime the pump of your question with as much context as you can provide. For instance: “I need help finding a quote. Specifically, I want a quote about resilience from an inventor or entrepreneur who is significantly well-known and recognized within their field and known for their sense of humor. Even better if the person is still currently living.”
- When asking for research, ask for citations (or use a tool like CoPilot that provides citations by default): “I’m working on a research paper on health advice from the early 1900s. What journals, publications, and books were the most prevalent and most cited, who were the experts that were being cited, and what are the key points of what they had to say?” and “Thanks for that information, can you please provide source citations for all of the quotes provided so far?”
- Use examples where possible and break your request into “setting the context” and “refining the response”. “I’d like to write a speech similar to the Gettysburg address on the topic of youth literacy. Can you provide the text of key speeches by US Presidents, create bullet points of each of the metaphors they used, and a summary of the evocative emotional language they used?” “Great, can you help me with a first draft that borrows heavily from X, Y, and Z, with the following metaphors:…”
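And here is a hedged sketch of that first tip: posing the same question to more than one model and reading the answers side by side. It again assumes the openai package and an API key; the two model names are examples I have chosen, and in real life you would also compare against other vendors' tools and plain old websites.

```python
# A rough sketch of the first tip: pose the same question to more than one
# model and read the answers side by side. Assumes the openai package and an
# OPENAI_API_KEY; the model names are examples only, and in practice you would
# also check other vendors' tools and ordinary websites.
from openai import OpenAI

client = OpenAI()

question = (
    "What does recent research say about processed foods and nutrient "
    "deficiency? Please cite your sources."
)

for model_name in ["gpt-4o-mini", "gpt-4.1-mini"]:
    reply = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": question}],
    )
    print(f"=== {model_name} ===")
    print(reply.choices[0].message.content)
```

If the answers disagree, that's not a failure; it's your cue to fact-check before believing either one.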
Bringing this back to the premise of how to find and identify good health sources on the internet: Start by loading the context of what you’re looking for: “According to Dr. Royal Lee, most of the health problems caused by nutrient deficiency are the result of the consumption of overcooked and processed foods. Can you provide me with some recent health studies from reliable sources that back up this assertion?”
Notice that I've left it to the tool to decide what constitutes a "reliable source." It may come back with NIH or it may come back with JoBob's Crab Shack. You could specify which sources you want it to consider, or narrow down by the type of application: nutrition, osteopathy, exercise, physiology, and so on.
Based on whatever it returns, give it more context by quoting the article back to it: "This article talks about [insert quote]. Are there more data or studies that support this, and can you provide links to them?"
It’s time to start playing with AI to understand the applications, the edges and dangers, and how we can utilize it to return better information. As a research, summary, and analysis tool, AI has the potential to save all of us a ton of time. Just remember, whether you’re dealing with a chatbot or your overly opinionated aunt, a healthy dose of skepticism is always warranted.
Images from iStock/KamiPhotos (main), ValeryBrozhinsky (hooded man), metamorworks (woman in the apron), metamorworks (woman on the computer).