At Mercola.com, Dr. Joseph Mercola discusses the advantages of using AI large language models to learn more about your health. He writes:
It’s been just 14 months since ChatGPT and the other large language models (LLMs), the progenitors of artificial general intelligence (AGI), became available to us. I suspect many of you have explored their use. It’s a fascinating technology with enormous potential to serve as a patient teacher, helping you understand a wide range of concepts you might find confusing.
One of the challenges of writing this newsletter is that it is virtually impossible to simplify certain medical concepts because there’s such a wide range of medical knowledge among the readers.
I regularly use ChatGPT to answer biological questions or explain concepts I am unclear about. For example, I recently did a deep dive into the mechanisms by which carbon dioxide (CO2) might work as one of the most effective and simple interventions to improve health and prevent disease.
I was really impressed with how quickly it explained complex physiology and helped me understand the topic better. It occurred to me that my readers could use this tool to help them understand areas of medical science they don’t yet fully grasp.
A classic example of this would be mitochondrial energy production and the function of the electron transport chain. It is clearly a very complex topic, but you can ask ChatGPT as many questions as you want, and rephrase them until you understand.
This is a great example to use because it is a topic many don’t fully understand, yet it’s not controversial — it doesn’t touch any medical narrative heavily influenced by the pharmaceutical paradigm. As long as you restrict your use of this tool to basic science topics you should be OK, and I would encourage you to do this on a regular basis. You can use the video above to help you refine your search strategies.
You just want to be very, very careful and avoid ever asking any questions that relate to treatment options, because you can be virtually certain it will be biased toward the conventional narrative and give you incorrect information. It will even warn you that something you know to be both effective and harmless is dangerous.
For example, the last thing you would want to ask the program is how to treat heart disease, diabetes or obesity. It will merely regurgitate what the drug companies want you to hear and give you serious warnings about the dangers of any strategy that conflicts with these recommendations.
Read more here.