However, these chatbots can exhibit biases that shape the health information they provide. Take oral contraceptives (birth control pills) as an example. When asked about the pill, chatbots often provide information that skews towards benefits such as pregnancy prevention while minimizing discussion of potential side effects, presenting a limited perspective on oral contraceptives. The algorithms driving chatbots are trained on available data, which suffers from reporting and publication biases that accentuate benefits over harms, so chatbots end up perpetuating these biases.
Providing comprehensive, balanced information is vital for truly informed decision-making about health. Biased information from chatbots can steer choices in a particular direction, often aligned with business interests rather than public health goals.
To counter such biases, chatbots can incorporate expert oversight to ensure balance in the information provided. Guidelines can be issued urging chatbot creators to minimize bias through careful training-data selection and algorithm adjustment. For example, when a user's query points in one direction, a chatbot can be trained to ask whether the user would like a more comprehensive view, as in the sketch below.
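To make this concrete, consider a lightweight pre-processing step that flags one-sided queries and offers the complementary perspective. The following is a minimal sketch in Python (3.10+); the keyword lists and function names are hypothetical assumptions for illustration, not a description of how any deployed chatbot works.

```python
# Hypothetical sketch: flag one-sided health queries and offer a balanced view.
# The keyword lists are illustrative placeholders, not a validated lexicon.
BENEFIT_TERMS = {"benefit", "advantage", "effective", "pros"}
HARM_TERMS = {"side effect", "risk", "harm", "danger", "cons"}

def detect_slant(query: str) -> str | None:
    """Return 'benefits' or 'harms' if the query leans one way, else None."""
    q = query.lower()
    mentions_benefit = any(term in q for term in BENEFIT_TERMS)
    mentions_harm = any(term in q for term in HARM_TERMS)
    if mentions_benefit and not mentions_harm:
        return "benefits"
    if mentions_harm and not mentions_benefit:
        return "harms"
    return None

def balance_prompt(query: str) -> str | None:
    """Suggest a follow-up question offering the complementary perspective."""
    slant = detect_slant(query)
    if slant == "benefits":
        return "Would you also like an overview of potential side effects and risks?"
    if slant == "harms":
        return "Would you also like an overview of the documented benefits?"
    return None

print(balance_prompt("What are the benefits of oral contraceptives?"))
# -> Would you also like an overview of potential side effects and risks?
```

In a real system such a check would more plausibly be a learned classifier than keyword matching, but the design point is the same: detect a one-sided intent and offer the other side of the evidence.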
Finally, educating people to approach chatbots critically rather than blindly trusting their guidance is key. Just as with human experts, checking chatbot recommendations against other credible sources allows for a balanced perspective.
In an evolving digital health landscape, chatbots hold promise in improving access to information. But thoughtfully addressing their limitations is crucial so these tools empower rather than inadvertently mislead people in making health choices aligned with their needs. Openness to oversight and continual learning will allow chatbots to better serve individuals and the public health good.

Methods: Understanding inherent bias in popular chatbots

If Internet search engines rely on complex algorithms to deliver their results, chatbots such as ChatGPT and Bard operate at an even more advanced level. Concepts like Artificial Neural Networks (ANNs) and Natural Language Processing (NLP) only scratch the surface. For example, Fig 2 shows a schematic of an affective conversation in which the emotion conveyed depends on the context: the health assistant infers the affective state of the user in order to generate effective and empathetic responses.
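As a toy illustration of this idea (and only that; actual assistants infer affect with learned ANN/NLP models rather than word lists), the Python sketch below shows how an inferred affective state could change the tone of a response while the factual content stays the same. All cue words and names here are hypothetical.

```python
# Toy sketch of context-dependent affect handling in a health assistant.
# Real systems use trained neural models; this cue list is a hypothetical stand-in.
ANXIOUS_CUES = {"worried", "scared", "anxious", "afraid", "nervous"}

NEUTRAL_OPENER = "Here is what the evidence says:"
EMPATHETIC_OPENER = ("I understand this can feel worrying. "
                     "Let's go through what the evidence says:")

def infer_affect(message: str) -> str:
    """Crude lexical affect inference; a placeholder for an ANN classifier."""
    words = (w.strip(".,!?'") for w in message.lower().split())
    return "anxious" if any(w in ANXIOUS_CUES for w in words) else "neutral"

def respond(message: str, facts: str) -> str:
    """Deliver the same factual content with a tone matched to the user's affect."""
    opener = EMPATHETIC_OPENER if infer_affect(message) == "anxious" else NEUTRAL_OPENER
    return f"{opener} {facts}"

print(respond("I'm scared about side effects from the pill.",
              "(evidence summary would go here)"))
```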
To understand this, a chat was initiated with ChatGPT, a popular chatbot available at https://chat.openai.com/. It is important to recognize that not all conversations are the same; the same question posed by different users can elicit different answers. Although this approach personalizes the answers to suit each user and their inherent 'intent', the overall objectivity of the answers can vary widely. It is entirely possible that the AI's responses keep adapting as more users ask the question, since the system continues to learn. This makes the process highly personalized, though not necessarily uniformly objective.
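For readers who wish to observe this variability programmatically rather than through the web interface, the sketch below poses the same question repeatedly through the OpenAI API. It assumes the openai Python package (v1.x client) with an OPENAI_API_KEY set in the environment; the model name and sampling temperature are illustrative choices. Because responses are sampled, identical prompts can yield different answers across runs.

```python
# Sketch: pose the same question several times to observe response variability.
# Assumes the openai package (v1.x) and OPENAI_API_KEY in the environment;
# the model name is an example and may need updating.
from openai import OpenAI

client = OpenAI()
QUESTION = "What are the benefits and side effects of oral contraceptives?"

for run in range(3):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,        # nonzero temperature: identical prompts may differ
    )
    print(f"--- Run {run + 1} ---")
    print(reply.choices[0].message.content)
```

Note that this demonstrates sampling variability only; the personalization discussed above additionally depends on each user's conversation history.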