Why Are We Comfortable With AI Managing Our Money & Shopping, but Not Our Health?

During a recent family get-together, we were discussing AI and its role in everyday life.

Most of us are comfortable letting AI analyze portfolios, help with shopping decisions, and handle other routine tasks. But when the question came up — “Have you ever used an AI tool to analyze your blood test or medical report?” — the mood changed instantly.

The general consensus was that while AI might be fine for finance or productivity, medical interpretation still feels like something we want to hear directly from a doctor.

That reaction raised an interesting question for me: is this hesitation about accuracy, accountability, or the need for emotional reassurance?

Before the weekend, here is a question that’s not about money.

Would you feed an important medical report into an AI? Why or why not?

AI is seen more as a support tool than a replacement.

I would, but not by scanning and sharing a screenshot, which I assume is what most people would do, since actually having a conversation and providing the reports as an example is tedious and time-consuming. I still prefer the conversational approach because it lets the user set a context and share the data as an example, so the AI doesn’t use it directly but only refers to it when more examples with the same scenario come up.

I would; I have already done this for a few issues, and the results were pretty accurate. It is more like a second opinion for me, though.

For me, I mainly use AI to summarize quarterly results and do quick analysis.

I have also run into instances where the data it cited wasn’t accurate, even though I have premium subscriptions to a few of these tools.

It’s understandable why people may want to visit a doctor for their medical reports. I’ve used AI to make sense of the complex terms in mine.

A welcome change to see members of Dhan’s team expressing their opinions. In the past, it was noticeable that team members rarely shared their views on subjects discussed on the forum. I thought this was due to fear of being singled out for their opinions or that speaking openly might affect their job security. Seeing more open participation now feels like a positive and encouraging shift.

AI is not foolproof, so use it only as a secondary doctor to validate. A real doctor has spent years at university learning and has also seen many patients and reports. Unless an AI has seen and been trained on that level of data, it can make errors of judgement. A generic AI model may label results good or bad based on the normal blood-parameter ranges it finds on the internet, but that is certainly not the final word on anything.

Besides, I have seen ChatGPT, Perplexity, etc. make addition errors when given a long list of numbers to add, and I have seen AI make coding errors. So AI is a good tool in the hands of a human master; it is not ready to be the master. Whether it should ever be allowed to be the controlling master of anything is an even bigger question.
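The arithmetic point above is easy to guard against in practice. A minimal sketch (the figures below are invented for illustration, not taken from any post here) of recomputing a total yourself instead of trusting the model’s reported sum:

```python
# Hypothetical example: verify an AI-reported total against a locally
# computed one, rather than accepting the model's arithmetic as-is.
values = [1250.75, 980.40, 432.10, 1875.25, 660.50]  # invented figures

model_answer = 5198.00  # whatever total the AI reported (hypothetical)
actual_total = round(sum(values), 2)  # compute the sum deterministically

if abs(actual_total - model_answer) > 0.01:
    print(f"Mismatch: model said {model_answer}, actual total is {actual_total}")
else:
    print(f"Verified: {actual_total}")
```

The design point is simply that deterministic checks stay with deterministic tools; the AI’s role is interpretation, not computation.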

Saw an interesting update from ChatGPT around this topic: they have launched ChatGPT Health, which they claim is a more secure place to share your health and medical records.
