Dhan Support Chatbot Giving False Information

Hi @Dhan Team,

Based on a recent discussion with your support bot, I found that it is providing incorrect information, i.e. hallucinating. It seems like it needs some fine-tuning.

A similar issue has occurred with other companies as well, but I strongly believe the Dhan team will be able to fix this.

cc : @RahulDeshpande @Hardik @PravinJ

Hi @Sammy, can you share the full log with us at feedback@dhan.co? If this is from your registered account, we will check.

PS: Yes, you are right. AI bots need to be trained and can hallucinate. We monitor these on a regular basis and fine-tune for accuracy based on the interaction logs.

Thank you @PravinJ, I already shared these details with Yash from the Customer Support team.

Hi @Sammy, thank you for bringing this to our notice! The Raise AI team is looking into this. It was caused because we missed an edge case where the assistant couldn’t differentiate between a trading account and a bank account. We’ll fix this ASAP!

@Anirudha Thank you very much

@Anirudha I’ve noticed another instance of the chatbot providing incorrect information. My SIPs were scheduled for May 3rd, but the bot confidently provided transaction IDs from April instead. When I pointed out that no money had been debited and no NAV had been allocated, the bot finally corrected itself. It seems we currently cannot rely on the chatbot’s accuracy for transaction details.

CC: @PravinJ

Hi @Sammy, thank you so much for being patient while we scale this to more use cases. I went through this chat and it definitely provided an incorrect resolution: it picked up April transactions instead of May. Our original intention for this chatbot has always been to guide customers in the quickest and most accurate way, for self-serve users like you who already know their way around the app. If you’re unhappy with or confused by the responses, please ask the bot to escalate to a human agent, and it will promptly redirect you during business hours. Your feedback has been extremely valuable in improving edge cases like this, and thank you so much for bearing with us while we sort these out.

@Anirudha

The chatbot has not been useful for a very long time. It is just a waste of time for any customer trying to get a resolution.

Hi @t7support, we have an extremely high accuracy rate (>85%), which is best in the industry, and we receive a lot of positive feedback across the board. Let me DM you; I would love to understand the specific scenarios where it didn’t work out for you.

I have responded to you via DM with a screenshot and some details of my experience.

On the backend it might be using a white-labeled ChatGPT or some open LLM from Meta or others, which in themselves are not optimized for this domain, so even after training on Dhan’s data the current model is still hallucinating a lot.

I doubt there will be any training, as the data is transactional and changes every second. Hence the usual process would just be to create agents that analyze the query, identify the data source, fetch the data, and pass the data along with a system prompt to a reasoning model to get the response. It’s just prompt engineering, plus making sure that the costs incurred in input and output tokens do not get out of hand :slight_smile:
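The agent flow described above can be sketched in a few lines. This is purely illustrative: the function names, the keyword-based routing, and the in-memory mock records are all assumptions, since the real bot's internals aren't public, and the actual model call is omitted.

```python
# Minimal sketch of the described pipeline: analyze the query, identify
# the data source, fetch the data, then build a grounded prompt for a
# reasoning model. All names and data below are hypothetical.

def identify_datasource(query: str) -> str:
    """Steps 1-2: analyze the query and route it (naive keyword matching)."""
    if "sip" in query.lower() or "transaction" in query.lower():
        return "transactions"
    return "faq"

def get_data(source: str, user_id: str) -> list:
    """Step 3: fetch live data; stubbed here with in-memory mock records."""
    mock_db = {
        "transactions": [{"id": "TXN-0501", "date": "2024-05-03",
                          "status": "scheduled", "user": user_id}],
        "faq": [{"q": "How do SIPs work?", "a": "..."}],
    }
    return mock_db[source]

def build_prompt(query: str, records: list) -> str:
    """Step 4: combine the fetched records with a system prompt.
    Constraining the model to the supplied records is what should
    prevent it from inventing, say, April IDs for a May SIP."""
    system = ("Answer ONLY from the records below. "
              "If the records do not contain the answer, say so.")
    return f"{system}\n\nRecords: {records}\n\nCustomer: {query}"

query = "Where is my May 3rd SIP transaction?"
prompt = build_prompt(query, get_data(identify_datasource(query), "user-1"))
# `prompt` would then be sent to the reasoning model (call omitted).
```

In this setup the model never needs retraining: fresh transactional data flows in per request, and the recurring cost is just the input/output tokens of each call.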

Considering that the process is to analyze the query, identify the data source, fetch the data, and pass it with a system prompt to a reasoning model, the response should be correct every time. But the OP mentioned that it provided transaction details that never happened; that is hallucination, and that can only happen with insufficient training.

Hallucinations are a common phenomenon with models now, as the amount of data everyone shares is huge. Also, if a company were retraining the model again and again on fresh data, it wouldn’t be feasible, as the compute costs would be huge. Training would only work for cases like Fuzz.ai, where the internal model is pre-trained on a fixed dataset. For a chatbot to execute, either you follow the process above or fetch the data via MCP. Anyway, let’s see if the chatbot gets corrected.

As per my understanding, Fuzz’s model should be a subset of the support bot’s model. You are not wrong about how Dhan is doing it internally.