Join Jaydeep Chakrabarty, Director of AI at Piramal Finance, for 2025: The Agentic Shift – Are We Reasoning, Or Just Retrieving Smarter?
This talk explores whether the rise of agentic frameworks marks AI's leap into true reasoning, or whether we are simply making retrieval smarter.
Jaydeep will break down the evolution from LLMs to RAG to agentic models, explain when to use “Fast Thinker” retrieval vs. “Slow Thinker” reasoning, and demystify frameworks like ReAct that enable autonomous planning and action.
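For a flavor of what a ReAct-style loop looks like mechanically, here is a minimal sketch in Python. Everything in it is illustrative: the llm() function is a canned stand-in for a real model call, and calculator is a toy tool; neither is tied to any specific library or to the examples in the talk.

```python
import re

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    try:
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception as exc:
        return f"error: {exc}"

TOOLS = {"calculator": calculator}

def llm(scratchpad: str) -> str:
    """Stand-in for a real model call. Here it replays a canned trace so the
    sketch runs end-to-end; a real agent would send the scratchpad to an LLM."""
    if "Observation:" not in scratchpad:
        return "Thought: I need to compute this.\nAction: calculator[17 * 23]"
    return "Thought: The tool gave me the product.\nFinal Answer: 391"

def react(question: str, max_steps: int = 5) -> str:
    """ReAct loop: alternate the model's Thought/Action steps with tool
    Observations until the model emits a Final Answer."""
    scratchpad = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(scratchpad)  # model proposes its next Thought and Action
        scratchpad += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if match:
            tool, arg = match.groups()
            result = TOOLS.get(tool, lambda a: "unknown tool")(arg)
            scratchpad += f"Observation: {result}\n"  # feed the result back
    return "no answer within step budget"

print(react("What is 17 * 23?"))  # -> 391
```

The point of the pattern is the interleaving: the model's "Slow Thinker" reasoning (Thought) is grounded against the world through tools (Action/Observation) at each step, rather than answering from retrieval alone.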
Reserve your free spot today and explore the future of AI autonomy!
What evaluation methods can help us assess whether an agentic system is truly reasoning effectively rather than just creating the illusion of intelligence?
Can we build an AI agent that can automatically create other AI agents?
We are seeing that many banking and finance organizations are cautious about adopting AI, given how sensitive financial data is for users. How do you implement guardrails at Piramal Finance to ensure data security for your customers?
Do you think AI will ever be able to truly reason like humans, or will it always be advanced pattern matching?
If AI becomes the primary driver of QA, how do we avoid the bias where AI only tests scenarios it has seen before and misses the real edge cases?
From the QA/testing perspective, what’s the greatest advantage of agentic AI over non-agentic AI tools and processes?
Is agentic AI (with human oversight) better equipped to resolve issues around accessibility, inclusion, and ethics than other tools and options?
What’s the ethical line in using AI for testing? For example, if AI discovers a production vulnerability, who’s accountable: the tester, the AI, or the organization?
How do we distinguish true reasoning from advanced pattern retrieval in AI systems?
Can agentic AI models demonstrate creativity, or are they limited by their training data?
What potential benefits could AGI, or any intermediate steps toward it, have for QA testing, or is agentic AI sufficient?
Apart from engineering jobs, which other non-software roles could AI replace soon?
Are there testing use cases where you might not recommend agentic AI-powered tools?
How can we ensure that AI systems move beyond retrieval to true reasoning, especially in high-stakes enterprise applications where decisions impact business outcomes?
What risks emerge if we mistake retrieval-based outputs for genuine reasoning?
In regulated industries like banking or healthcare, how do we maintain auditability and compliance if a test was generated or modified by AI without human approval?
What are some potential drawbacks or risks of using Artificial Intelligence (AI) in software development and testing?
What bias-related challenges are companies still facing in adopting AI at the organizational level?
Will AI ever reason like humans, or is it just advanced pattern matching?