Having committed billions of dollars’ worth of investment to generative AI and entered strategic alliances with tech companies like Microsoft, Google and OpenAI, each of the Big Four firms is hard at work building an AI infrastructure that, over the long term, will support the technology’s integration into nearly every aspect of its practice.
As with so much in this field, the key to these AI ambitions is data. All four firms possess massive stores of it, collected over years of routine operations, and leaders plan to leverage that data to train their own custom AI models. This is not a plan for the far-off future: each firm has already begun, sometimes by building something new and other times by adding generative AI capabilities to existing solutions.
Joe Atkinson, chief products and technology officer with PwC, said the firm is in the middle of piloting what is internally called “ChatPwC,” essentially a ChatGPT-like model operating within the firm’s secure environment, to which staff have uploaded information over many years. ChatPwC is trained on this data and, so far, is being used to assist staff members with certain administrative and research tasks.
“[ChatPwC] takes advantage of that data to make first drafts of memos and reports and to analyze large documents and figure out risk factors to see where we need to dig in. It’s in pilot, but we’re expecting to scale it, so we’re evaluating the impact and the cost because generative AI models are expensive to operate,” he said.
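PwC has not published ChatPwC’s architecture, but assistants of this kind are commonly built with retrieval-augmented generation: the model is pointed at a store of internal documents and answers questions grounded in the most relevant ones. The sketch below illustrates that pattern only; the endpoint, model name and documents are all invented, and nothing here is confirmed to match PwC’s actual setup.

```python
# Minimal retrieval-augmented assistant sketch (architecture assumed, not confirmed):
# embed internal documents, retrieve the most relevant ones for a question,
# and have a privately hosted model draft an answer grounded in them.
import numpy as np
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint running inside the firm's firewall.
client = OpenAI(base_url="https://llm.internal.example/v1", api_key="internal-key")

documents = [
    "Engagement memo: revenue recognition risks for SaaS clients ...",
    "Internal guidance: independence rules for advisory engagements ...",
    "Audit template: inventory observation procedures ...",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str, top_k: int = 2) -> str:
    # Rank documents by cosine similarity to the question, then answer
    # using only the best matches as context.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])
    resp = client.chat.completions.create(
        model="internal-gpt",  # placeholder deployment name
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

print(answer("What risk factors should I dig into for a SaaS audit?"))
```

Because retrieval happens before generation, the model only ever sees documents the firm chooses to feed it, which is one reason this pattern suits firms with large private data stores.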
Will Bible, Deloitte’s audit and assurance digital transformation and innovation leader, confirmed that his firm, too, is developing a chatbot that draws on massive amounts of internal data to generate intelligent responses and provide insights on technical subject matters. At the moment, he said, the team is performing quality control on the bot to fine-tune its accuracy, which he conceded can still be iffy. The plan for now is to make the bot available for internal use only, but leaders are also evaluating whether it should be offered more broadly as part of the firm’s technical library.
“The research chatbot is a fairly direct use case. You’ve seen a lot of chatbots in the news, which makes a lot of sense with a natural language interface, but there are other areas where generative AI can play a role in terms of evaluating documents, summarization, those kinds of things. So our R&D is prototyping around applying these capabilities,” he said.
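Deloitte has not described its prototypes in detail, but the document-evaluation and summarization work Bible mentions is often handled with two-pass (“map-reduce”) summarization, where a long document is split into chunks that each fit in the model’s context window. A minimal sketch, reusing the same invented internal endpoint as above:

```python
# Sketch of two-pass ("map-reduce") summarization of a long document,
# one common pattern for the evaluation/summarization work described above.
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example/v1", api_key="internal-key")  # hypothetical

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="internal-gpt",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def summarize(document: str, chunk_chars: int = 8000) -> str:
    # Map: summarize each chunk independently so no single prompt
    # exceeds the model's context window.
    chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
    partials = [ask(f"Summarize the key points and risk factors:\n\n{c}") for c in chunks]
    # Reduce: merge the partial summaries into one brief.
    return ask("Combine these partial summaries into a single brief:\n\n" + "\n\n".join(partials))
```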
EY, meanwhile, has added generative AI capacity to its existing EY Canvas platform, which supports over 120,000 staff members as well as 350,000 clients worldwide. The addition is described not as a chatbot but as a “recommendation engine.” The program draws on the huge amount of data stored on the system, generated by the day-to-day activities of both staff and clients, allowing the AI to observe how people carry out processes and tasks in an engagement and to use those observations to inform its own insights and suggestions.
Richard Jackson, an EY assurance partner who specializes in technology, said it is the equivalent of drawing on the collective knowledge and experience of 10,000 professionals, reflecting what they did in similar situations. He compared it to bumping into an experienced colleague at the water cooler and talking out a problem, except on a mass scale.
“So instead of ‘who is Richard speaking to’ it’s more I get to have a machine to help me tap into the thousands of client insights we have. The mechanics of what I do are similar to what I do now in applying my own professional judgment, but now my frame of reference, and my input to that thought process, is not just the people in the office but a global scale,” he said, adding it is a great example of how EY is seeking to augment accountants rather than replace them.
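EY has not said how the Canvas recommendation engine works under the hood, but the behavior Jackson describes, surfacing what colleagues did in similar engagements, maps onto a familiar pattern: nearest-neighbor retrieval over embeddings of past engagement activity. A sketch of that pattern, with all data and names invented:

```python
# Illustrative nearest-neighbor "recommendation engine": given the task a
# professional is working on, surface what colleagues did in similar
# engagements. All data, names and endpoints here are invented.
import numpy as np
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example/v1", api_key="internal-key")  # hypothetical

past_engagements = [
    ("Manufacturing client, inventory count", "Performed two-site observation; tested cutoff."),
    ("SaaS client, revenue recognition", "Tested deferred revenue rollforward; sampled contracts."),
    ("Retail client, inventory count", "Used cycle-count data; expanded shrinkage testing."),
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

task_vectors = embed([task for task, _ in past_engagements])

def recommend(current_task: str, top_k: int = 2) -> list[str]:
    # Cosine similarity between the current task and past engagement tasks.
    q = embed([current_task])[0]
    scores = task_vectors @ q / (np.linalg.norm(task_vectors, axis=1) * np.linalg.norm(q))
    return [past_engagements[i][1] for i in np.argsort(scores)[::-1][:top_k]]

print(recommend("Planning inventory observation for a manufacturing client"))
```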
Rodrigo Madanes, EY’s global AI leader, added in an email that the firm is also working on a variety of generative AI applications, both for internal use and for its clients. Recent technological advances, he said, allow generative AI chatbots to be integrated with databases and other types of structured data, which makes it possible to create better user experiences. He cited the example of the EY Intelligent Payroll Chatbot, launched in March 2023 with the ability to answer employee payroll questions and personalize the employee experience. However, he added that while the firm is highly interested in generative AI, it is keenly aware of the technology’s inherent risks.
“EY is working on a variety of generative AI technology both for internal use and for our clients, and doing so following responsible AI guidelines. We are being careful in the development of our “chatbots” or conversational interfaces, as there are a number of well-known issues including hallucinations and biases. We have developed Responsible AI guidelines in order to ensure our technology is safe and that it augments the capabilities of people,” he said.
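Madanes does not describe the payroll chatbot’s internals. One widely used way to connect a chatbot to structured data, though, is to have the model translate a question into SQL, execute it against the database, and then phrase the result in plain language. A sketch with an invented schema:

```python
# Sketch of a chatbot over structured data: the model writes SQL for an
# invented payroll table, the app executes it, and the model phrases the
# result. Real deployments add SQL validation and read-only permissions.
import sqlite3
from openai import OpenAI

client = OpenAI(base_url="https://llm.internal.example/v1", api_key="internal-key")  # hypothetical

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payroll (employee TEXT, period TEXT, gross REAL, net REAL)")
db.execute("INSERT INTO payroll VALUES ('A. Smith', '2023-03', 6000.0, 4300.0)")

SCHEMA = "payroll(employee TEXT, period TEXT, gross REAL, net REAL)"

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="internal-gpt",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def payroll_answer(question: str) -> str:
    sql = ask(f"Schema: {SCHEMA}\nWrite one SQLite SELECT statement (SQL only) answering: {question}")
    rows = db.execute(sql).fetchall()  # a real system would validate the SQL first
    return ask(f"Question: {question}\nQuery result: {rows}\nAnswer in one sentence.")

print(payroll_answer("What was A. Smith's net pay for March 2023?"))
```

Grounding the answer in an actual query result, rather than the model’s memory, is one way such systems address the hallucination risk Madanes flags below.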
Cliff Justice, KPMG U.S. enterprise innovation leader, said the firm’s focus hasn’t been on a single generative AI tool but on a range of different ones for different purposes, developed in direct partnership with Microsoft. Professionals in KPMG’s advisory and tax practices already have access to GPT-like tools for tasks such as creating content, summarizing long documents, conducting research and assisting with code development. The tax practice, meanwhile, is combining the tools with a cloud technology platform, Digital Gateway, to assist with ESG reporting, and advisory staff are integrating generative AI with existing tools.
“As you can imagine, we have lots of data. We have existing data, the data created by our people, so we combine that and those tools and customize the output and software with the AI platforms to help automate, streamline and improve productivity across the firm,” he said.
KPMG Australia also touted its internal KymChat bot, described as a proprietary version of ChatGPT, which acts as an assistant for internal staff members. While its main use cases for now center on efficiency and innovation within the firm, leaders expect to eventually expand its capabilities to provide more functionality.
An oft-cited concern regarding AI bots is data privacy, especially given that some of the most popular, publicly available applications store all conversations on their servers. This is why many firms, while enthusiastic about ChatGPT, hesitate to use it for serious client work (see previous story). Given professional rules about client confidentiality, entering client data into ChatGPT (or other models like Bard or Claude) could represent an ethics violation.
This is less of a concern for the Big Four firms, however, all of which develop their AIs in their own secure environments, behind their own firewalls. The bots feed only on the data the firms give them, and disclose nothing to the outside world. Much of this is possible because of the firms’ partnerships with major players in the AI space, which give them access to development tools generally not available to the public. With direct access to these tools, they can train their models inside their secure environments without releasing client data.
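The article doesn’t detail any one firm’s hosting arrangement, but a partnership like KPMG’s with Microsoft typically means calling models through a private Azure OpenAI deployment, where prompts and completions stay inside the firm’s own cloud tenant and are not used to train the underlying models. A minimal sketch of what such a call looks like, with the endpoint, key and deployment name invented:

```python
# Sketch of calling a privately deployed model through Azure OpenAI.
# Endpoint, key and deployment name are invented; in this setup, requests
# are served from the firm's own cloud tenant rather than a public service.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://firm-internal.openai.azure.com",  # hypothetical private endpoint
    api_key="firm-managed-key",
    api_version="2024-02-01",
)

resp = client.chat.completions.create(
    model="firm-gpt-deployment",  # the firm's own model deployment, not a public one
    messages=[{"role": "user", "content": "Summarize the attached engagement memo."}],
)
print(resp.choices[0].message.content)
```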
PwC’s Atkinson said the firm is not pursuing these developments for their own sake but, rather, because its clients are likely doing the same. If PwC expects to be able to serve those clients in the future, he said, it will need to meet them where they are, which means growing its own AI capabilities.
“Today we see the overwhelming majority of services have a technology component to them. Tomorrow, in the AI world, all of them do. There is no delivery without application of AI in smart ways. I am confident we can deliver a ton of value today but our capacity to deliver value will explode in an AI world,” he said.