President Biden’s executive order on artificial intelligence was generally seen as a positive first step by accounting technology leaders, but more work remains to be done if the government wants to limit the risks presented by the new technology without strangling its potential.
The executive order, signed late last month, broadly calls for the development of standards and best practices to address various aspects of AI risk, such as detecting AI-generated content and authenticating official communications. It also directs government agencies to study how AI could affect the labor market and how agencies collect and use commercially available information, and it emphasizes the development of new technologies to protect privacy rights and bolster cybersecurity, training on AI discrimination, and the release of guidance on how different agencies should use AI.
Aaron Harris, chief technology officer at practice management solutions provider Sage, said the executive order is an important step forward, one that comes at a crucial time given the rapid proliferation of AI technologies. He felt the order sets the appropriate tone for AI development, as it emphasizes safe and responsible uses.
“Broadly speaking, the order sets a clear expectation for responsible AI practices. In the accounting space, the technology is already being used to automate tasks and handle interactions with customers and vendors, for example. Biden’s vision is a reminder of the need for such practices to adhere to ethical and responsible guidelines, including fairness, transparency and accountability,” he said.
Piritta van Rijn, head of product for accounting, tax and practice at Thomson Reuters, had a similar sentiment. While some might object to regulating this technology, she noted that the growth of AI depends on people’s willingness to trust it, which often involves regulation.
“The executive order signifies good progress. No matter whether you think AI regulation should be driven by the industry, by government or by a combination of these — it has a critical role to play in instilling public trust in AI. While we may not have all the answers right now, putting some fundamental AI guardrails in place, in the form of a globally aligned, cross-industry regulatory framework, will help address key concerns such as transparency, fairness and accuracy. This will help close the trust gap and unlock AI’s benefits in a trusted way,” she said.
Alex Hagerup, co-founder and CEO at payment automation and insights solutions provider Vic.ai, also felt the executive order was a good step, but was more guarded in his optimism. The order, he said, is certainly ambitious, and it is encouraging to see a comprehensive approach being adopted at the highest levels of government. At the same time, the order has a lot of moving parts, and its success will ultimately be contingent on how well the government implements its policies and, later, keeps them up to date.
“The order sets ambitious goals and, if effectively translated into actionable policies and regulations, it could serve as a significant step forward in AI governance. However, it is critical to ensure that these regulations keep pace with AI advancements and do not become obsolete,” he said. “While the executive order establishes a high-level framework for AI governance, the devil will be in the details of how these guidelines are enacted. Ongoing engagement with the AI community, including companies like Vic.ai, will be crucial to ensure that the rules are practical, promote innovation, and do not stifle the growth of AI in the accounting solutions space.”
Pascal Finette, co-founder and CEO of technology consulting firm “be Radical,” though, said it seems the order was crafted by someone who doesn’t really understand AI and is acting more out of fear than anything else, pointing to its far-reaching scope and language implying that AI is a weapon that must be controlled.
“Overall it feels far-reaching and somewhat reactionary to the perceived threat of AI as a ‘weapon,’ as is evident in the Biden administration invoking the war-time Defense Production Act,” he said. “The focus on foundational models, as well as the assumption that these will come from companies… seems misguided, as the regulatory lever is likely much easier applied at the application layer.” He added that, as the order is structured, it will have difficulty being applied to open source, community-driven models.
He noted that the order seems to have been triggered by a perspective on AI sometimes called “singularitarianism,” which holds that once AI reaches a certain level of development, it will inevitably enter a runaway feedback loop — one that some believe will ultimately produce a computer superintelligence and usher civilization into a new era where human primacy is no longer assured. Finette questioned whether it is prudent to start from such a premise. “To me, this is far from being clear,” he said.
He wasn’t very concerned that there would be any direct impacts on the accounting solutions space, as most vendors don’t create their own models but instead rely on those created by other companies, which may or may not fall under the executive order.
While he does not know for sure, Vic.ai’s Hagerup said the executive order will at the very least prompt a review of the company’s own AI platform to make sure it adheres to the new safety and security standards, once they are produced. “It may prompt additional privacy and red-team testing procedures. It also opens up opportunities to innovate within the space of privacy-preserving AI techniques, which could give us a competitive advantage and align with the company’s commitment to responsible AI usage,” he said.
Van Rijn, of Thomson Reuters, similarly said that while the executive order does not require the company to do anything different right now, once federal agencies complete their initial assessments, she expects it to fall within scope for certain compliance provisions due to its status as a U.S. contractor and AI developer. She was sanguine about this, however, as she felt her company was well aligned with the goal of promoting the responsible use of AI.
“Thomson Reuters is aligned with its imperative of harnessing the positive benefits that AI will bring to the accounting profession. AI is important to our ability to innovate and remain competitive. Across the accounting profession, AI could help firms improve their productivity and efficiency, it could automate mundane tasks, and free up time for tax professionals to focus on delivering additional value for their clients. It also has the potential to address human capital issues such as job satisfaction, wellbeing and work-life balance. We’re focused on developing solutions responsibly, and we are prioritizing enhancements that will be transformative to the way that tax professionals work,” she said.
Sage’s Harris raised a similar point. AI is a core part of Sage’s business, which means it benefits from the promotion of fair, responsible, accurate and safe AI models. The company, he said, will continue to ensure its workforce is fully capable of responsible and innovative AI product development, while being fully transparent about the development and use of the technology.
“Building and securing trust in the technology is as critical as the innovation itself,” he said. “Advancing these goals further requires the development of specific, accessible standards to underpin what safe and practical applications of the technology look like in reality.”