However, the adoption of AI is not without challenges. A recent assessment by Aphore Security of five major AI solutions—spanning three American and two Australian providers—revealed significant discrepancies between AI's capabilities and its real-world application, particularly concerning cybersecurity, ethics, and compliance.

Advisers imagine AI that could seamlessly generate meeting notes, sparing them from administrative burdens and allowing more time to forge client relationships. These systems could also rapidly identify investment strategies aligned with a client's individual needs and risk profile, offering insights that traditionally require substantial time and effort.

Questions of data integrity quickly arise: how trustworthy is the data, and who controls access to it? Aphore Security's findings suggest that U.S.-based AI solutions generally adhere more closely to compliance requirements than their Australian counterparts. This disparity underscores an ongoing issue for the Australian financial advisory landscape under the Notifiable Data Breaches scheme, where an eligible data breach must be reported to the regulator and affected clients, and can carry significant reputational and regulatory ramifications.

As Aphore's investigation found, some firms claim they take "reasonable steps to protect personal information," yet concede they cannot guarantee the security of information transmitted over the internet.

Another layer of complexity is AI's potential for bias. As these systems take on more advisory roles, they risk perpetuating biases embedded in their training data, undermining fairness. If the training data skews towards male perspectives, for instance, the insights generated might marginalize the financial needs of women and other underrepresented groups.

This issue poses particular concerns for Australian advisers, who are bound by the Financial Planners and Advisers Code of Ethics. To maintain a reputation for fairness and inclusivity, they must rigorously vet AI tools for discriminatory tendencies.

Another critical, but perhaps overlooked, aspect is securing informed client consent. Rhett Das, Integrity Compliance's managing director, highlights the necessity for "explicit client consent" when employing AI. Passive signals such as pop-up notifications are insufficient.

The Australian Securities and Investments Commission (ASIC) has taken note of AI's growing role in financial services, encouraging transparent and secure handling of client data in line with existing regulations, an expectation that becomes even more pressing when systems are hosted offshore.

Australia’s regulatory landscape for AI is still taking shape, with the government's AI Ethics Framework setting baseline standards for safe and equitable use. Its core principles, such as non-deceptive system design, strong privacy protections, and operational transparency, aim to safeguard users.

Yet specific, enforceable regulations remain scarce, so financial planners need to engage proactively with the existing principles now to avoid regulatory scrutiny or compliance breaches later.

Practical measures for navigating AI's regulatory and ethical complexities include:

  • Auditing AI vendors rigorously to confirm they meet international cybersecurity standards, and examining their data management protocols closely.
  • Establishing comprehensive governance policies that cover AI deployment, data handling, and routine compliance checks against current regulations.
  • Being absolutely clear with clients about how their data is used and securing documented consent; pop-up notifications alone do not suffice.

The evolving relationship between financial advisers and AI holds substantial promise, tempered by a duty of vigilance and deliberate action. Advisers can improve service delivery by earning client trust through uncompromised ethical standards and transparent practices, a task that remains uniquely human in a digital age.

_Michael Connory, CEO of Security In Depth, contributed insights for the firm's recent AI review._