Transparency & Responsibility — Forging the Gen AI Path


Session Insights
Written by Leslie Lang

Sam Hamilton

SVP, Global Head of Data & AI

Visa

JULY 2024

As the hype around generative AI continues to build, doing nothing is no longer an option. Wherever executives are in their journeys, analyzing both the opportunities and risks associated with GenAI is essential. At the recent Southern California CDAO Executive Summit, Sam Hamilton, Senior Vice President and Global Head of Data & AI at Visa, shared his approach to data use in this new generation of AI. 

He said the commercial impact of generative AI holds significant potential to transform entire industries. And given the unique characteristics of some businesses, the impact of generative AI could be even more profound.

He spoke about generative AI’s challenges, security issues, and remedies, as well as Visa’s principles for the responsible use of AI. He highlighted why data is paramount to unlocking AI potential and how responsible data use is more critical than ever.
 

Key AI Challenges Are Amplified by GenAI

Sam said AI is more than just another technology – it has the potential to shift our relationship with technology and change how we interact with machines. It is a collaborative system rather than simply a tool, and it has the potential to transform every industry by improving products, services, experiences, and productivity.

“I can’t think of one industry that is not going to use generative AI.”


With all the opportunities, generative AI poses some new challenges and introduces new threats. For instance, Sam mentioned that as more organizations around the globe begin using GenAI, we may see more issues related to fairness. Imperfect training data or decisions made by engineers developing GenAI models can introduce bias into algorithms.

In addition, privacy concerns can arise when users input private information that shows up in model outputs in a way that identifies them. Bad actors can use generative AI to accelerate the speed and sophistication of a cyberattack or poison the model to provide a malicious output. Finally, models can also produce different answers to the same prompts, impeding the user’s ability to assess the accuracy and reliability of outputs.

“Let’s keep a high standard when it comes to AI and GenAI and use it responsibly.”


Advanced Security Issues with GenAI

Sam said there are many areas where generative AI requires advanced security measures. “GenAI can help bad actors automate and speed up the process of finding and exploiting system vulnerabilities and hacking into them,” he explained. 

Some examples include: 

  • Adversaries can “poison” the data used to train AI models, causing them to make incorrect predictions or decisions.
  • Generative AI can enable synthetic identity fraud, creating fake identities with realistic names, addresses, and even social media profiles. These synthetic identities can be used to apply for loans or other financial services.
  • GenAI can also create deepfakes, highly realistic fake images, videos, or audio recordings, which can be used for misinformation, fraud, or identity theft.
  • He said it can generate compelling phishing emails or messages that trick users into revealing sensitive information.

Sam said that it’s critical to implement robust cybersecurity measures, regular system audits, and intrusion detection systems. It’s important to monitor system activities and log them for continuous analysis, which can help detect unusual patterns and potential threats. He suggested using adversarial training so that models learn to recognize and resist attempts to manipulate them.
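The continuous log analysis Sam describes can be as simple as flagging activity levels that deviate sharply from recent history. The sketch below is a minimal, illustrative example of that idea (not Visa's implementation): it scores each hour's event count against a trailing window using a z-score, a common starting point before adopting a full intrusion detection system.

```python
import statistics

def flag_unusual_activity(counts, window=24, threshold=3.0):
    """Flag hours whose event count deviates sharply from the trailing window.

    counts: chronological list of per-hour event counts taken from system logs.
    Returns a list of (hour_index, count, z_score) tuples for anomalous hours.
    """
    alerts = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
        z = (counts[i] - mean) / stdev
        if abs(z) > threshold:
            alerts.append((i, counts[i], round(z, 1)))
    return alerts

# 48 hours of steady traffic with one suspicious spike at hour 40
traffic = [100 + (i % 5) for i in range(48)]
traffic[40] = 400
print(flag_unusual_activity(traffic))  # reports the spike at hour 40
```

In practice, teams layer richer signals (per-user baselines, seasonality, model-based detectors) on top of this kind of threshold check, but the principle of comparing current behavior against a learned baseline is the same.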

Sam also spoke about data privacy being critical. Organizations must incorporate privacy considerations into the design and operation of their AI systems, such as data encryption and secure data storage, and enforce ethical guidelines for their use.
 

Solutions for Security Challenges

He talked about security issues surrounding users. “We need to use authentication and authorization mechanisms to control who can access an AI system and what actions they can perform,” Sam explained. “Every user of an AI system must have security training to ensure they understand potential risks and how to mitigate them.”

In addition, regularly monitoring and updating models helps them continuously learn and adapt, according to Sam. “Model or concept drift refers to the change in data patterns over time, which can lead to a decrease in model performance,” he explained. He suggested considering implementing drift detection algorithms.
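One widely used drift detection technique (offered here as an illustrative sketch, not something Sam specified) is the population stability index (PSI): bin a feature's baseline distribution, then measure how far the current data has shifted across those bins. A PSI above roughly 0.2 is a common rule of thumb for significant drift worth investigating.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Measure distribution shift between a baseline sample and current data.

    Bins the baseline's value range, compares the fraction of each sample
    falling into each bin, and sums the weighted log-ratio differences.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x > e for e in edges)  # which bin x falls into
            counts[idx] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / (len(sample) + 1e-6 * bins) for c in counts]

    p, q = bin_fractions(baseline), bin_fractions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

# Example: current data has shifted upward relative to the baseline
baseline = [i / 100 for i in range(1000)]      # roughly uniform on [0, 10)
shifted = [5 + i / 200 for i in range(1000)]   # concentrated on [5, 10)
psi = population_stability_index(baseline, shifted)  # well above 0.2
```

Running this kind of check on a schedule against production inputs gives teams an early signal to retrain or investigate before model performance visibly degrades.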

Third-party risk management is also essential to assess and manage risks associated with third-party vendors and service providers.

“Every organization must also have an incident response plan in place to respond to security incidents,” he added. “This includes identifying the issue, containing the damage, eradicating the cause, and recovering.”

Sam talked, too, about the increasing regulations and public scrutiny around AI. He said rigorous testing and validation of models and applications is critical, along with monitoring and complying with applicable data protection laws and regulations. “Clarify the purpose and variables of the model to better understand why the AI system is making certain predictions,” he said.
 

Data, Infrastructure, & Talent

Sam said data is key and needs to be high quality, relevant, and accessible. “Make sure your organization continues to invest in data infrastructure, data governance, and retention policies.”

Because AI handles large volumes of data and complex computations, investing in a high bandwidth network, high-speed storage, and high computation power (GPU and AI accelerators) is also critical.

It is also important to keep growing your community of data scientists, AI specialists, and data engineers, who can design, build, and maintain AI models. Invest in training to upskill and reskill the existing workforce so that they can leverage the latest AI tools and evolving technologies.
 

Visa’s Principles for the Responsible Use of AI

Sam said that as a pioneer of artificial intelligence (AI) in payments since 1993, Visa believes that AI should benefit individuals, businesses, and economies. This overarching approach is underpinned by Visa’s commitment to be accountable stewards of data, uphold privacy, and promote high standards of responsible, ethical, and compliant practices in every market where the organization operates. 

He shared Visa’s AI Principles, which offer guidance on the safeguards they strive to achieve and apply to the development, deployment, and assessment of AI systems and use cases across Visa.  

Visa’s AI Principles include:

  • Security: In pursuit of innovation, Visa strives to deploy AI systems with confidentiality, integrity, and reliability to help ensure robust levels of security and safety for the individuals, businesses, and partners participating in its payments ecosystem. Sam said that at Visa, security is the number one job of every employee. “There is no higher priority for Visa than safeguarding those who use our products, services and network,” he noted.
  • Control: Visa works to deploy AI systems that respect privacy by design, with controls and governance to create a trusted, confidence-inspiring ecosystem for individuals, businesses, and partners.
  • Value: Visa invests in and employs AI to drive innovation and support its mission to uplift everyone, everywhere by being the best way to pay and be paid.
  • Fairness: In developing AI systems, Visa pursues programs to promote responsible innovation and the ethical use of AI, protect individual rights, and build societal confidence in AI.
  • Accountability: Visa works to align decisions made or informed by AI with the organization's values and clearly define roles and responsibilities across technology, operational and management stakeholders.

Sam said maintaining trust in this era of ubiquitous digital commerce means upholding security. “As fraudsters try to capitalize on increasing digitization, continued policy frameworks for data-driven innovation will remain vital to keeping financial transactions secure. To implement these policy frameworks, the public and private sectors will need to work hand-in-hand,” he shared.

“That’s where Visa is well-positioned to be a leader in AI,” he said. “Through our AI principles, our strong understanding of consumer needs, and our foundation in data ethics, we can bridge that gap.”

Sam concluded by sharing, “Generative AI is an incredible tool that unlocks vast potential in many fields. It can enhance productivity and efficiency and drive innovation.” 

“At the same time, we must remain cautious of the threats it poses,” he continued. “It is our responsibility to ensure that we use this powerful technology ethically and responsibly. Let us remain vigilant as we continue to explore the exciting possibilities of Generative AI.”

To share and discuss more about generative AI and other key topics for CDAOs, join an Evanta CDAO community. If you are already a member, sign in to MyEvanta to explore opportunities to get together in person and exchange best practices with your CDAO peers. 
 

Content adapted from the Southern California CDAO Executive Summit. Special thanks to Sam Hamilton and Visa.
