The regulatory landscape for AI in finance is complex and evolving. Financial institutions continue to face a web of regulations designed to protect consumers and maintain market stability and security. Compliance starts with data protection laws (such as GDPR in Europe), anti-money laundering rules, and fair lending requirements. AI systems must also be explainable and auditable: regulators may demand a detailed account of how a model arrives at a loan approval decision or a market prediction, so the institutions that build and deploy these systems must be able to document and justify that behavior. Institutions must further ensure that AI models are robust and resilient, meaning they can withstand cyberattacks and data breaches without compromising their outputs or the data they handle.

The Bank for International Settlements emphasizes that central banks need to stay ahead of technological advancement so they can develop regulatory frameworks that promote innovation while safeguarding financial integrity and security. Financial institutions, for their part, should conduct regular compliance audits and risk assessments to verify adherence to regulatory standards. By proactively addressing regulatory challenges as they evolve, financial institutions internationally can better leverage AI to drive innovation and improve operational efficiency, all while maintaining the highest standards of compliance and ethical conduct.
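To make the explainability requirement concrete, the sketch below shows one simple way an institution might attach a per-decision explanation to a loan-approval model. The dataset, feature names, and the `explain_decision` helper are hypothetical illustrations rather than a prescribed method; coefficient-based contributions from a logistic regression are only one of several interpretability techniques (SHAP values and counterfactual explanations are common alternatives).

```python
# A minimal sketch, assuming a hypothetical loan-approval dataset with three
# applicant features. A logistic regression's coefficient-times-feature
# contributions (on the log-odds scale) give a simple per-decision explanation
# that can be logged alongside each approval for later audit.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: [income, debt_to_income_ratio, credit_history_years]
X = np.array([
    [85_000, 0.20, 12],
    [42_000, 0.55,  3],
    [63_000, 0.35,  7],
    [30_000, 0.60,  1],
    [95_000, 0.15, 15],
    [51_000, 0.45,  4],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

feature_names = ["income", "debt_to_income", "credit_history_years"]

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_decision(applicant):
    """Return the approval decision plus each feature's log-odds contribution."""
    x_scaled = scaler.transform([applicant])
    decision = int(model.predict(x_scaled)[0])
    contributions = dict(zip(feature_names, model.coef_[0] * x_scaled[0]))
    return decision, contributions

# Example: explain a single hypothetical applicant.
decision, contributions = explain_decision([58_000, 0.40, 5])
print("approved" if decision else "denied")
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.3f}")
```

Recording this kind of contribution breakdown with every decision gives auditors and regulators a concrete record of why a given applicant was approved or denied, which is the sort of documentation the explainability and auditability expectations point toward.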