
InfoBytes Blog

Financial Services Law Insights and Observations



  • SEC Chair Gensler weighs in on AI risks and SEC’s positioning

    Privacy, Cyber Risk & Data Security

    On February 13, SEC Chair Gary Gensler delivered a speech, “AI, Finance, Movies, and the Law,” at Yale Law School. In his speech, Gensler addressed the intersections of artificial intelligence (AI) and finance, macro-scale systemic risks, AI-enabled deception, AI washing, and hallucinations, among other topics.

    Gensler discussed the benefits of using AI in finance, including greater financial inclusion and efficiency. However, he cautioned that the use of AI amplifies many existing issues, noting that AI models can make flawed decisions, propagate biases, and offer unreliable predictions. On a system-wide level, Gensler opined that policy decisions will require new thinking to overcome the challenges to financial stability that AI could create. Gensler also addressed AI washing, stating that it may violate securities laws and emphasizing that any AI-related disclosures by SEC registrants should still follow the “basics of good securities lawyering”: disclosing material risks, defining each risk carefully, and avoiding disclosures that could mislead the public about the use of an AI model. Lastly, Gensler warned about AI hallucinations, saying that advisors and brokers are not supposed to give investment advice based on inaccurate information, closing with, “You don’t want your broker or advisor recommending investments they hallucinated while on mushrooms.”

    Privacy, Cyber Risk & Data Security Artificial Intelligence Securities Exchange Act Securities AI

  • Connecticut Attorney General reports on Connecticut Data Privacy Act

    State Issues

    On February 1, Connecticut’s Attorney General (AG) released a report on the Connecticut Data Privacy Act (CTDPA), including information on the law and how the state enforces it. Enacted in May 2022, the CTDPA is a comprehensive consumer data privacy law that took effect on July 1, 2023. The CTDPA gives Connecticut consumers a set of rights regarding their personal information and sets privacy standards for businesses handling such data. Connecticut residents can: (i) see what data companies have on them; (ii) ask for corrections to inaccurate information; (iii) request the deletion of their data; and (iv) choose not to have their personal information used for the sale of products, targeted advertising, or profiling. The report noted that within the first six months the CTDPA has been in effect, the AG issued dozens of violation notices in connection with a number of information requests. It added that companies generally responded positively to the notices and quickly updated their privacy policies and consumer rights mechanisms. According to the report, while some companies initially fell short of the CTDPA’s requirements, they later made changes to comply, and a few went beyond the areas identified in the notices by strengthening their disclosures.

    The report also mentioned that, beginning on January 1, 2025, businesses will be required to recognize universal opt-out signals, which reflect consumers’ choice to opt out of targeted advertising and the sale of personal data. This mandatory provision, emphasized during Connecticut’s legislative process, is intended to reduce the burden on consumers. Finally, the report discusses possible expansions and clarifications to the CTDPA for the legislature to consider.
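    The report does not name a specific universal opt-out signal; Global Privacy Control (GPC) is one widely cited example, transmitted by browsers as the `Sec-GPC: 1` HTTP header. A minimal sketch of how a business’s server might detect such a signal (the function name and the header dictionary are illustrative, not drawn from the report):

```python
def honors_universal_opt_out(headers: dict) -> bool:
    """Return True if the request carries a universal opt-out signal.

    Checks the Sec-GPC header defined by the Global Privacy Control
    (GPC) proposal -- one example of the "universal opt-out signals"
    that laws like the CTDPA require businesses to honor.
    """
    # Per the GPC proposal, the header is exactly "Sec-GPC: 1" when set.
    return headers.get("Sec-GPC", "").strip() == "1"


# Hypothetical request from a browser with GPC enabled.
request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}

targeted_ads_allowed = True
if honors_universal_opt_out(request_headers):
    # Treat the signal as an opt-out of targeted advertising and
    # the sale of this consumer's personal data.
    targeted_ads_allowed = False
```

    In practice the check would sit in front of any ad-targeting or data-sale logic, so the signal is honored without requiring the consumer to take further steps.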

    State Issues Connecticut State Attorney General Privacy, Cyber Risk & Data Security

  • New York Governor proclaims January 21-27 as Data Privacy Awareness Week

    Privacy, Cyber Risk & Data Security

    On January 26, New York Governor Kathy Hochul issued a proclamation establishing January 21-27, 2024, as Data Privacy Awareness Week in partnership with several state agencies, including NYDFS. Traditionally celebrated as Data Privacy Day, this is the first time the observance has been expanded to an entire week. The proclamation addresses ways that citizens can protect their personal information against bad actors; the week is designed to help “educate the public” and raise awareness of the importance of data privacy. The press release highlights how consumers can keep their personal information private and protect themselves, including: keeping applications up to date; using unique and complex passwords for every account; enabling multi-factor authentication on devices; exercising caution when opening unsolicited links in emails or messages; limiting the amount of personal data collected by websites; considering what personal information is shared on social media; setting up a virtual private network (VPN); and being careful when using public wi-fi networks.

    Privacy, Cyber Risk & Data Security New York Governors NYDFS Consumer Education

  • California Attorney General investigates streaming services for CCPA violations

    Privacy, Cyber Risk & Data Security

    On January 26, California State Attorney General Rob Bonta announced an investigative initiative by issuing letters to businesses operating streaming apps and devices, accusing them of non-compliance with the California Consumer Privacy Act (CCPA). The focus of the investigation is the evaluation of streaming services’ adherence to the CCPA's opt-out requirements, in particular those businesses that sell or share consumer personal information. The investigation targets businesses failing to provide a direct mechanism for consumers wishing to prevent the sale of their data.

    AG Bonta urged consumers to know about and exercise their rights under the CCPA, emphasizing the right to instruct businesses not to sell their personal information. The CCPA grants California consumers enhanced rights regarding the collection, sharing, and disclosure of their personal information by businesses, and compliance responsibilities include responding to consumer requests and providing necessary notices about privacy practices. AG Bonta noted that the CCPA’s opt-out provision requires businesses that sell or share personal data for targeted advertising to offer an easy, minimal-step process for consumers to exercise that right. For example, users should be able to easily navigate their streaming service’s mobile application settings to enable the “Do Not Sell My Personal Information” option. The expectation is that this choice remains effective across devices as long as users are logged into their accounts when electing to opt out. Finally, Bonta added that consumers should be given easy access to a streaming service’s privacy policy outlining their CCPA rights.

    Privacy, Cyber Risk & Data Security State Issues State Attorney General CCPA California Compliance Opt-Out Consumer Protection

  • NIST group releases drafts on TLS 1.3 best practices aimed at the financial industry

    Privacy, Cyber Risk & Data Security

    On January 30, the NIST National Cybersecurity Center of Excellence (NCCoE) released a draft practice guide, titled “Addressing Visibility Challenges with TLS 1.3 within the Enterprise.” The protocol in question, Transport Layer Security (TLS) 1.3, is the most recent iteration of the security protocol most widely used to protect communications over the Internet, but its implementation over TLS 1.2 (the prior version) remains challenging for major industries, including finance, that need to inspect incoming network traffic data for evidence of malware or other malicious activity. A full description of the project can be found here.

    Compared to TLS 1.2, TLS 1.3 is faster and more secure, but its implementation of forward secrecy, i.e., protecting past sessions against compromises of keys or passwords used in future sessions, creates challenges for data audit and the legitimate inspection of network traffic. As a result, NIST released the practice guide to offer guidance on how to implement TLS 1.3 and meet audit requirements without compromising the TLS 1.3 protocol itself. The practice guide suggests how businesses can improve their technical methods, such as implementing a passive inspection architecture using either “rotated bounded-lifetime [Diffie-Hellman] keys on the destination TLS server” or exported session keys, to support ongoing compliance with financial industry and other regulations requiring continuous monitoring for malware and cyberattacks. The draft practice guide is currently under public review, with Volumes A and B open for comment until April 1, 2024. Volume A is a second preliminary draft of the Executive Summary, and Volume B is a preliminary draft covering the Approach, Architecture, and Security Characteristics.
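    As an illustration of the exported-session-keys option, Python’s standard `ssl` module can write per-session TLS secrets in the NSS key log format that inspection tools such as Wireshark consume. This is a minimal sketch of the mechanism under stated assumptions, not the NIST reference architecture; the temp-file destination is illustrative only:

```python
import ssl
import tempfile

# TLS 1.3's forward secrecy prevents after-the-fact decryption of captured
# traffic, so one passive-inspection approach is to export each session's
# secrets to a controlled, audited location while the connection is live.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3  # require TLS 1.3

# Each handshake appends its session secrets to this file in NSS key log
# format. The temp file here is illustrative; in a real deployment the log
# would feed an authorized inspection appliance, not sit unprotected on disk.
keylog = tempfile.NamedTemporaryFile(suffix=".log", delete=False)
context.keylog_filename = keylog.name

# Any socket wrapped with this context, e.g.
#   context.wrap_socket(sock, server_hostname="example.com")
# now records the secrets an authorized monitor needs to decrypt that
# session, without weakening the TLS 1.3 protocol itself.
```

    The design trade-off the NIST draft wrestles with is visible here: the key log restores auditability, but it must itself be protected as carefully as the traffic it can decrypt.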

    Privacy, Cyber Risk & Data Security Data Internet Privacy NIST

  • CFTC’s subcommittee report on decentralized finance highlights its findings and recommendations

    Privacy, Cyber Risk & Data Security

    On January 8, the CFTC issued a report on decentralized finance ahead of the CFTC’s event on artificial intelligence, cybersecurity, and decentralized finance. Authored by the CFTC’s Subcommittee on Digital Assets and Blockchain Technology, a group of fintech experts selected by the CFTC, the report urged government and industry to work together to advance the development of decentralized finance in a responsible and compliant way.

    The report lists many key findings and recommendations for policymakers to implement. For example, it highlights how policymakers should keep in mind, among other things, customer and investor protections, the promotion of market integrity and financial stability, and efforts to combat illicit finance when creating regulations. Recommendations for policymakers include increasing their technical understanding of the space, surveying the existing regulatory “perimeter,” identifying and cataloging risks, identifying the range of regulatory strategies, applying regulatory frameworks to digital identity and to KYC and AML regimes, and calibrating privacy protections in decentralized finance.

    For further learning on decentralized finance, IOSCO released a publication on its nine recommendations, which was previously covered by InfoBytes here.

    Privacy, Cyber Risk & Data Security CFTC Decentralized Finance Blockchain IOSCO Financial Stability

  • FTC alleges data broker company mishandled consumer location data

    Federal Issues

    On January 9, the FTC released a proposed order and complaint against a data broker that sells consumer location data to companies. According to the complaint, which alleges seven violations of the FTC Act, the data broker company had no policies or procedures in place to remove any of the raw data from the location data sets that it sold, which could be used to identify sensitive personal information. The FTC alleges that because of this, the data broker company failed to provide “necessary technical safeguards” to ensure that consumers’ privacy choices were honored. The FTC also alleges that the data broker’s contracts with entities to purchase the data were “insufficient to protect consumers from the substantial injury caused by the collection, transfer, and use of the consumers’ location data” as they visit sensitive locations, such as churches, healthcare facilities, and schools.

    The data broker company collected 10 billion location data points daily worldwide through its apps, but it failed to inform consumers that it sold this data to advertisers, employers, or government contractors. The FTC further alleges that the data broker’s business practices are likely to cause substantial injury to consumers due to its lack of reasonable data security measures.

    According to the proposed order, the company must comply with FTC mandates that prohibit misrepresentations about its use of the data, prohibit the use, sale, or disclosure of sensitive location data, and require it to implement a sensitive location data program. The data broker neither admits nor denies any wrongdoing, and the FTC did not levy a money judgment.

    Federal Issues Data Brokers Consumer Data FTC Act Privacy, Cyber Risk & Data Security

  • FSOC report highlights AI, climate, banking, and fintech risks; CFPB comments

    Privacy, Cyber Risk & Data Security

    On December 14, the Financial Stability Oversight Council (FSOC) released its 2023 Annual Report on vulnerabilities and risks to financial stability and recommendations to mitigate those risks. The report was cited in a statement by CFPB Director Rohit Chopra to the Secretary of the Treasury. In his statement, Chopra said “[i]t is not enough to draft reports [on cloud infrastructure and artificial intelligence], we must also act” on plans to ensure financial stability with respect to digital technology in the upcoming year. In its report, the FSOC notes the U.S. banking system “remains resilient overall” despite several banking issues earlier this year. The FSOC’s analysis breaks down the health of the banking system for large and regional banks by reviewing banks’ capital and profitability, credit quality and lending standards, and liquidity and funding. On regional banks specifically, the FSOC highlights that they carry greater exposure to commercial real estate loans than large banks, a risk heightened by higher interest rates.

    In addition, the FSOC views climate-related financial risks as a threat to U.S. financial stability, presenting both physical and transition risks. Physical risks are acute events such as floods, droughts, wildfires, or hurricanes, which can impose additional risk-reduction costs, force firm relocations, or threaten access to fair credit. Transition risks include technological changes, policy shifts, or changes in consumer preference, any of which can force firms to take on additional costs. The FSOC notes that, as of September 2023, the U.S. had experienced 24 climate disaster events with losses exceeding $1 billion each, more than the 2018-2022 annual average of 18 events. The FSOC also notes that member agencies should be engaged in monitoring how third-party service providers, like fintech firms, address risks in core processing, payment services, and cloud computing. To support the need for oversight of these partnerships, the FSOC cites a study finding that 95 percent of cloud breaches occur due to human error. The FSOC highlights that fintech firms face compliance, financial, operational, and reputational risks, particularly when they are not subject to the same compliance standards as banks.

    Notably, the FSOC is the first top regulator to state that the use of artificial intelligence (AI) technology presents an “emerging vulnerability” in the U.S. financial system. The report notes that firms may use AI for fraud detection and prevention, as well as for customer service. The FSOC notes that AI has benefits for financial institutions, including reducing costs, improving efficiency, identifying complex relationships, and improving performance. The FSOC states that while “AI has the potential to spur innovation and drive efficiency,” it requires “thoughtful implementation and supervision” to mitigate potential risks.

    Privacy, Cyber Risk & Data Security Bank Regulatory FSOC CFPB Artificial Intelligence Banks Fintech

  • EU Commission, Council, and Parliament agree on details of AI Act

    Privacy, Cyber Risk & Data Security

    On December 9, the EU Commission announced a political agreement between the European Parliament and the European Council regarding the proposed Artificial Intelligence Act (AI Act).  The agreement is provisional and is subject to finalizing the text and formal approval by lawmakers in the European Parliament and the Council. The AI Act will regulate the development and use of AI systems, as well as impose fines on any non-compliant use. The object of the law is to ensure that AI technology is safe and that its use respects fundamental democratic rights while balancing the need to allow businesses to grow and thrive. The AI Act will also create a new European AI Office to ensure coordination, transparency, and to “supervise the implementation and enforcement of the new rules.” According to this EU Parliament press release, powerful foundation models that pose systemic risks will be subject to specific rules in the final version of the AI Act based on a tiered classification.

    Except for foundation models, the EU AI Act adopts a risk-based approach to the regulation of AI systems, classifying them into different risk categories: minimal risk, high risk, and unacceptable risk. Most AI systems would be deemed minimal risk since they pose little to no risk to citizens’ safety. High-risk AI systems would be subject to the heaviest obligations, including certifications on the adoption of risk-mitigation systems, data governance, activity logging, documentation obligations, transparency requirements, human oversight, and cybersecurity standards. Examples of high-risk AI systems include those used in utility infrastructure, medical devices, institutional admissions, law enforcement, biometric identification and categorization, and emotion recognition. AI systems deemed “unacceptable” are those that “present a clear threat to the fundamental rights of people,” such as systems that manipulate human behavior, like “deep fakes,” and any type of social scoring done by governments or companies. While some biometric identification is allowed, “unacceptable” uses include emotion recognition systems in the workplace or by law enforcement agencies (with narrow exceptions).

    Sanctions for breach of the law will range from a low of €7.5 million or 1.5 percent of a company’s global total revenue to as high as €35 million or 7 percent of revenue. Once adopted, the law will be effective from early 2026 or later. Compliance will be challenging (the law targets AI systems made available in the EU), and companies should identify whether their use and/or development of such systems will be impacted.

    Privacy, Cyber Risk & Data Security Privacy European Union Artificial Intelligence Privacy/Cyber Risk & Data Security Of Interest to Non-US Persons

  • NYDFS settles with title insurance company for $1 million

    Privacy, Cyber Risk & Data Security

    On November 27, the NYDFS entered into a consent order with a title insurance company, requiring the company to pay $1 million for failing to maintain and implement an effective cybersecurity policy and to correct a cybersecurity vulnerability. The vulnerability allowed members of the public to access others’ nonpublic information, including driver’s license numbers, social security numbers, and tax and banking information. The consent order indicates the title insurance company discovered the vulnerability as early as 2018. Its failure to correct the vulnerability violated Section 500.7 of the Cybersecurity Regulation.

    In May 2019, a cybersecurity journalist published an article on a vulnerability in the title insurance company’s application that led to the public exposure of 885 million documents, some discoverable through search engine results. The journalist noted that “replacing the document ID in the web page URL… allow[ed] access to other non-related sessions without authentication.” Following the article, and as required by Section 500.17(a) of the Cybersecurity Regulation, the title insurance company notified NYDFS of the vulnerability, at which point NYDFS investigated further. The company has been ordered to pay the penalty no later than ten days after the effective date of the consent order.

    Privacy, Cyber Risk & Data Security State Issues Securities NYDFS Auto Insurance Enforcement
