Whether you run a business or work in marketing or operations, you may have witnessed a new trend: rather than allocate resources toward onboarding a new employee, many businesses are choosing to “hire” something that won’t need a paycheck. Artificial intelligence (“AI”) is touching, and will continue to touch, every aspect of our professional and personal lives. Like many, you may have already asked yourself how you might employ AI, with its seemingly myriad applications from customer service and data analysis to supply chain optimization, recruitment, and more. Over the past few years, many businesses have opted to tackle one or many of these areas by implementing AI tools into their existing customer-facing websites or back-end infrastructure, hoping to gain a competitive edge.1
Implementing such AI tools into a business presents various compliance challenges that must be considered, stemming from the European Union AI Act in Europe and state-specific legislation in the United States. Businesses must now ensure that any implementation of an AI tool adheres to regulations such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the United States. These regulations mandate strict requirements for the handling and protection of personal information, and other matters including consumer rights, accessibility standards, and advertising. Noncompliance with these regulations can lead to significant fines and penalties, reputational damage, and loss of consumer trust.2 Therefore, it is crucial for businesses to assess their potential financial exposure and risk when “onboarding their most recent hire.”
In this article, we will cover the new EU and US AI laws and provide pointers on how businesses can best protect themselves against noncompliance when launching AI tools within their business.
The European Union’s new AI law, aptly named the “Artificial Intelligence Act” (or “the Act”), aims to establish a comprehensive regulatory framework for the development, deployment, and use of AI technologies across member states. To do so, it identifies and distinguishes classes of AI based on their potential risks and impact on society. Under these classifications, the Act identifies: who must comply with the law by separating them into “Providers” and “Deployers” of AI systems3; and how compliance occurs by providing mandates based on the type and application of the AI system.
The Act identifies businesses as “Providers” if they develop AI systems and place them into commerce under their own name or trademark.4 Recognizing that developers of AI have a special responsibility to the public, the Act requires providers to comply with strict data governance measures, ensuring the data used to generate outputs from the system is of high quality, relevant, and free from bias. This includes a requirement to maintain accurate records of all data sources and processing activities. Providers of high-risk AI systems, classified as such because they, for example, comprise a safety component of a product, are related to biometrics or facial recognition, are used in education or employment, or are covered by EU health and safety harmonization laws, must abide by additional rules related to human oversight, robustness, accuracy, and security.5
Under the Act, “Deployers” are defined as “a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”6 Recognizing the public-facing role deployers play, the Act requires that deployers disclose to end-users, in a clear and distinguishable manner at the time of first exposure, that they are interacting with AI technology. This allows end-users the opportunity to make an informed decision about whether to initiate or discontinue engagement with AI for the task at hand.7
Like cookie preference and data collection notices, the above disclosure can be made in the form of an unavoidable pop-up or text box on the page where the user interacts with the AI system—a feature that is easy to accommodate.8 Beyond this immediate transparency, businesses must also make disclosures to customers required for GDPR compliance, such as whether and how they collect personal information, and whether and how that information is shared with third parties.9 These statements can be made as part of the business’s overall privacy policy.
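For readers on the implementation side, the disclosure pattern described above can be sketched in a few lines of code. The sketch below is purely illustrative: the function names, the notice wording, and the chat-widget scenario are assumptions, not language drawn from the Act or from any official guidance, and the disclosure text would need to be drafted with counsel.

```javascript
// Illustrative sketch of an AI-interaction disclosure shown at the time of
// first exposure, before a user's first exchange with a chat tool.
// All names and wording here are hypothetical examples, not legal advice.

// Build the written disclosure in the same mode as the interaction (text).
function buildAiDisclosure(toolName) {
  return (
    `You are about to interact with ${toolName}, an artificial intelligence ` +
    `system. Responses are generated automatically. You may end this ` +
    `interaction at any time.`
  );
}

// Gate the notice so it appears once, on first exposure, and is not
// repeated after the user acknowledges it.
function makeDisclosureGate() {
  let acknowledged = false;
  return {
    // Returns the notice text on first call; null once acknowledged.
    noticeFor(toolName) {
      return acknowledged ? null : buildAiDisclosure(toolName);
    },
    // Called when the user dismisses the (unavoidable) pop-up.
    acknowledge() {
      acknowledged = true;
    },
  };
}
```

In a real deployment, the string returned by `noticeFor` would populate a modal or text box that blocks the chat interface until acknowledged, mirroring the cookie-banner pattern the article compares it to.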
The EU Artificial Intelligence Act was formally adopted on May 21, 2024 and is expected to be published this month (July 2024). It will go into effect twenty days after it has been published in the official journal of the European Union, and becomes fully applicable 24 months after taking effect, so businesses have a significant grace period to comply with the new rules.10
Absent comprehensive federal regulation, state legislatures across the United States are taking the initiative to enact their own rules and regulations governing the development, implementation, and use of AI tools and systems. Recent acts in Utah and Colorado have formed the basis of a patchwork of legislation focused on disclosure, much like that created in the wake of the EU’s GDPR, when state legislatures sought to emulate the European Union’s privacy protections. While state AI laws will vary, the basic principles of risk management and full disclosure will serve as guideposts for compliance.
Leading the charge in AI regulation was Utah, which in March 2024 enacted its Artificial Intelligence Policy Act (UAIP) effective May 1, 2024.11 The UAIP only concerns generative AI, defined as “an artificial system that (a) is trained on data; (b) interacts with a person using text, audio, or visual communication; and (c) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”12 Under the UAIP, those who deploy generative AI in “regulated occupations” (i.e., roles that require a license or state certification, like physicians) must “prominently” disclose that a consumer is interacting with AI at the beginning of any interaction.13 Importantly, the disclosure must be made in the mode of interaction; for example, preceding a written conversation with generative AI, the disclosure must be written.14
Not far behind was Colorado, which entered the AI regulatory space in June 2024 with its landmark Artificial Intelligence Act (“CAIA”), adopting a risk-based approach to AI regulation that shares features of the EU’s AI Act. The law, set to take effect February 1, 2026, mandates that businesses deploying high-risk AI systems in Colorado in public-facing applications conduct and publicly release the results of AI “impact assessments” designed to evaluate the potential consequences of the systems on privacy, security, and bias.15 The law defines “high-risk” AI systems as those that make, or play a role in making, “consequential decisions”—decisions that have a material legal or similarly significant effect on a consumer’s educational, employment, or financial opportunities.16 And regardless of whether the system is classified as high risk, the law requires that all AI deployers operating in Colorado disclose to consumers that they are interacting with an AI system, unless said interaction would be obvious to a reasonable person.17 According to the Act, a Deployer must disclose “in a manner that is clear and readily available on the Deployer’s website: the types of high-risk artificial intelligence systems currently deployed; how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the AI system; and, in detail, the nature, source, and extent of the information collected and used by the Deployer.”18
Because these laws, and those that will emerge from states like New York and Massachusetts, will have unique focal points ranging from consumer protection to discrimination prevention, the compliance challenge ahead will be to reconcile the shared features of these laws and supplement a basic compliance scheme with the unique requirements of the state laws with which one’s business must comply. We at Haug Partners are prepared to guide clients through this challenge so their business can make the best use of their new “employee.”
777 South Flagler Drive
Phillips Point East Tower, Suite 1000
West Palm Beach, FL 33401