By: Maricela Peña, Director of the Labor Law Department at Guardia Montes Abogados
The accelerated advancement of artificial intelligence (AI) has profoundly transformed workplace environments worldwide. Costa Rica, with its dynamic labor market and growing digitalization, is not exempt from this reality. While AI promises to optimize processes, reduce costs, and improve operational efficiency, its application without adequate controls raises important questions regarding Labor Law, corporate responsibility, and professional ethics.
An emerging phenomenon in the workplace is the use of artificial intelligence tools to perform tasks that traditionally corresponded to human workers, without proper supervision or human validation.
This practice raises fundamental questions: what happens when decision-making or the development of work products is completely delegated to an automated system, without the control or human judgment that professional responsibility requires?
It is important to remember that artificial intelligence, however advanced, possesses neither discernment nor ethical judgment. In fact, it can generate false, erroneous, or misleading results and present them as if they were real or truthful. So-called "hallucinations," plausible but incorrect responses delivered with apparent confidence, demonstrate that its output is neither infallible nor a substitute for human reasoning. Therefore, the uncontrolled use of these tools can compromise work quality, the veracity of information, and the legal and reputational standing of those who employ them.
In a workplace environment, this improper use can cause reputational damage and conflicts, and create liability for the worker, the employer, the company, and, of course, affected third parties or clients.
Who is responsible for these types of errors or damages?
The case of large firms and multinational corporations illustrates the reputational risks of uncontrolled AI use. According to international reports, professional services firms allegedly issued documents, opinions, and analyses partially generated with artificial intelligence tools, without sufficient human review or validation; these contained hallucinations and errors of both form and substance. The result was the publication of erroneous information, which triggered a reputational crisis and ethical questions about the professional diligence of those firms, with direct consequences for their clients.
In the international context, regulations such as the European Union’s General Data Protection Regulation (GDPR) already contemplate specific requirements when AI is used in the workplace. The GDPR requires data protection impact assessments (DPIA) when implementing AI systems that process employee personal data, transparency in the use of automated technology, and the right of workers not to be subject to decisions based solely on automated processing.
In the Costa Rican context, a similar situation could involve internal infractions and even a violation of the Law on Protection of Persons Against the Processing of Their Personal Data (Law No. 8968) if personal data were used without consent or adequate safeguards.
The unverified use of AI could also constitute a serious breach by the employer or worker, depending on who authorized or executed the automated process, affecting not only the employment relationship but also the corporate image and public trust in the entity.
The American Bar Association (ABA), in its Formal Opinion 512 on generative artificial intelligence, establishes fundamental principles that can serve as a reference for responsible AI use in professional environments. The ABA emphasizes that professionals must maintain competence regarding the AI tools they use, protect information confidentiality, communicate transparently about the use of these tools, and ensure that any AI-related charges are reasonable and proportional to the time actually spent.
Several state bar associations in the United States, including California, Florida, New York, and Pennsylvania, have issued specific ethical guidelines on AI use. A common principle in all these guidelines is that the professional maintains ultimate responsibility for everything the AI tool produces, emphasizing that AI is a tool that assists but does not replace professional judgment and experience.
Costa Rica currently lacks specific regulations governing the use of artificial intelligence in the workplace; however, our country does provide a legal framework that could be interpreted in an integrative manner to guarantee fundamental rights such as:
- The right to dignified and safe work.
- The right to information and transparency in automated decision-making.
- The employer’s responsibility for acts executed through systems under their control or supervision.
- Responsibility for the direct consequences of their actions.
The incorporation of artificial intelligence in the Costa Rican workplace represents an opportunity for innovation, but also a legal and ethical challenge of the highest order. Labor Law must adapt to ensure that technology does not displace or dehumanize work, but rather complements it under principles of equity, transparency, and responsibility.
Cases reported in the world press serve as a warning: delegating critical tasks to AI without human verification can not only produce technical errors but also undermine professional credibility and expose the organization to legal responsibility.
The future of work in Costa Rica will depend, to a great extent, on the ability of companies, workers, and the State to regulate and supervise the use of artificial intelligence with human, ethical, and legal criteria.
If an employer detects that a worker is making improper use of artificial intelligence tools, whether in internal company matters or in external activities that could compromise relationships with clients, suppliers, or even authorities, it is essential to document the case carefully. Based on that documentation, the employer should evaluate proportional disciplinary measures, considering the severity of the conduct, the risk created for the organization, and the correspondence between the offense and the sanction; depending on the circumstances, this could even lead to termination with or without employer liability.
However, before adopting any disciplinary decision, it will be crucial to verify the existence and validity of clear internal policies, codes of conduct, and procedures previously communicated to and accepted by the worker. The absence of these instruments, or their inadequate application, could weaken the employer's legal position in the event of a claim.
Is your company prepared to face these types of situations? Do your employment policies have the necessary adjustments to regulate the ethical and responsible use of artificial intelligence? In an environment where technology advances faster than regulations, legal prevention could make the difference between innovative management and an inevitable labor conflict.