Abstract
The emergence and rapid proliferation of chatbot solutions have reshaped how we interact with customer service providers, professionals and organizations offering advisory services. Law firms and legal professionals have also been affected by chatbots, which have become an integral part of the legal market. They are used for training, discovery, legal research and various other tasks by multinational law firms and sole practitioners alike.
Besides their benefits, however, chatbot solutions also face a number of limitations, and their use can raise both legal and ethical concerns. Unsupervised use of such solutions can lead to serious professional responsibility issues, while their use in certain cases, such as acting as defense counsel or advising on sensitive matters, can also raise ethical concerns or endanger the trust in lawyers built by generations of legal professionals. The processing of certain data or confidential information can further raise privacy or confidentiality issues, especially given that artificial intelligence (AI) based solutions must constantly rely on huge datasets.
Bearing the above in mind, the regulation of chatbots in the legal market is a complex topic with many challenges. In this paper, I provide an overview of the use of chatbots in the legal market, summarize the main concerns regarding their use (in particular professional liability, privacy and ethical concerns) and highlight the main challenges of AI and chatbot regulation, along with the approaches regulators could follow to prevent or minimize the risks associated with the unlawful or unethical use of the technology and to dispel unnecessary fears, while also supporting technological development and preserving the positive effects of chatbots in the legal market.
1 Introduction
Chatbot services have become widespread in today's digital economies. They support customer service and client management, make information more easily accessible and help companies and other organizations build trust. Chatbots can also be used for other purposes, including everyday conversation or recreation. ChatGPT, for example, is a versatile tool and an interesting endeavor, launched by OpenAI,1 capable of answering a wide range of questions, including rather complex or theoretical ones, as well as engaging in everyday conversation with users. It is expected that such general-purpose chatbot solutions will become almost omnipresent in the upcoming years and will be used by law firms and legal practitioners to build closer ties with clients and make their services more visible.
Chatbot solutions and different AI tools can also be used to provide simple services and perform certain tasks independently. For example, the AI-powered legal chatbot DoNotPay, launched in 2015, first achieved prominence by helping users appeal more than 160,000 parking tickets in the course of 21 months.2 The solution has been used ever since to help users with simple legal work and other everyday tasks.
Even more strikingly, ChatGPT successfully passed the final exams in four law school classes and was deemed able to graduate – at least theoretically, and with a barely sufficient performance.3 With time, the performance of legal chatbots as potential virtual lawyers is expected to improve, and they will be able to undertake ever more complex tasks, such as providing complex legal advice or drafting various legal documents.
In addition to the benefits that legal chatbot solutions offer, however, there are also ethical and professional liability considerations that cannot be ignored. Can a law firm, the head of the relevant department or the lawyer using the given solution be held liable, for example, if a chatbot gives the wrong advice? And if so, to what extent? These are questions that are becoming more and more relevant and need to be addressed with due insight into both the societal and ethical implications of using chatbot solutions to provide legal services.
Bearing the above in mind, this paper highlights the potential impact of chatbots on the legal market and discusses the related professional, privacy and ethical considerations, as well as the challenges of appropriately regulating legal chatbots and similar artificial intelligence (AI) solutions.
2 Chatbots and their impact on the legal market
The idea of chatbots emerged through the work of a number of researchers and scientists. The first chatbot, called ELIZA, was launched in the 1960s with the aim of simulating conversation with a therapist.4 It was followed by PARRY, developed in 1972 by psychiatrist Kenneth Colby and designed to interact like a patient suffering from schizophrenia.5 Following such endeavors, the natural language processing chatbot ALICE, developed by Richard Wallace, was released, making conversation easier than with earlier programs.6
Despite such early attempts, chatbots did not become widespread until the late 2000s, when – owing to the rapid development of AI solutions – tech companies and a wide range of other service providers began using them for customer service purposes and for strengthening relations with users and clients. This included virtual assistant solutions such as Siri, released by Apple in 2011, and Cortana, released by Microsoft in 2014.
Certain chatbot projects, however, also revealed the vulnerability of such solutions to manipulation, as well as their unpredictable nature. Tay, a bot launched by Microsoft on Twitter, for example, was called off within 24 hours of its launch because it had adopted extremely discriminatory language learned from interaction with users.7 It is also worth noting that DoNotPay has faced widespread criticism and legal problems since early 2023 in connection with its planned court appearance and a legal dispute in California regarding the allegedly poor-quality work it provided to a customer.8
Despite such highly publicized failures, chatbots have continued to progress and to gain popularity among resourceful enterprises and everyday users alike. Chatbots are now used for a wide variety of purposes and are capable of providing both oral and written advice and of holding conversations on various topics. In the legal market, chatbots are used by almost all actors, including law firms and lawyers, company legal departments, courts and government agencies. The international law firm Allen & Overy, for example, introduced a legal platform called Harvey, powered by natural language processing, to deliver many types of legal work, including due diligence and contract analysis; the work is reviewed by a lawyer to eliminate any flaws or non-compliance.9 Similarly, another international law firm, CMS, adopted the machine learning tool Brainspace in Europe in 2017, a tool which can help clients with document analysis and discovery and can also support due diligence.10
Chatbots and other AI tools are also widely used by courts and authorities, especially for discovery, analysis, technical support and, in some cases, decision-making. The Chinese court system and many levels of Chinese administration, for example, use AI solutions extensively to support judicial decision-making, in the framework of which judges are often required to take into account recommendations made by AI and to provide a written explanation if they disagree.11 Naturally, such reliance on decisions made by AI carries significant risks and can be criticized from an ethical viewpoint, since the use of chatbots or other AI solutions by authorities should not lead to AI assuming liability and decision-making power without human supervision.
It is also beyond doubt, however, that a large number of organizations have recognized the potential of AI in assisting judicial procedures and have undertaken to educate law students, judges and court officials on AI. An example is the Massive Open Online Course on AI and the Rule of Law developed by UNESCO and The Future Society.12 Chatbots and other AI solutions also have the potential to help marginalized communities and members of society access legal information more easily or obtain legal services that would otherwise be accessible only to wealthier or more educated groups.13 This is especially true given that legal services have historically appeared to be less measurable or quantifiable, and a certain mystery still surrounds the results of legal work.14
As regards the acceptance of chatbots and other Legal Tech solutions in the legal market, such solutions are becoming more and more popular among clients and lawyers alike, despite initial fears and an unwillingness to use them on account of ethical and professional liability concerns. This is especially true for various document assembly solutions, which were initially frowned upon by bar associations and many legal professionals. The North Carolina State Bar, for example, had a long-lasting legal dispute with the Legal Tech service provider LegalZoom over its legal document services, which was settled in 2015; the Bar had alleged that such services could be regarded as the 'unauthorized practice of law', and under the settlement LegalZoom agreed to have its documents reviewed by lawyers.15 Today, such review has become standard in many jurisdictions, especially for tasks usually undertaken by lawyers, including contract preparation and review, as well as the provision of legal advice. It is worth noting, however, that endeavors to replace human lawyers in court or in similar forms of legal representation have encountered more severe resistance from practitioners, as well as legal obstacles. An example is DoNotPay's failed attempt to represent a defendant in the courtroom in a case related to a traffic ticket.16
In addition to these positive effects, however, there is also a risk that the widespread use of chatbots and Legal Tech solutions could hinder paralegals and junior associates in mastering basic tasks – including legal research, everyday administrative tasks and proper communication with court and other officials – and in progressing in their legal careers. Some authors, however, have spoken up in defense of AI tools and consider that integrating such tools into law school curricula would be an important step forward and could also help young professionals in the market.17 This is especially true since young professionals are extremely overworked; they have to help senior lawyers with everyday work, learn the applicable law and practice, and shoulder the lion's share of administrative and marketing tasks at the same time. By using AI tools, they can be more effective and save time for more important tasks. In this respect, some also argue that by learning about Legal Tech and similar AI solutions more extensively during their law school years, fresh candidates applying for junior lawyer positions can stand out and become top picks for major law firms.18 Such a mentality can also help shape how successful young lawyers relate to AI in their work and how such tools are accepted by the legal profession.
3 The regulation of chatbots and its challenges in the legal market
3.1 Professional liability
Naturally, one of the main concerns regarding the use of legal chatbots is malpractice, including mistakes made by the machine and the related professional liability issues. If a chatbot is used, for example, for advisory services or for autonomously providing other types of legal services, such as preparing and submitting an appeal or drafting a contract, there is an inherent risk of malpractice: the advice provided may be incorrect, the appeal may be submitted late, and an ill-drafted contract can expose the client to severe risk of non-compliance or litigation. Malpractice by the machine is similar to malpractice committed by a human lawyer, except that AI itself cannot be subject to professional or any other type of liability. Ultimately, the law firm using the solution could bear civil law liability for mistakes made by an AI solution it uses, and the lawyer reviewing and approving the AI solution's work and/or his or her superior can be subject to professional or ethical proceedings. It is also worth noting that in cases in which the solution is provided by a third party (e.g. a Legal Tech company), such third party may be liable to the law firm under their contract or, in exceptional cases, even directly – in accordance with the applicable law – to the client suffering damage as a result of the ill-programmed solution. This could be the case, for example, when an update fails to include new deadlines or mandatory contractual elements introduced by new legislation, despite a contractual promise by the programmer or service provider supporting the law firm and its client. It is worth noting in this respect that a number of arguments have been raised for reforming liability rules, since current regimes can allocate risks and liability unevenly and can hinder the development of AI solutions in high-risk sectors, including law firms and legal services.19 Where legal practitioners face enhanced liability, for example, they may be dissuaded from using AI even if it would save significant time and effort. Where developers and service providers are more likely to be found liable, however, they may feel less incentivized to develop or provide certain solutions and services.
The widespread use of AI tools by legal professionals can also lead to the breach of various professional liability requirements or contractual promises concerning representation by a human lawyer or other professional (such as a reputable insurance law expert or a law professor specialized in copyright law). It would not be surprising to see future contracts for legal services include promises by law firms to have certain professionals or leading experts personally oversee certain aspects of a matter or a transaction, even where AI solutions are extensively used.
In line with the above, and due to the high risk of mistakes made by unsupervised AI, law firms generally require that a lawyer specialized in the given field review the work created by the chatbot or other AI solution before it is delivered to the client. In many instances, chatbots merely help connect clients with attorneys (for example, by helping clients navigate the law firm's website or find the right specialist) or undertake monotonous, repetitive tasks (such as the mass review of documents, keyword searches or legal research in databases) supporting the law firm's advisory services or certain other activities.
In order both to increase the effectiveness of legal chatbots and other AI solutions used by law firms and to minimize the risk of algorithmic malpractice, law firms will need to put more focus on training their personnel to work closely and confidently with disruptive technologies. In addition, special AI-related professional liability insurance packages are expected to become more widespread to counter fears of professional misconduct and to raise trust in legal chatbots. There are still certain difficulties with the insurability of AI solutions, however, especially because it is hard to assess the extent of the damage AI can cause in certain cases or fields, and AI can also be unpredictable in the decisions it makes.20 This leads us to the so-called black box problem: in many cases it cannot be explained how and why a given AI solution arrived at a specific conclusion.21 This makes it hard to predict certain decisions the program makes. A solution might be to align liability with the autonomy of the solutions used; where the AI solution has wider autonomy, more emphasis should be placed on the boundaries of its decision-making power, its transparent operation and the monitoring of its activities by a human expert.22
It must be emphasized, however, that monitoring or systematic review by humans would not be possible in many cases, especially where the chatbot is used by larger organizations and interacts with thousands of potential users or more. In such cases the possibility of mistakes (such as a wrong communication style or inappropriate client management) cannot be excluded.23 Here, limiting the scope of the solution, setting a clear purpose for its use and ensuring transparent operation (including highlighting the potential effects of using the solution and enabling users to report or flag malfunctions and request human review) would most probably be of key importance.
It is also worth noting, however, that in the near future, law firms and practitioners that do not use AI solutions to the extent or in the manner expected in the given practice area and that rely too much on the skills of human professionals may become more exposed to professional liability claims. Therefore, even considering the difficulties in insuring AI, it is likely that new insurance packages and models tailored to the specific types of AI solutions used in the legal profession will appear and become widespread in practice. From that point on, AI's unpredictable nature could less plausibly be invoked as a reason for not using AI solutions deemed reliable in practice or for not having appropriate AI insurance policies in place.
3.2 Privacy concerns
It is beyond doubt that chatbots generally rely on large datasets and are continuously fed with data in order to function properly and develop further. Such processing often appears unforeseeable to users, which leads to privacy concerns, especially in the case of vulnerable groups. The Italian data protection authority, for example, prohibited the provider of an app called 'Replika' from processing the personal data of Italian users. The app created a virtual friend and was often used by children, who could not understand the processing of their personal data and were often exposed to inappropriate information that could negatively affect their development.24
The Italian data protection authority similarly imposed an immediate temporary limitation on the processing of the personal data of users from Italy in the case of ChatGPT. In its decision, the authority highlighted that OpenAI lacked a legal basis for collecting a massive amount of personal data on its users and using such data for training purposes, provided inadequate information to users on data processing and lacked any age verification mechanism that could protect children from accessing information inappropriate for their age.25 Besides the Italian authority, other data protection authorities have also focused on the data protection aspects of ChatGPT. To foster cooperation between European data protection authorities in this respect, the European Data Protection Board (EDPB) recently created a dedicated task force.26
In addition to the opacity often surrounding chatbot solutions, such solutions also very often rely on the automated profiling of users. The information collected in this respect is frequently used to further train the solution, to make information or analytics accessible to other organizations, to provide personalized services and to render more and more information about individuals public, especially if online profiles are also analyzed.27 In the case of law firms, the protection of information related to clients and their representatives is even more important than in the data processing of most other service providers, bearing in mind that the relationship between lawyers and their clients presumes the utmost confidence. Communication with clients, as well as information and documentation related to clients, is therefore highly confidential in almost every jurisdiction.
Client information also often involves sensitive personal data, especially for lawyers and firms active in certain fields of law (e.g. criminal defense lawyers or lawyers focusing on medical malpractice cases). In some cases, lawyers can also gain access to negative or highly sensitive information about adversaries or other third parties, the use of which is generally subject to a number of laws and requirements, including privacy laws protecting the personal data of natural persons, as well as competition laws barring market players from disparaging competitors or unethically publishing negative information.
With regard to the above, feeding confidential information to a chatbot or other AI solution can raise privacy and professional responsibility concerns, especially where the processing of personal data is not disclosed to clients or is not based on an appropriate legal basis, such as consent. As highlighted above, the use of client information can also infringe confidentiality requirements involving both personal and non-personal data, bearing in mind that the use of client information by AI solutions is generally not necessary for providing legal services.
Besides the above, the various data transfers undertaken in respect of the personal data processed by AI solutions can also be non-compliant with the relevant data protection requirements and remain unforeseen by the individuals affected. This is especially true for transfers to foreign state agencies operating under less restrictive privacy regimes, which can impose significant restrictions and sanctions on the affected individuals, their family members and associates on the basis of the information received. Companies in such regimes may also feel less inclined to guarantee a high level of data protection and data security, unless this is required of them under a contract concluded pursuant to the data protection laws applicable in the jurisdiction of the person or entity transferring the personal data.
Accordingly, law firms using chatbots and other AI solutions need to carefully assess the scope of information that they can use for certain purposes related to the operation of the given solution (for example, using certain information as training data or for internal analysis, business development purposes, etc.). In addition, legal practitioners processing personal data need to be transparent about their data processing practices and inform their clients and other related data subjects (e.g. client contacts) about the scope of client data collected, any subsequent purpose(s) of processing, and any other essential aspect of such data processing (including, for example, the applicable retention period).
3.3 Ethical concerns
Of all the concerns involving the use of AI, the ethical ones are probably the strongest, bearing in mind that the activities and data processing of AI in many cases involve the personal data of human beings, and decisions made by AI often have significant effects on individuals. In the United States, one of the cornerstone documents on human subject research, the Belmont Report, published by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, laid down basic ethical principles for research involving human subjects: 1) respect for persons, 2) beneficence and 3) justice.28 These principles and requirements highlight the need for human-centric, ethical research and can also be relevant to algorithmic development and design.29
In the European Union, the guiding ethical principles of trustworthy AI, rooted in European fundamental rights and principles, appear in the Ethics Guidelines for Trustworthy Artificial Intelligence, prepared by the High-Level Expert Group on Artificial Intelligence and published in 2019. They include: 1) respect for human autonomy, 2) prevention of harm, 3) fairness and 4) explicability.30 The Guidelines were welcomed by the European Commission, which incorporated these principles into the seven key requirements of trustworthy AI: 1) human agency and oversight, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) diversity, non-discrimination and fairness, 6) societal and environmental well-being and 7) accountability.31
These main requirements are also relevant for the use of AI solutions in the legal industry; however, the specific conditions and particularities of the legal profession also need to be taken into account, such as the relationship between attorneys and their clients, as well as confidentiality rules. The attorney-client privilege has been characterized as one of the key privileges of legal practice for centuries, and the attorney-client relationship has been seen as one of the most sensitive professional relationships since the emergence of the justice system. Attorneys also frequently act in sensitive or highly confidential matters and need a strong sense of empathy to manage client relations and to treat clients with the understanding they deserve.
Even once a chatbot reaches the level where it can act as a 'virtual attorney' and work independently on complex legal matters, it would do so without any sense of empathy. Although such an approach could carry less risk, or even be regarded as desirable, in business matters (such as company registration or the drafting of business agreements), it raises significant ethical concerns in cases where a chatbot or another AI solution would need to act as a defense attorney, provide family law advice or mediate between parents or former partners.
The appearance of robolawyers in open court could also undermine society's trust in a 'human' court system and lead to the violation of human rights and procedural principles. A robot speaking for the defense, for example, or a chatbot making a declaration in the name of an absent party, would dehumanize the justice system to a level where we would have a hard time recognizing it. The mass use of AI in litigation can also have other unwanted effects, such as encouraging unnecessary litigation or mass reports to authorities for harassment purposes.
In addition to the ethical considerations discussed above, the proliferation of legal chatbot services in an unregulated environment could generate tensions within the legal profession. AI does not need to rest, requires no free time or salary and can process information a thousand times faster than a human lawyer, which means that after reaching a certain level, AI would be in a position to easily outcompete human legal professionals in terms of both quantity and quality. Only the 'human side' of lawyering would remain as the sole trump card of human lawyers; this favors practitioners focusing on – inter alia – marital or criminal law or other fields where ethical considerations or human aspects are stronger. We will most probably see fewer transactional lawyers in the coming decades, since their work is most likely to be taken over by AI solutions; at the same time, it seems impossible that even the smartest AI solutions could fully dominate any field of law or practice without the involvement of human professionals. In addition, clients would in many cases prefer to have a human lawyer oversee the work, also leaving plenty of room for human professionals, especially those who can effectively adapt to the new environment and master different AI solutions.
In line with the above, it also seems probable that SMEs, sole entrepreneurs, NGOs and other smaller organizations would be more willing to retain cheaper robolawyers than human legal professionals in easier cases, for example, for a monthly subscription fee or a one-off fee in individual cases. Even bigger companies would most probably be more willing to use legal chatbots and similar solutions to execute simpler transactions and business contracts, as well as for corporate housekeeping, reducing the money spent on external lawyers and in-house counsel. Bearing this in mind, legal departments focusing on general matters in a wide number of sectors would probably see a reduced need for a human workforce as well, but key members of the given team, as well as experts and executives, would still be needed and would play an essential role in everyday operations.
3.4 Regulatory challenges
Proper regulation of the use of chatbots in the legal market is becoming an increasingly pressing issue. As seen from earlier cases, a number of regulators have already chosen to open the door to Legal Tech service providers and solutions that are able to solve simple matters or help clients find the relevant law or expert. Despite the legal market's willingness to involve disruptive solutions in everyday work, Legal Tech service providers are often required to involve lawyers at some stage, in an effort to protect clients from robotic malpractice and to guarantee high-quality legal services.
The development and the proper use of legal chatbots, however, would require a more comprehensive and layered regulation, which should especially take into account:
the activities undertaken or services provided by chatbots and whether their activity could be regarded as the practice of law requiring a license in the relevant jurisdiction;
the legal services or functions for which the chatbot is used;
the branch of the justice system affected;
the autonomy of the AI solution;
the effects of the use of the solution on clients and other third parties;
the necessity of oversight of the work by a lawyer.
Although the above aspects should be borne in mind when creating future regulation of chatbots and similar AI solutions in the legal market, various other aspects need to be taken into account as well, including – inter alia – the circumstances of the given chatbot's application and its effects on clients and other users and third parties. Future chatbot regulation should also focus on combatting misinformation, as well as bias resulting from an insufficiently diverse dataset used for training the given solution.32
Inaccuracy or ineffectiveness can also markedly damage users' perception of chatbot solutions. The US-based Identity Theft Resource Center, for example, created a chatbot in 2021 to help respond to victims of identity theft outside customer service working hours; the chatbot, however, was criticized for being ineffective and unable to provide useful and up-to-date answers or to properly understand users' questions.33 This clearly shows how user experience can affect the usability of different solutions and why understanding and properly reacting to customer queries is so important in the life of a chatbot.
More focus should also be placed on protecting users' mental health, especially for chatbots that are more likely to access sensitive information or to have apparently intimate conversations with users. A chatbot user in Belgium, for example, took his own life after his conversations with a chatbot exacerbated his existential fears.34 Similarly, ill-programmed chatbots can persuade potential clients to follow ill-conceived strategies, incur risks and liabilities, and undertake huge investments for unpredictable results. Negative effects can be especially dangerous where the chatbot is used by vulnerable groups (such as children, the elderly, crime victims or patients) or in highly sensitive matters. In these cases, a human lawyer needs to be involved, and the chatbot should be required to detect and indicate the necessity of human involvement.
It should further be noted that even a comprehensive and layered regulatory approach would face numerous challenges concerning the regulation of legal chatbots, and regulators would need to strike a good balance between protecting the quality of legal work and the trust between lawyers and their clients on the one hand and opening the market to ever smarter AI solutions that decrease legal costs on the other.
It also seems reasonable and likely that legal chatbots will be barred – at least in democratic countries – from taking over certain functions reserved for human lawyers (e.g. acting as defense counsel), even when the technology reaches a level at which a chatbot could outperform human legal professionals. It further seems likely that human review will remain essential for a long time to come, especially in complex matters usually requiring contributions from senior lawyers. It should also be emphasized that the tasks performed by AI solutions in the legal industry will largely depend – besides relevant future legislation – on the acceptance of the methods or services undertaken or provided by AI solutions, as well as on client needs and societal and economic changes.
4 Closing remarks
Legal chatbots have certainly made an impactful appearance in the legal market. They have the potential to help clients find information more efficiently and to make the law more accessible to those who cannot afford to pay for legal services. In addition, chatbots can be useful tools for law firms on many levels: they can help connect lawyers and clients, raise client satisfaction and make daily work, intra-firm communication and the management of administrative tasks more efficient. Their operation also helps reduce costs and frees up capacity for more complex tasks.
Despite this potential, there are still a number of professional, legal and ethical concerns regarding the use of the technology that need to be addressed. Chatbots and similar AI technologies have not yet reached a level where they could replace a human lawyer and independently provide advice, create and revise contracts or represent clients. Even if they were capable of replacing human lawyers, moreover, their use would be highly questionable or inappropriate in certain matters. A legal chatbot could theoretically advise the victim of a violent crime about his or her procedural rights and obligations, as well as the remedies such a person could seek; however, it could not give the support and show the compassion that a professional human lawyer could in such a case. The collection and use of personal data from clients and persons acting on their behalf may also appear less transparent. This could be mitigated by duly informing clients and such persons about the purposes for which their data would be collected and subsequently used, as well as about other important aspects of the data processing, such as the retention period or any transfer to third parties for further analysis or other use. It must be highlighted, however, that despite the best efforts of the entity using the AI solution, data processing by AI can in many cases lead to unforeseen decisions and effects on a wide number of persons. It is therefore essential to correctly set and limit the decision-making power of the solution used and to monitor its operation and the decisions it makes; such measures will play a central role wherever legal chatbots and similar AI solutions are used in the legal market. Future regulations also need to address such risks correctly, in accordance with ethical and professional liability considerations.
With respect to the above, the regulation of chatbots and other AI solutions in the legal market is currently in its infancy, and it will take years for it to evolve into a versatile and universally applicable set of rules. This requires a multilateral approach that takes into account the activities and different functions to be undertaken by the chatbot, as well as other aspects of its use, especially its effects on clients and other third parties. The main focus of any regulatory concept, however, should remain the interests of clients and the wellbeing of a democratic society.
Literature
Asher-Schapiro, A. and Sherfinski, D., ‘Analysis: Chatbots in U.S. justice system raise bias, privacy concerns’, Reuters (10 May 2022) <https://www.reuters.com/legal/litigation/chatbots-us-justice-system-raise-bias-privacy-concerns-2022-05-10/> accessed 19 April 2023.
Bathaee, Y., ‘The Artificial Intelligence Black Box and the Failure of Intent and Causation’ (2018) 31 Harvard Journal of Law & Technology 889-938 <https://jolt.law.harvard.edu/assets/articlePDFs/v31/The-Artificial-Intelligence-Black-Box-and-the-Failure-of-Intent-and-Causation-Yavar-Bathaee.pdf> accessed 23 April 2023.
Brescia, R. H., McCarthy, W., McDonald, A., Potts, K. and Rivals, C., ‘Embracing Disruption: How Technological Change in the Delivery of Legal Services Can Improve Access to Justice’ (2015) 78 Albany Law Review 553-621 <https://www.albanylawreview.org/article/69707-embracing-disruption-how-technological-change-in-the-delivery-of-legal-services-can-improve-access-to-justice> accessed 23 April 2023.
Cerullo, M., ‘AI-powered “robot” lawyer won’t argue in court after jail threats’, CBS News (first published on 9 January 2023, updated on: 26 January 2023) <https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/> accessed 16 April 2023.
Choi, J. H., Hickman, K. E., Monahan, A. and Schwarcz, D. B., ‘ChatGPT Goes to Law School’ (2023) Minnesota Legal Studies Research Paper No. 23-03 1-16 <https://ssrn.com/abstract=4335905> accessed 16 April 2023.
Eliot, L., ‘How are law students using Artificial Intelligence?’, The National Jurist (23 November 2022) <https://nationaljurist.com/uncategorized/how-are-law-students-using-artificial-intelligence/> accessed 16 April 2023.
Fisher, D., ‘LegalZoom Settles Fight With North Carolina Bar Over Online Law’, Forbes (22 October 2015) <https://www.forbes.com/sites/danielfisher/2015/10/22/legalzoom-settles-fight-with-north-carolina-bar-over-online-law/?sh=3e994b8e3eb2> accessed 16 February 2023.
Gibbs, S., ‘Chatbot lawyer overturns 160,000 parking tickets in London and New York’, The Guardian (28 June 2016) <https://www.theguardian.com/technology/2016/jun/28/chatbot-ai-lawyer-donotpay-parking-tickets-london-new-york> accessed 16 February 2023.
Gilson, D., ‘Trust but Verify: Peeking Inside the “Black Box” of Machine Learning, Insights by Stanford Business’, Stanford Graduate School of Business (6 October 2022) <https://www.gsb.stanford.edu/insights/trust-verify-peeking-inside-black-box-machine-learning> accessed 22 April 2023.
High-Level Expert Group on AI (AI HLEG), Ethics Guidelines for Trustworthy AI (European Commission 8 April 2019) <https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419> accessed 22 April 2023.
Hohmann, B., ‘Chatbotok a kormányzati platformok szolgálatában: Alkalmazási követelmények és átláthatósági hatások’ (Chatbots at the service of government platforms: application requirements and transparency implications) (2023) 71 Belügyi Szemle 691-709 <https://doi.org/10.38146/BSZ.2023.4.8> accessed 21 July 2023.
Holt, A. T., ‘Legal AI-d to Your Service: Making Access to Justice a Reality’, JETLaw (4 February 2023) <https://www.vanderbilt.edu/jetlaw/2023/02/04/legal-ai-d-to-your-service-making-access-to-justice-a-reality/> accessed 10 April 2023.
Komrij, E., ‘Bend or Snap: Embracing or Banning ChatGPT and its Future in Legal Education’, JETLaw (30 January 2023) <https://www.vanderbilt.edu/jetlaw/2023/01/30/bend-or-snap-embracing-or-banning-chatgpt-and-its-future-in-legal-education/> accessed 16 April 2023.
Lior, A., ‘Insuring AI: The Role of Insurance in Artificial Intelligence Regulation’ (2022) 35 Harvard Journal of Law & Technology 467-530 <https://jolt.law.harvard.edu/assets/articlePDFs/v35/2.-Lior-Insuring-AI.pdf> accessed 23 April 2023.
Maliha, G., Gerke, S., Parikh, R. B. and Cohen, I. G., ‘To Spur Growth in AI, We Need a New Approach to Legal Liability’, Harvard Business Review, Business Law (13 July 2021) <https://hbr.org/2021/07/to-spur-growth-in-ai-we-need-a-new-approach-to-legal-liability> accessed 16 April 2023.
Martino, C., ‘Conversational AI. A History of Chatbots and Voice Assistants’, Medium (13 September 2022) <https://medium.com/women-in-voice/a-history-of-chatbots-and-voice-assistants-e39ec598a92> accessed 7 April 2023.
Merken, S., ‘Lawsuit pits class action firm against “robot lawyer” DoNotPay’, Reuters (9 March 2023) <https://www.reuters.com/legal/lawsuit-pits-class-action-firm-against-robot-lawyer-donotpay-2023-03-09/> accessed 7 April 2023.
Morrison, R., ‘How do you regulate advanced AI chatbots like ChatGPT and Bard?’, Tech Monitor (8 February 2023) updated: 9 March 2023 <https://techmonitor.ai/technology/ai-and-automation/ai-regulation-chatgpt-bard> accessed 23 April 2023.
Oliver, C., ‘Married father kills himself after talking to AI chatbot for six weeks about his climate change fears’, Mail Online (30 March 2023) <https://www.dailymail.co.uk/news/article-11920801/Married-father-kills-talking-AI-chatbot-six-weeks-climate-change-fears.html> accessed 19 April 2023.
Pleasance, C., ‘China uses AI to “improve” courts – with computers “correcting perceived human errors in a verdict” and JUDGES forced to submit a written explanation to the MACHINE if they disagree’, Mail Online (13 July 2022) <https://www.dailymail.co.uk/news/article-11010077/Chinese-courts-allow-AI-make-rulings-charge-people-carry-punishments.html> accessed 17 February 2023.
Richardson, M., ‘Automated Profiling in New Media and Entertainment Markets: What to Protect, and How?’ in Bruun, N., Dinwoodie, G. B., Levin, M. and Ohly, A. (eds), Transition and Coherence in Intellectual Property Law. Essays in Honour of Anette Kur (Cambridge University Press 2021) 200-208 <https://doi.org/10.1017/9781108688529.022> accessed 7 April 2023.
Rouse, M., ‘Artificial Linguistic Computer Entity’, Techopedia (24 April 2020) <https://www.techopedia.com/definition/380/artificial-linguistic-computer-entity-alice> accessed 7 April 2023.
Thorbecke, C., ‘Chatbots: A long and complicated history’, CNN Business (20 August 2022) <https://edition.cnn.com/2022/08/20/tech/chatbot-ai-history/index.html> accessed 16 February 2023.
Victor, D., ‘Microsoft Created a Twitter Bot to Learn From Users. It Quickly Became a Racist Jerk’, The New York Times (24 March 2016) <https://www.nytimes.com/2016/03/25/technology/microsoft-created-a-twitter-bot-to-learn-from-users-it-quickly-became-a-racist-jerk.html> accessed 16 February 2023.
Links
Link1: ‘Introducing ChatGPT’, OpenAI (30 November 2022) <https://openai.com/blog/chatgpt/> accessed 27 February 2023.
Link2: ‘A&O announces exclusive launch partnership with Harvey’, Allen & Overy (15 February 2023) <https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey> accessed 16 February 2023.
Link3: ‘Saving time and money with Brainspace’, CMS <https://cms.law/en/gbr/innovation/how-we-help-our-clients-innovate/cms-by-design/cms-by-design-success-stories/saving-time-and-money-with-brainspace> accessed 10 April 2023.
Link4: ‘AI and the Rule of Law: Capacity Building for Judicial Systems, Artificial Intelligence’, UNESCO <https://www.unesco.org/en/artificial-intelligence/rule-law/mooc-judges> accessed 11 April 2023.
Link5: ‘Artificial intelligence: Italian SA clamps down on “Replika” chatbot. Too many risks to children and emotionally vulnerable individuals’, Garante per la protezione dei dati personali (3 February 2023) <https://garanteprivacy.it/home/docweb/-/docweb-display/docweb/9852506#english> accessed 23 April 2023.
Link6: ‘Artificial intelligence: stop to ChatGPT by the Italian SA. Personal data is collected unlawfully, no age verification system is in place for children’, Garante per la protezione dei dati personali (31 March 2023) <https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english> accessed 16 April 2023.
Link7: ‘EDPB resolves dispute on transfers by Meta and creates task force on Chat GPT’, EDPB (13 April 2023) <https://edpb.europa.eu/news/news/2023/edpb-resolves-dispute-transfers-meta-and-creates-task-force-chat-gpt_en> accessed 16 April 2023.
Link8: ‘AI Ethics’, IBM <https://www.ibm.com/id-en/topics/ai-ethics> accessed 22 April 2023.
Link9: ‘Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Building Trust in Human-Centric Artificial Intelligence, Brussels, 8.4.2019, COM(2019) 168 final’ <https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58496> accessed 22 April 2023.
Link10: ‘The Belmont Report, The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research’ (18 April 1979) <https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf> accessed 22 April 2023.