Author: Daniel Necz
Pázmány Péter Catholic University, Budapest, Hungary; Eversheds Sutherland LLP, Ireland

ORCID: https://orcid.org/0000-0003-3416-6650

Abstract

The emergence and fast proliferation of chatbot solutions have reshaped how we interact with customer services, professionals and organizations providing advisory services. Law firms and legal professionals have also been affected by chatbots, which have become an integral part of the legal market. They are used for training, discovery, legal research and various other tasks by multinational law firms and sole practitioners alike.

Besides their benefits, however, chatbot solutions also face a number of limitations, and their use could raise both legal and ethical concerns. Unsupervised use of such solutions can lead to serious professional responsibility issues, while their use in certain cases, such as acting as defense counsel or advising on sensitive matters, can also raise ethical concerns or endanger the trust in lawyers built by generations of legal professionals. The processing of certain data or confidential information can further raise privacy or confidentiality issues, especially given the need for artificial intelligence (AI)-based solutions to constantly rely on huge datasets.

Bearing the above in mind, the regulation of chatbots in the legal market is certainly a complex topic with many challenges. In this paper, I provide an overview of the use of chatbots in the legal market, summarize the main concerns regarding their use (in particular professional liability, privacy and ethical concerns) and highlight the main challenges of AI and chatbot regulation, as well as the approaches regulators could follow to prevent or minimize the risks associated with the unlawful or unethical use of the technology and to dispel unnecessary fears, while also supporting technological development and preserving the positive effects of the use of chatbots in the legal market.


1 Introduction

Chatbot services have become widespread in today's digital economies. They support customer service and client management, make information more easily accessible and help companies and other organizations build trust. Chatbots can also be used for other purposes, including everyday conversation or recreation. ChatGPT, for example, is a versatile tool and an interesting endeavor, started by OpenAI,1 capable of answering a wide range of questions, including rather complex or theoretical ones, as well as of engaging in everyday conversation with users. It is expected that such general-purpose chatbot solutions will become almost omnipresent in the upcoming years and will be used by law firms and legal practitioners to build closer ties with clients and make their services more visible.

It is further emphasized that chatbot solutions and different artificial intelligence (AI) tools can also be used to provide simple services and perform certain tasks independently. For example, the AI-powered legal chatbot solution DoNotPay, which was launched in 2015, first achieved prominence by helping to appeal against more than 160,000 parking tickets in the course of 21 months.2 The solution has been used ever since to help users with simple legal work and other everyday tasks.

It is even more striking that ChatGPT passed the final exams in four law school classes and was deemed able to graduate – at least theoretically, and with a barely sufficient performance.3 With time, the performance of legal chatbots as potential virtual lawyers is expected to improve, and they will be able to undertake even more complex tasks, such as providing complex legal advice or drafting various legal documents.

In addition to the benefits that legal chatbot solutions offer, however, there are also ethical and professional liability considerations that cannot be ignored. Can a law firm, the head of the relevant department or the lawyer using the given solution be held liable, for example, if a chatbot gives the wrong advice? And if so, to what extent? These questions are becoming more and more relevant and need to be addressed with due insight into both the societal and ethical implications of using chatbot solutions for providing legal services.

Bearing the above in mind, this paper highlights the potential impact of chatbots on the legal market and discusses the related professional, privacy and ethical considerations, as well as the challenges involved in appropriately regulating legal chatbots and similar AI solutions.

2 Chatbots and their impact on the legal market

The idea of chatbots emerged through the work of a number of researchers and scientists. The first chatbot, called Eliza, was launched in the 1960s with the aim of simulating conversation with a therapist.4 It was followed by the chatbot PARRY, developed in 1972 by the psychiatrist Kenneth Colby to interact like a patient suffering from schizophrenia.5 These endeavors were followed by the release of the language processing chatbot ALICE, developed by Richard Wallace, which made conversation easier compared to earlier programs.6

Despite such early attempts, chatbots did not become widespread until the late 2000s when – due to the rapid development of AI solutions – tech companies and a wide range of other service providers began using them for customer service purposes and for strengthening relations with users and clients. This included virtual assistant solutions such as Siri, introduced by Apple in 2011, or Cortana, released by Microsoft in 2014.

Certain chatbot projects, however, also revealed the vulnerability of such solutions to manipulation, as well as their unpredictable nature. Tay, a bot launched by Microsoft on Twitter, for example, was shut down within 24 hours of its launch because it had adopted extremely discriminatory language learned from interactions with users.7 It is also worth noting that DoNotPay faced widespread criticism and legal problems in early 2023 in connection with its planned court appearance and a legal dispute in California regarding the allegedly poor-quality work it provided to a customer.8

Despite such highly publicized failures, chatbots have not stopped progressing and becoming popular among well-resourced enterprises and everyday users alike. Chatbots are now used for a wide variety of purposes and are capable of providing both oral and written advice or holding conversations on various topics. In the legal market, chatbots are used by almost all actors, including law firms and lawyers, company legal departments, courts and government agencies. The international law firm Allen & Overy, for example, introduced a legal platform called Harvey, powered by natural language processing, to deliver many types of legal work, including due diligence and contract analysis; the work is reviewed by a lawyer to eliminate any flaws or non-compliance.9 Similarly, another international law firm, CMS, adopted the machine learning tool Brainspace in Europe in 2017, a tool which can help clients with document analysis and discovery and can support due diligence as well.10

Chatbots and other AI tools are also widely used by courts and authorities, especially for discovery, analysis, technical support and, in some cases, decision-making. The Chinese court system and many levels of Chinese administration, for example, use AI solutions widely to support judicial decision-making, in the framework of which judges are often required to take into account recommendations made by AI and to provide a written explanation if they disagree.11 Naturally, such reliance on decisions made by AI carries significant risks and can be criticized from an ethical viewpoint, since the use of chatbots or other AI solutions by authorities should not lead to AI assuming liability and decision-making power without human supervision.

It is also beyond doubt, however, that a large number of organizations have recognized the potential of AI in assisting judicial procedures and have undertaken to educate law students, judges and court officials on AI. An example is the Massive Open Online Course on AI and the Rule of Law developed by UNESCO and The Future Society.12 Chatbots and other AI solutions also have the potential to help various segregated communities and members of society access legal information more easily or obtain legal services that would otherwise be accessible only to wealthier or more educated groups.13 This is especially true given that legal services have historically appeared to be less measurable or quantifiable, and a certain mystery still surrounds the results of legal work.14

As regards the acceptance of chatbots and other Legal Tech solutions in the legal market, such solutions are becoming more and more popular among clients and lawyers alike, despite initial fears and unwillingness to use them on account of ethical and professional liability concerns. This is especially true for various document assembly solutions, which were initially frowned upon by bar associations and a wide number of legal professionals. The North Carolina State Bar, for example, had a long-lasting legal dispute with the Legal Tech service provider LegalZoom over its legal document services, which was settled in 2015; the North Carolina State Bar initially alleged that such services could be regarded as the 'unauthorized practice of law', in response to which LegalZoom agreed to have its documents reviewed by lawyers.15 As of today, such review has become a standard in many jurisdictions, especially for tasks usually undertaken by lawyers, including contract preparation and review, as well as the provision of legal advice. It is worth noting, however, that endeavors to replace human lawyers in court or in similar forms of legal representation have encountered stronger resistance in practice, as well as legal obstacles. An example is DoNotPay's failed attempt to represent a defendant in the courtroom in a case related to a traffic ticket.16

In addition to these positive effects, however, there is also a risk that the widespread use of chatbots and Legal Tech solutions could hinder paralegals and junior associates in mastering basic tasks, including legal research, everyday administrative tasks and proper communication with court and other officials, and in further progressing in their legal careers. Some authors, however, have spoken out in defense of AI tools and consider that the integration of such tools into law school curricula would be an important step forward and could also help young professionals in the market.17 This is especially true since young professionals are extremely overworked; they have to help senior lawyers with everyday work, learn the applicable law and practice, and take the lion's share of administrative and marketing tasks at the same time. By using AI tools, they can be more effective and save time for more important tasks. In this respect, some also argue that by learning about Legal Tech and similar AI solutions more extensively even during their law school years, fresh candidates applying for junior lawyer positions can stand out and become top picks for major law firms.18 Such a mentality can also help shape how successful young lawyers relate to AI in their work and how such tools are accepted by the legal profession.

3 The regulation of chatbots and its challenges in the legal market

3.1 Professional liability

Naturally, one of the main concerns regarding the use of legal chatbots is malpractice, including mistakes made by the machine and related professional liability issues. If the chatbot is used, for example, for advisory services or for autonomously providing other types of legal services, such as preparing and submitting an appeal or drafting a contract, there is an inherent risk of malpractice. The advice provided may be incorrect, the appeal may be submitted late, and an ill-drafted contract can expose the client to a severe risk of non-compliance or litigation. Malpractice by the machine is similar to malpractice committed by a human lawyer, except that AI itself cannot be subject to professional or any other type of liability. Ultimately, the law firm using the solution could bear civil law liability for mistakes made by an AI solution it uses, and the lawyer reviewing and approving the AI solution's work and/or his or her superior can be subject to a professional or ethical procedure. It is also worth noting that in cases in which the solution is provided by a third party (e.g. a Legal Tech company), such third party may be liable to the law firm under their contract or, in exceptional cases, even directly – in accordance with the applicable law – to the client suffering damage as a result of the ill-programmed solution. This could be the case, for example, when an update fails to include new deadlines or mandatory contractual elements introduced by new legislation, despite a contractual promise by the programmer or service provider supporting the law firm and its client. It is worth noting in this respect that a number of arguments have been raised for reforming liability rules, since current regimes can allocate risks and liability unevenly and can hinder the development of AI solutions in high-risk sectors, including law firms and legal services.19 Where legal practitioners face enhanced liability, for example, this may dissuade them from using AI even if it would save significant time and effort. Where developers and service providers are more likely to be found liable, however, they may feel less incentivized to develop or provide certain solutions and services.

The widespread use of AI tools by a wide number of legal professionals can also lead to the breach of various professional liability requirements or contractual promises concerning representation by a human lawyer or other professional (such as a reputable insurance law expert or a law professor specialized in copyright law). It would not be surprising to see future contracts for legal services include contractual promises by law firms to have certain professionals or leading experts personally oversee certain aspects of a matter or a transaction even in cases where AI solutions are extensively used.

In line with the above, and due to the high risk of mistakes made by unsupervised AI, law firms generally require that a lawyer specialized in the given field review the work created by the chatbot or other AI solution before it is delivered to the client. In many instances, chatbots only help to connect clients with attorneys (for example, by helping clients navigate the law firm's website or reach the right specialist) or undertake monotonous, repetitive tasks (such as a mass review of documents, a keyword search, or legal research in databases) supporting the law firm's advisory services or certain other activities.
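To make the latter concrete, the snippet below sketches in Python what such a monotonous, delegable task might look like: scanning a batch of documents for keywords and flagging matches for a lawyer's review. The keyword list, document names and sample texts are hypothetical, chosen only for illustration.

```python
# A minimal sketch of a repetitive task delegated to software: scanning a
# batch of documents for keywords and flagging the matches so that a human
# lawyer can prioritize review. All names and sample data are hypothetical.

KEYWORDS = {"indemnity", "liquidated damages", "governing law"}

def flag_for_review(documents: dict[str, str]) -> dict[str, list[str]]:
    """Return, per document, the keywords found, for a lawyer to review."""
    flagged = {}
    for name, text in documents.items():
        hits = [kw for kw in KEYWORDS if kw in text.lower()]
        if hits:
            flagged[name] = hits
    return flagged

if __name__ == "__main__":
    docs = {
        "supply_agreement.txt": "The Supplier's indemnity obligations survive termination.",
        "nda.txt": "This agreement is confidential.",
    }
    print(flag_for_review(docs))  # {'supply_agreement.txt': ['indemnity']}
```

The point of the sketch is the division of labor it encodes: the software only surfaces candidates, while the decision about what the flagged language means remains with the reviewing lawyer.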

In order both to increase the effectiveness of legal chatbots and other AI solutions used by law firms and to minimize the risk of algorithmic malpractice, law firms will need to put more focus on training their personnel to work closely and confidently with disruptive technologies. In addition, special AI-related professional liability insurance packages are expected to become more widespread to counter fears concerning professional misconduct and to raise trust in legal chatbots. There are still certain difficulties with the insurability of AI solutions, however, especially because it is hard to assess the extent of damage AI can cause in certain cases or in a certain field, and AI can also frequently be unpredictable in terms of the decisions it makes.20 This also leads us to the so-called black box problem: in many cases it cannot be explained how and why a given AI solution arrived at a specific conclusion.21 This problem can make it hard to predict certain decisions the program makes. A solution might be to align liability with the autonomy of the solutions used; where the AI solution has wider autonomy, more emphasis should be put on the boundaries of its decision-making power, its transparent operation and the monitoring of its activities by a human expert.22

It must be acknowledged, however, that monitoring or systematic review by humans would not be possible in many cases, especially where the chatbot is used by larger organizations and interacts with thousands or more potential users. In such cases the possibility of mistakes (such as an inappropriate communication style or poor client management) cannot be excluded.23 Here, limiting the chatbot's scope, setting a clear purpose for its use and ensuring transparent operation (including highlighting the potential effects of using the solution, and offering the ability to report or flag malfunctions and to request human review) would most probably be of key importance, as the sketch below illustrates.
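The following minimal sketch, written around a hypothetical client-intake chatbot (the topic lists, trigger words and messages are all invented for illustration, not taken from any real system), shows how such safeguards could be wired together: a transparency notice, a narrowly defined scope, and an escalation path to a human lawyer.

```python
# A minimal sketch of the safeguards described above: a transparency notice,
# a narrow scope, and escalation to a human lawyer when a request falls
# outside the chatbot's purpose or the user flags a problem. All topic lists
# and trigger words below are hypothetical placeholders.

IN_SCOPE_TOPICS = {"parking ticket", "small claims", "document status"}
ESCALATION_TRIGGERS = {"criminal", "custody", "urgent", "complaint about the bot"}

NOTICE = ("You are talking to an automated assistant. Its answers are not "
          "legal advice, and you may request review by a human lawyer at any time.")

def route_request(message: str) -> str:
    """Decide whether the chatbot may answer, must escalate, or must decline."""
    text = message.lower()
    if any(trigger in text for trigger in ESCALATION_TRIGGERS):
        return "ESCALATE: forwarded to a human lawyer for review."
    if not any(topic in text for topic in IN_SCOPE_TOPICS):
        return "OUT OF SCOPE: please contact the firm directly."
    return "HANDLE: answered by the chatbot and logged for periodic human audit."

print(NOTICE)
print(route_request("I want to appeal a parking ticket"))  # HANDLE: ...
print(route_request("I need help in a criminal matter"))   # ESCALATE: ...
```

Even where continuous human monitoring is unrealistic, rules of this kind bound the chatbot's autonomy in advance and leave an audit trail for the periodic review that remains feasible.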

It is also worth noting, however, that in the near future law firms and practitioners that do not use AI solutions to the extent or in the manner required in the given practice, and that rely too much on the skills of human professionals, may be more likely to face professional liability claims. Therefore, even considering the difficulties in the insurability of AI, it is likely that new insurance packages and models tailored to specific types of AI solutions used in the legal profession will appear and become widespread in practice. From that point on, AI's unpredictable nature could less plausibly be invoked to justify not using AI solutions deemed reliable in practice or not having appropriate AI insurance policies in place.

3.2 Privacy concerns

It is beyond doubt that chatbots generally rely on large datasets and are continuously fed with data in order to function properly and develop further. Such processing often appears unforeseeable to users, which also leads to privacy concerns, especially in the case of vulnerable groups. The Italian data protection authority, for example, prohibited the provider of an app called 'Replika' from processing the personal data of Italian users. The app created a virtual friend and was often used by children, who could not understand the processing of their personal data and were often exposed to information inappropriate for their age, which could negatively affect their development.24

The Italian data protection authority similarly imposed an immediate temporary limitation on the processing of the personal data of users from Italy in the case of ChatGPT. In its decision, the authority highlighted that OpenAI lacked a legal basis for collecting a massive amount of personal data on its users and using such data for training purposes, provided inadequate information to users on data processing, and also lacked any age verification mechanism that could protect children from accessing information inappropriate for their age.25 Besides the Italian data protection authority, other data protection authorities have also focused on the data protection aspects of ChatGPT. To foster cooperation between European data protection authorities in this respect, the European Data Protection Board (EDPB) recently created a dedicated task force.26

In addition to the opacity often surrounding chatbot solutions, such solutions also very often rely on the automated profiling of users. The information collected in this way is frequently used to further train the solution, to make information or analytics accessible to other organizations, to provide personalized services or to render more and more information about individuals public, especially if online profiles are also analyzed.27 In the case of law firms, the protection of information related to clients and their representatives is even more important than for most other service providers, bearing in mind that the relationship between a lawyer and his or her client presumes the utmost confidence. Communication with clients, as well as information and documentation related to clients, is therefore highly confidential in almost every jurisdiction.

Client information also often involves sensitive personal data, especially for lawyers and firms active in certain fields of law (e.g. criminal defense lawyers or lawyers focusing on medical malpractice cases). In some cases, lawyers can also gain access to negative or highly sensitive information about adversaries or other third parties, the use of which is generally subject to a number of laws and requirements, including privacy laws protecting the personal data of natural persons, as well as competition laws barring market players from disparaging competitors or unethically publishing negative information.

With regard to the above, feeding confidential information to a chatbot or other AI solution can raise privacy and professional responsibility concerns, especially in cases in which the processing of personal data is not disclosed to clients or is not based on an appropriate legal basis, such as consent. As highlighted above, the use of client information can also infringe confidentiality requirements covering both personal and non-personal data, bearing in mind that the use of client information by AI solutions is generally not required in order to provide legal services.

Besides the above, data transfers involving the personal data processed by AI solutions can also be non-compliant with the relevant data protection requirements and remain unforeseen by the individuals affected. This is especially true for transfers to foreign state agencies operating under less restrictive privacy regimes, which can impose significant restrictions and sanctions on the affected individuals, their family members and associates, based on the information received. Companies in such regimes may also feel less inclined to guarantee a high level of data protection and data security unless this is required of them in the respective contract, concluded with respect to the data protection laws applicable in the jurisdiction of the person or entity transferring the personal data.

In accordance with the above, law firms using chatbots and other AI solutions need to carefully assess the scope of information that they can use for certain purposes related to the operation of the given solution (for example, using certain information as training data, for internal analysis or for business development purposes). In addition, legal practitioners processing personal data need to be transparent about their data processing practices and inform their clients and other related data subjects (e.g. client contacts) about the scope of client data collected for any subsequent purpose(s) of processing, as well as about any other essential aspect of such data processing (including, for example, the applicable retention period).
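One technical mitigation implied by the need to limit the scope of client information is to strip obvious identifiers from documents before they reach a chatbot or a training dataset. The sketch below illustrates the idea under strong assumptions: real pseudonymization is far more involved, and the regular expressions here are simplistic placeholders, not a compliance tool.

```python
# A purely illustrative sketch of redacting obvious client identifiers
# (emails, phone numbers) before text is fed to a chatbot or reused as
# training data. The patterns are simplistic placeholders; they do not
# catch names, addresses or other identifiers a real tool would have to.

import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact John at john.doe@example.com or +36 1 234 5678."))
# -> Contact John at [EMAIL] or [PHONE].
```

Such redaction reduces, but does not eliminate, the confidentiality risks discussed above; it complements, rather than replaces, transparency towards clients and an appropriate legal basis for processing.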

3.3 Ethical concerns

Probably of all the concerns involving the use of AI, the ethical ones appear to be the strongest, bearing in mind that the activities and data processing of AI in many cases involve the personal data of human beings, and decisions made by AI often have significant effects on individuals. In the United States, one of the cornerstone documents on human subject research, the Belmont Report published by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, laid down basic ethical principles for research involving human subjects: 1) respect for persons, 2) beneficence and 3) justice.28 These principles and requirements highlight the need for human-centric, ethical research and can also be relevant to algorithmic development and design.29

In the European Union, the guiding ethical principles of trustworthy AI, rooted in European fundamental rights and principles, appear in the Ethics Guidelines for Trustworthy Artificial Intelligence, prepared in 2018 by the High-Level Expert Group on Artificial Intelligence, and include: 1) respect for human autonomy, 2) prevention of harm, 3) fairness and 4) explicability.30 The Guidelines were also welcomed by the EU Commission, which incorporated these principles into the seven key requirements of trustworthy AI: 1) human agency and oversight, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) diversity, non-discrimination and fairness, 6) societal and environmental well-being and 7) accountability.31

These main requirements could also be relevant for the use of AI solutions in the legal industry; however, the specific conditions and particularities of the legal profession also need to be taken into account, such as the relationship between the attorney and his or her client, as well as confidentiality rules. In this respect, attorney-client privilege has been characterized as one of the key privileges of legal practice for centuries, and the attorney-client relationship has been seen as one of the most sensitive professional relationships ever since the emergence of the justice system. Attorneys also frequently act in sensitive or highly confidential matters and need a strong sense of empathy to manage client relations and to treat clients with the understanding they deserve.

Even once a chatbot reaches the level where it can act as a 'virtual attorney' and work independently on complex legal matters, it would do so without any sense of empathy. Although such an approach could carry less risk, or even be regarded as desirable, in business matters (such as company registration or the drafting of business agreements), it can raise significant ethical concerns where a chatbot or another AI solution would need to act as a defense attorney, provide family law advice or mediate between parents or former partners.

The appearance of robolawyers in open court can also undermine society's trust in a ‘human’ court system and lead to the violation of human rights and procedural principles. A robot, for example, speaking for the defense or a chatbot giving a declaration in the name of an absent party would dehumanize the justice system to a level where we would have a hard time recognizing it. The mass use of AI in litigation can also have other unwanted effects, such as encouraging unnecessary litigation or mass reports to authorities for harassment purposes.

In addition to the ethical considerations discussed above, the proliferation of legal chatbot services in an unregulated environment can further generate tensions within the legal profession. AI does not need rest, free time or a salary, and can process information a thousand times faster than a human lawyer, which also means that, after reaching a certain level, AI would be in a position to easily outcompete human legal professionals in terms of both quantity and quality. Only the 'human side' of lawyering would be left as the sole trump card of human lawyers. This would favor practitioners focusing on – inter alia – marital or criminal law, or other fields where ethical considerations or human aspects are stronger. We will most probably see fewer transactional lawyers in the coming decades, since their work would most likely be taken over by AI solutions; however, it also seems impossible that even the smartest AI solutions could fully dominate any field of law or practice without the involvement of human professionals. In addition, clients would in many cases prefer to have a human lawyer oversee the work, also leaving plenty of room for human professionals, especially those who can effectively adapt to the new environment and master different AI solutions.

In accordance with the above, it also seems probable that SMEs, sole entrepreneurs, NGOs and other smaller organizations would be more willing to retain cheaper robolawyers than human legal professionals in easier cases, for example for a monthly subscription fee or a one-off fee in certain individual cases. Even bigger companies would most probably be more willing to use legal chatbots and similar solutions to execute simpler transactions and business contracts, as well as for corporate housekeeping, reducing the money spent on external lawyers and in-house counsel. Bearing this in mind, legal departments focusing on general matters in a wide number of sectors would probably see a reduced need for a human workforce as well, but key members of the given team, as well as experts and executives, would still be needed and would play an essential role in everyday operations.

3.4 Regulatory challenges

Proper regulation of the use of chatbots in the legal market is becoming an increasingly pressing issue. As seen from the cases discussed earlier, a number of regulators have already chosen to open the door to Legal Tech service providers and solutions that can solve simple matters or help clients find the relevant law or expert. Despite the legal market's willingness to involve disruptive solutions in everyday work, Legal Tech service providers are often required to involve lawyers at some stage, in an effort to protect clients from robotic malpractice and to guarantee high-quality legal services.

The development and the proper use of legal chatbots, however, would require a more comprehensive and layered regulation, which should especially take into account the following (a schematic sketch of how such criteria could be combined follows the list):

  • the activities undertaken or services provided by chatbots and whether their activity could be regarded as a practice of law requiring a license under the relevant jurisdiction;

  • the legal services or functions regarding which, or with respect to which, the chatbot is used;

  • the branch of the justice system affected;

  • the autonomy of the AI solution;

  • the effects of the use of the solution on clients and other third parties;

  • the necessity of the oversight of work by a lawyer.
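To illustrate what "layered" could mean in practice, the sketch below encodes the criteria listed above as a simple risk-tier rule: higher autonomy, activities amounting to the practice of law, effects on third parties and sensitive branches of the justice system pull a use case into a stricter oversight tier. The tiers, field names and thresholds are hypothetical and are not taken from any existing or proposed legislation.

```python
# An illustrative encoding of the regulatory criteria listed above as a
# layered risk-tier rule. All tiers and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class ChatbotUseCase:
    practices_law: bool         # activity would count as practice of law in the jurisdiction
    function: str               # e.g. "intake", "research", "drafting", "representation"
    branch: str                 # branch of the justice system, e.g. "civil", "criminal"
    autonomy: int               # 0 = mere tool, 1 = drafts for review, 2 = acts autonomously
    affects_third_parties: bool # effects on clients and other third parties

def required_oversight(uc: ChatbotUseCase) -> str:
    """Map a use case onto a (hypothetical) oversight tier."""
    if uc.function == "representation" or uc.branch == "criminal":
        return "prohibited without a licensed human lawyer acting"
    if uc.practices_law or uc.autonomy >= 2 or uc.affects_third_parties:
        return "mandatory review of output by a lawyer"
    return "permitted with transparency notice and periodic audit"

print(required_oversight(ChatbotUseCase(False, "research", "civil", 1, False)))
# -> permitted with transparency notice and periodic audit
```

The design point is that no single criterion decides the matter: the tier results from the combination of what the chatbot does, where in the justice system it does it, and how autonomously it acts.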

Although the above aspects should be borne in mind when creating the future regulation of chatbots and similar AI solutions in the legal market, various other aspects need to be taken into account as well, including – inter alia – the circumstances of the given chatbot's application and its effects on clients and other users and third parties. Future chatbot regulation should also focus on combatting misinformation, as well as bias resulting from an insufficiently diverse dataset used for training the given solution.32

Inaccuracy or ineffectiveness can also markedly damage users' perception of chatbot solutions. The US-based Identity Theft Resource Center, for example, created a chatbot in 2021 in order to respond to victims of identity theft outside customer service working hours; the chatbot, however, was criticized for ineffectiveness and for being unable to provide useful and up-to-date answers or to properly understand users' questions.33 This clearly shows how user experience can affect the usability of different solutions and why understanding and properly reacting to customer queries is so important in the life of a chatbot.

Also, more focus should be put on protecting users' mental health, especially in the case of chatbots that are more likely to access sensitive information or have apparently intimate conversations with users. A chatbot user in Belgium, for example, took his own life after conversations with a chatbot exacerbated his existential fears.34 Similarly, ill-programmed chatbots can persuade potential clients into following ill-conceived strategies, incurring risks and liabilities and undertaking huge investments for unpredictable results. Negative effects can be especially dangerous in cases where the chatbot is used by vulnerable groups (such as children, the elderly, crime victims or patients) or in highly sensitive matters. In these cases, a human lawyer would need to be involved, and the chatbot would be required to detect and indicate the necessity of human involvement.

It is further noted that even a comprehensive and layered regulatory approach would face a wide number of challenges in the regulation of legal chatbots, and regulators would need to strike a good balance between protecting the quality of legal work and the trust between lawyers and their clients on the one hand, and opening the market to ever smarter AI solutions that decrease legal costs on the other.

It also seems reasonable and likely that legal chatbots will be barred – at least in democratic countries – from taking over certain functions reserved for human lawyers (e.g. acting as defense counsel), even when the technology reaches a level at which a chatbot could outperform human legal professionals. It further seems likely that human review will remain essential for a long period to come, especially in complex matters usually requiring contributions from senior lawyers. It also needs to be emphasized that the tasks to be performed by AI solutions in the legal industry will largely depend – besides relevant future legislation – on the acceptance of certain methods or services undertaken or provided by AI solutions, as well as on client needs and societal and economic changes.

4 Closing remarks

Legal chatbots have certainly made an impactful appearance in the legal market. They have the potential to help clients find information more efficiently and to make the law more accessible to those who could not otherwise afford legal services. In addition, chatbots can be useful tools for law firms on many levels. They can help connect lawyers and clients, raise client satisfaction and make daily work, intra-firm communication and the management of administrative tasks more efficient. Their operation also helps reduce costs and frees up capacity for more complex tasks.

Despite the potential of chatbots in the legal market discussed above, there are still a number of professional, legal and ethical concerns regarding the use of the technology which need to be addressed. Chatbots and similar AI technologies have not yet reached a level where they could replace a human lawyer and independently provide advice, create and revise contracts or represent clients. It must further be emphasized that even if they were capable of replacing human lawyers, their use would be highly questionable or inappropriate in certain matters. For example, a legal chatbot could theoretically advise the victim of a violent crime about his or her procedural rights and obligations, as well as the remedies such a person could seek; however, a chatbot could not give the support and show the compassion that a professional human lawyer could in such a case.

Also, the collection and use of personal data collected from clients and persons acting in their names may appear less transparent. This could nevertheless be remedied by duly informing clients and such persons about the purposes for which their data would be collected and subsequently used, as well as about other important aspects of data processing, such as the retention period or any transfer to third parties for further analysis or other use. It must be highlighted, however, that despite the best efforts on the part of the entity using the AI solution, data processing by AI can in many cases lead to unforeseen decisions and effects on a wide number of persons. In this respect, it is also essential to correctly set and limit the decision-making power of the solution used and to monitor its operation and the decisions it makes; such measures will play a central role wherever legal chatbots and similar AI solutions are used in the legal market. Future regulations also need to correctly address such risks, in accordance with ethical and professional liability considerations.

With respect to the above, the regulation of chatbots and other AI solutions in the legal market is currently in its infancy and will take years to evolve into a versatile and universally applicable set of rules. This also requires a multilateral approach that takes into account the activities and different functions to be undertaken by the chatbot, as well as other aspects of its use, especially its effects on clients and other third parties. The main focus of any regulatory concept, however, should remain the interests of clients and the wellbeing of a democratic society.
