Application of Legal Technology: Critical evaluation of some use cases 3


Under resource-constrained conditions, some AI-powered solutions can be employed to promote productivity, bridge resource gaps, and still deliver excellent and efficient services to clients. In this third part of the Application of Legal Technology series, we discuss some more AI-powered solutions for firms.

LexisNexis, now Lexis+, has over the years given small and mid-size law firms with limited resources a go-to legal research platform, integrating the content and tools needed for efficient and thorough legal research. Its technology allows lawyers to uncover opinions, identify cases, and connect cases that might otherwise be overlooked. It also provides lawyers with insights on judges, lawyers, law firms, and courts, so they can work from well-researched data and gain an advantage in making superior fact-based arguments. Its ability to unearth cases and publish them online quickly gives it an edge over other solutions. With its Shepard's feature, it creates focused research opinions based on citation patterns by extracting legal concepts from a lawyer's brief and sources. It applies a combination of machine learning and natural language processing.

Lexis+ has proven to be a smart solution for firms with limited resources, accelerating lawyers' tasks and finding efficiencies that relieve them from being overwhelmed and overworked. It also helps such firms estimate the litigation timeline for a case before a specific judge or court, and even determine the venue that may best suit their client's case. They can likewise assess an opposing lawyer's record in similar cases and design a litigation strategy accordingly. With its litigation analytics, lawyers can explore ways of acquiring new clients and set pricing expectations using data on the average time taken to resolve matters in a given jurisdiction. In short, it helps them operate efficiently through data-driven insights.

Kira Systems is another AI-powered solution; it supports more accurate due diligence review of contracts by searching, highlighting, and extracting relevant content for analysis. It also allows continuous review by other team members, using the extracted information with links back to the original source. It is estimated that this AI completes tasks up to 40% faster on first use and up to 90% faster for experienced users. It relies on patented machine learning for the identification, extraction, and analysis of the contracts and documents fed to it, extracting concepts and data points at rates of efficiency and accuracy that were not possible with traditional rules-based systems. Beyond its patent, its quick-study capability, partner ecosystem, built-in intelligence, and adaptive models set it apart from the rest.

Its suitability for law firms with limited resources (whether human capital or financial) includes the fact that the software is available both in the cloud and on-premises, giving firms the flexibility to choose the deployment that best fits their capabilities. Because of its many use cases across due diligence, compliance, project management, knowledge management, finance, M&A deal points, and lease abstraction, it can serve small firms well in improving their overall efficiency.

LawDroid, a chatbot AI, can also be used by firms with limited resources. Amanda Caffall, Executive Director of The Commons Law Centre, stated that LawDroid "helps our non-profit start-up law firm sort the vast unmet market for legal services into people we can help and people we can refer to other resources, saving us precious time while enabling us to make much-needed referrals." Such chatbots are typically hosted on a law firm's website, making the firm available to potential clients 24/7. Using videos and responsive conversations, they build trust with potential clients and capture their information as new leads for the firm. They also give the firm in-depth knowledge of its clients, supporting data-driven decisions. Using conditional logic, LawDroid can intelligently assemble robust documents from information gathered from clients. Firms can scale their expertise and services, charging for self-serve legal documents, issue spotting, and legal guidance even while the business is asleep. It applies natural language processing to readily answer legal questions from the clients it engages with. The 2020 Legal Trends Report found that 79% of potential clients expect a response within 24 hours of reaching out; LawDroid and other chatbot AIs can respond to this need in seconds. Overall, LawDroid helps save time and money and improves efficiency and profitability, while providing an efficient customer service experience and satisfaction.
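The conditional-logic intake described above can be sketched in a few lines. This is a minimal illustration of the general technique, not LawDroid's actual implementation; the practice areas, income threshold, and field names are invented for the example.

```python
# Sketch of conditional-logic client intake, as a legal chatbot might
# apply it: triage a visitor's answers into "help", "refer", or a
# fee arrangement, and capture contact details as a new lead.
# All rules and thresholds here are illustrative assumptions.

def triage(answers: dict) -> str:
    """Route a prospective client based on intake answers."""
    if answers.get("area") not in {"family", "housing", "employment"}:
        return "refer"           # outside the firm's practice areas
    if answers.get("income", 0) > 50_000:
        return "full_service"    # standard paid engagement
    return "sliding_scale"       # low-income client the firm can still help

def build_lead(answers: dict) -> dict:
    """Capture contact details and the routing decision as a new lead."""
    return {
        "name": answers.get("name", ""),
        "email": answers.get("email", ""),
        "route": triage(answers),
    }

lead = build_lead({"name": "A. Client", "email": "a@example.com",
                   "area": "housing", "income": 20_000})
print(lead["route"])   # sliding_scale
```

In a real deployment the answers would come from a web conversation and the routing rules would be configured by the firm, but the decision structure is the same.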

Just like any other technology, AI-powered solutions in the legal industry have their own limitations. The OECD, in a 2019 report on algorithms in society, described the AI system lifecycle as design, data and modeling, verification, deployment, and operation and monitoring. The initial phases of that lifecycle (design, data, and modeling) carry inherent limitations of their own.

Data is one such limitation. AI-powered solutions use machine learning, deep learning, neural networks, and natural language processing, all of which feed on big data to train the models that power the solution. Machine learning, for example, can detect patterns that humans would not easily identify, but it detects only the patterns present in its training data and cannot recognize patterns that exist outside that data. Thus, even data that is accurate and complete may still lack contextual patterns that exist beyond the training set. Thomas Redman, in his article 'If Your Data Is Bad, Your Machine Learning Tools Are Useless', explained that to properly train a predictive model, historical data must meet exceptionally broad and high-quality standards. First, the data must be right: correct, properly labeled, and so forth. But you must also have the right data: lots of unbiased data, over the entire range of inputs for which the predictive model is being developed. Shlomit Yanisky-Ravid and Sean Hallisey, writing on equality and privacy by design, identified the key attributes of data as volume, velocity, variety, and veracity. On veracity, they argued that limitations arise from the deviation of the data from the real world: where selection bias exists, the trained model will not reflect actual conditions because of errors in sampling the data. AI for predictive analytics is also limited by unavailable data; Nate Silver likewise identified a lack of meaningful data as one of the two principal factors limiting the success of predictive analytics. A further data limitation is that where AI-powered solutions perform predictive analytics, most of the data they rely on is generic in nature, making factual distinctions between cases difficult to track.
Most evaluations are based on published opinions covering only the facts the courts found relevant, excluding the full factual record of each case; the predictive model is therefore limited in finding meaningful factual similarities between past and prospective cases, which is why Lex Machina, for example, makes use of trial-level records in its analysis. Finally, where training data carries inherent patterns of inequality, discrimination, or prejudice, those patterns will surface in the model's outcomes.
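The selection-bias point above can be made concrete with a small numerical sketch. The figures are invented for illustration: a model trained only on the biased sample (say, published opinions) learns a very different base rate than one trained on the full record.

```python
# Sketch: how selection bias in training data skews what a model "learns".
# All numbers are invented for illustration. Suppose published opinions
# over-represent plaintiff wins relative to the full docket of cases.

full_record = [1] * 400 + [0] * 600   # every case: 40% plaintiff wins
published   = [1] * 300 + [0] * 100   # biased sample: wins over-represented

def win_rate(outcomes):
    """The base rate a simple predictive model would estimate."""
    return sum(outcomes) / len(outcomes)

print(win_rate(full_record))   # 0.4  (the true base rate)
print(win_rate(published))     # 0.75 (what the biased sample suggests)
```

A model trained on the second dataset would systematically overestimate a plaintiff's chances, not because the data is inaccurate, but because of how it was sampled.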

Design also introduces its own limitations to an AI-powered system. In modeling an AI-powered solution, the human element is critical, which makes the AI susceptible to human biases from the design stage onward. Kate Crawford wrote: "Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters, from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old biases and stereotypes." This highlights how bias introduced at the design stage is replicated in the model, which in turn churns out biased outcomes. Kleinberg et al. identify three design choices that can lead to algorithms operating in a discriminatory manner: the choice of output variable, the choice of input variables (candidate predictors), and the choice of training procedure.

These solutions are also limited if the data fed to them is not updated to reflect changes in the relevant laws, policies, or regulations. A rule-based AI solution, for example, may continue to give clients automated answers and decisions based on a law that has since been repealed; where regulations change but the solution is not updated, the outcomes it churns out will be wrong. Accountability is not, strictly speaking, a limitation of the technology itself but of the system of governance around it: questions of accountability relate to transparency, explainability, and interpretability. On transparency, the process by which an algorithm turns inputs into outputs is often opaque, commonly referred to as the black box problem. As the UK Information Commissioner's Office states: 'The complexity of the processing of data through such massive networks creates a "black box" effect'. This inevitable opacity makes it very difficult to understand the reasons for decisions produced by deep learning.
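The stale-rule failure mode is easy to see in miniature. This is a generic sketch of a rule-based lookup, with an invented rule name and values; it shows how the system keeps giving the old answer until its knowledge base is maintained.

```python
# Sketch of the stale-rule failure mode in a rule-based legal assistant:
# answers come from a lookup of encoded rules, so a repealed rule keeps
# producing wrong advice until it is updated. The rule and its values
# below are invented placeholders.

rules = {
    "notice_period_days": 30,   # suppose the law changed this to 60
}

def answer(question_key):
    """Return the encoded rule, or defer when no rule is on file."""
    if question_key not in rules:
        return "No rule on file; consult a lawyer."
    return rules[question_key]

print(answer("notice_period_days"))   # 30 -> wrong until the rule base is updated

# The maintenance step the firm must perform when the regulation changes:
rules["notice_period_days"] = 60
print(answer("notice_period_days"))   # 60 -> correct after the update
```

The code itself never errors; the failure is silent, which is exactly why ongoing maintenance of such systems matters.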

Finally, on the limitation of bias and the legal ceiling to be applied: with proper regulation, algorithms can help to reduce discrimination, but the key phrase here is "proper regulation," which we do not currently have. If properly designed and used, algorithmic systems can effectively demonstrate bias in human endeavours and therefore be a positive force for equity. Brian Sheppard, writing on trade secrecy in AI tools, noted that secrecy makes it harder for consumers to realize the full benefits of a competitive marketplace. Further regulation around the development of AI systems would thus bring enormous benefits for lawyers.

In conclusion, AI-powered solutions have, in their diverse ways, impacted the legal industry positively, as discussed in this and previous articles, through their unique contributions to the efficiency, operational strategy, excellence, and profitability of the firms that employ them.

Author: Ing. Bernard Lemawu, BSc Elect Eng, MBA, LLB, LLM Cand. | Member, Institute of ICT Professionals Ghana

For comments, contact author

