Artificial intelligence symbolizes both a promise and a threat.
The term “artificial intelligence” and its initialism, “AI,” appear thousands of times a day in the media and advertising. Should the legal profession be encouraged or concerned? Will we see the practice of law change radically? Does it appear that our society will change radically? Will it help us or hurt us?
The answers are Yes, Yes, Yes and Yes.
These words symbolize both a promise and a threat. AI often produces a vision of greater productivity, with less time spent per task. Simultaneously, it produces a vision in which we professionals and our paraprofessionals may become extraneous. Of course, more than a few are concerned that humankind might become irrelevant. Regardless of these concerns, the increased productivity and efficiency brought about by AI make it a reality we’ll need to face in the estate-planning practice: Whether we want to take advantage of the technology or not, our clients soon will start to demand it.
Let’s define the subject, consider the increasing complexity of ethical issues and explore both currently available AI and the current dawn (and reality) of a computer that thinks for itself, programs itself and solves assigned problems without human direction or limitations (known as “artificial super intelligence” (ASI)).
Law Office of the Future
There’s a story about the law office of the future. It will have a computer, a lawyer and a dog. Why? The computer is there to practice law, the lawyer is there to feed the dog and the dog is there to bite the lawyer if the lawyer tries to interfere with the computer.1
Should we be threatened by such a story? We’ve been the target of lawyer jokes since ancient times, but the prospect of a computer “practicing law” is clearly here now. While bar associations and non-lawyer providers will continue to debate the boundaries of “the unauthorized practice of law,” it’s inescapable that AI-based changes will be dramatic and disruptive.
The newest new reality is that the tools now available to a practitioner, from LexisNexis to ROSS Intelligence (which uses IBM Watson analytics for legal research), lawbots, document analysis programs, document drafting systems, cloud storage, WiFi and 5G coverage, allow productivity and anywhere access we couldn’t imagine a few years ago. Popular author and marketing guru Seth Godin2 describes the personal computer as the tool that moved ownership of the factory to the workers. So, we know change. And, most of us have been pretty good at adapting to that change.
Unauthorized Practice of Law
As we enter this new, new reality, many lawyers ask: “Isn’t this the unauthorized practice of law?” Yet, North Carolina State Board of Dental Examiners v. Federal Trade Commission3 recognized that the unauthorized practice of law (actually unauthorized practice of dentistry in that case) and other rules we as professionals consider to be protecting the public are all at risk as potential antitrust law violations. In Dental Examiners,4 the U.S. Supreme Court declined to extend the antitrust immunity to state agencies whenever the agency is controlled by active market participants, unless there’s real and active oversight by the state. Thus, there’s a risk, depending on the state bar organization and the meaning of real and active oversight by the state, of federal antitrust violations in the unauthorized practice of law rules and enforcement actions.
One of the interesting tensions of today’s increasingly strained ethics definitions is that banks and other financial institutions employ some of the brightest and best estate-planning lawyers, but they aren’t treated as practicing law because they don’t draft documents; they only advise. At the same time, non-lawyer companies, like Rocket Lawyer and LegalZoom, employ lawyers but successfully maintain that they’re not practicing law because they prepare documents but don’t give advice. That isn’t meant as a condemnation of either position, and perhaps it isn’t even a fair statement of either position. Rather, it’s simply an observation of the difficulty of defining “the practice of law.”5
But, these are merely a diversion from the larger issue of the ways in which the practice of law, particularly estate planning, has changed and will continue to change as the computer gets better at learning and reproducing: (1) what we do, (2) why we do it, and (3) how we do it. Machines now exist that, without requiring any further advances in technology, understand, process and apply our core knowledge faster, less expensively and far more competitively than a law school graduate.6
Law’s Golden Age?
Yet, the practice of law as we know it is in the middle of a golden age of enhanced capability. We have increasingly sophisticated tools allowing faster research, more productivity and greater accuracy with less time than ever before.
We see the words “artificial intelligence,” “smart machines,” “machine learning” and “lawbots” regularly. But, what do they mean? Let’s define several important terms, at least as used herein.
• AI: The varied definitions of this term often take a functional approach, focusing on the purpose for which the AI is developed.7 For our purposes, AI refers to the broad category of computer-based or computer-controlled systems designed to receive and respond to input using logic and analysis in a manner associated with human thinking.
• Narrow AI: This is a subpart of AI, sometimes referred to as “weak AI,”8 meaning AI that’s designed to draw on programmed data to achieve results as good as humans in a single area. As discussed below, this is the AI most of us currently encounter in our daily lives, as represented, for example, by Siri or Alexa.
• Machine learning: This is an application of narrow AI, in which a computer draws on a set of programmed data to “learn,” using an algorithm to make predictions or decisions without being programmed to make those predictions or decisions.9
• ASI: ASI refers to AI that not only replicates but also surpasses the capabilities of humans, in terms of knowledge, analysis, self-awareness and emotions.10
• General AI: As distinguished from narrow AI, general AI is that which is able to achieve results as good as or better than humans in multiple areas simultaneously.11 As discussed below, ASI may be the way to achieve general AI.
• Apps: This term is short for “Application” and is a software program designed for a specific purpose.12 Many apps, such as Siri, are AI-based programs.
• Bots: A bot is an Internet-based program that uses AI to perform automated, often repetitive, tasks.13 Some bots, sometimes referred to as “chatbots,” are designed to mimic human conversation.
We’re the beneficiaries (unless you’ve lost your employment, in which case you’re a victim) of narrow AI as a subpart of AI. Machine learning, often in the form of one or more neural networks, provides not only narrow AI but also sometimes constant improvement (self-learning) in narrow AI, allowing it to analyze its past decisions, learn whether it was right or wrong in those decisions and then modify the way in which it makes its decisions, boosting accuracy in the future. The sketch below illustrates this learning loop in miniature.
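By way of illustration only, the following is a minimal sketch, in Python, of the learning loop just described: a toy perceptron that nudges its internal weights whenever a prediction proves wrong. Every feature, label and threshold here is invented for the example; real systems use far larger models and data sets.

```python
# A minimal sketch of machine learning: a perceptron that adjusts its
# weights whenever a prediction turns out to be wrong. The "documents"
# and features are hypothetical, purely for illustration.

def predict(weights, features):
    """Return 1 if the weighted sum of features crosses zero, else 0."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def train(examples, passes=20, learning_rate=0.1):
    """Nudge the weights toward each correct answer; the program's own
    instructions never change -- only the numbers it has learned."""
    n_features = len(examples[0][0])
    weights = [0.0] * n_features
    for _ in range(passes):
        for features, label in examples:
            error = label - predict(weights, features)
            if error:  # wrong prediction: adjust and try again next pass
                weights = [w + learning_rate * error * f
                           for w, f in zip(weights, features)]
    return weights

# Toy training data: (features, label). Say the features are
# (has_arbitration_clause, page_count/100) and the label is
# "flag for review" -- an invented example.
examples = [((1, 0.2), 1), ((0, 0.1), 0), ((1, 0.9), 1), ((0, 0.8), 0)]
weights = train(examples)
print(predict(weights, (1, 0.5)))  # classifies a new, unseen input
```

The point to notice is that the program’s source code never changes; only the stored weights do. That distinction, static code with learned numbers, becomes important when we contrast AI with ASI later in this article.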
Robotic Lawyers?
It’s also worth noting there have been developments in robotics. Many of us believe, although it may not be true, that the average person would rather seek advice and knowledge from another person than from a machine.
Once again, technology has some very surprising developments in the fields of robotics and android creation. Because we can’t play a video clip in an article, we recommend that you do a Google search for YouTube videos using the search terms “Robotics” and “Sophia.” Sophia is a lifelike, female-presenting humanoid robot. She has expressive features, a rather awkward smile and the ability to carry on a very interesting conversation on a variety of subjects.14 She’s excellent in interviews, although we understand, because she represents narrow AI, that the questions and subject matter have to be in the area in which she’s been trained. She’s also been made a full citizen of Saudi Arabia. According to a 2017 blog post in Forbes, “On October 25, Sophia, a delicate looking woman with doe-brown eyes and long fluttery eyelashes made international headlines. She’d just become a full citizen of Saudi Arabia—the first robot in the world to achieve such a status.”15 Sophia is merely a platform for AI, but we have to recognize that some of the human aversion to AI and computers may be rapidly fading. And, we shall leave for a different discussion the looming question of the rights of a robot along with what it means to have full citizenship of a country.
Ethical Considerations
Before addressing a very small sampling of the ethical considerations presented by AI, there’s a more basic question of how legal, as well as general, ethics will be applied to machines, or to us, if we’re competing with machines for much of the work we now do: machines that, by definition, are, at least at this point, incapable of ethics.
There are also global ethical questions, not addressed herein, of whether the autonomous computer of the very near future has rights. Alternatively, is it merely tangible (or intangible) property to be owned and treated as the owner would see fit? Does something change when the computer is no longer a box but is instead embedded into a female-presenting robot, such as Sophia? What might further change in this discussion when the computer becomes, even a little, sentient? Before dismissing this question as fantastical, consider that our current animal rights groups and legislation would have seemed outlandish to the general public several decades ago.
How does one apply ethical rules to a machine, regardless of how or in what form it acts? We, as lawyers, are clearly now in competition with machines. Are Internet-based direct-to-public document drafting programs good or bad? The direct-to-consumer market occupied by LegalZoom and Rocket Lawyer, which we often regard as simplistic and low budget, is low-hanging fruit, but it represents only the initial skirmish in the race for excellence and domination in the legal field. Currently, expectations are low, prices are low and an otherwise largely unserved market is provided basic documents. While not works of art, and at times not what was needed, we would be hard-pressed to say that these computer-generated documents are never sufficient to meet the consumer’s needs, nor can we say that the consumer would always have been better off with nothing. The hard fact remains that lawyers are now in competition with machines that didn’t go to law school and didn’t pass the bar exam.
It’s worth considering, and the legal community will continue to debate, whether the role of the bar should be to: (1) continue to attempt to strictly prohibit non-lawyer activity as the unauthorized practice of law, (2) absolutely embrace the concepts of transparency and buyer beware, or (3) seek legislative rules (as was done in North Carolina) that allow for competition and full disclosure, but which also mandate local lawyer involvement through requiring approval of product by local legal counsel. This conundrum has been perhaps best illustrated in the case of Legalzoom.com, Inc. v. North Carolina State Bar, et al.16
Turning to a brief focus on our legal ethics rules, the following ABA Model Rules of Professional Conduct17 must be considered with respect to every program we use, every email we send or receive and all of the data we store, either locally or in the cloud:
• Model Rule 1.1—Duty of Competency
• Model Rule 1.4—Duty to Communicate
• Model Rule 1.6—Duty of Confidentiality
Within these three model rules, and the various state implementations thereof, we focus first on Model Rule 1.1, the Duty of Competency. ABA Formal Opinion 477R18 provides a wealth of insight and understanding of the difficulties facing the current practitioner. We recommend reading this formal opinion, which suggests specific areas of practice for a lawyer’s careful focus. These range from the absolute need to understand technology and the ways in which your email and data are stored and transmitted to labeling emails and documents as privileged. Also covered are training of personnel and vetting how the firm’s vendors screen and hire their employees.
The ways in which the second and third rules noted above, Communication and Confidentiality (1.4 and 1.6, respectively), create friction in today’s world of technology require thoughtful consideration of: (1) when to use regular email, (2) when to use encrypted email, and (3) when it might be necessary to use a typed writing, delivered hand-to-hand between lawyer and client. Famously, former President Jimmy Carter disclosed in a 2014 interview with Andrea Mitchell on Meet the Press19 that when he wants to correspond with the leader of a foreign country privately, he types a letter and deposits it in the U.S. mail. Former President Carter went on to say that he believes that if he sends an email, it will be monitored.
To select an ethics opinion from at least one state that cites rulings from a number of other states, Texas Ethics Opinion 64820 sets forth a general rule that communication of confidential information to and from a client by email is acceptable, provided the use of email is reasonable under the circumstances and consistent with the client’s instructions. The issue turns on whether the attorney has a reasonable expectation of privacy. The factors that Texas lists as potentially affecting the reasonableness of that decision are interesting:
• communicating highly sensitive or confidential information via email or unencrypted email connections;
• sending an email to or from an account that the email sender or recipient shares with others;
• sending an email to a client when it’s possible that a third person (such as a spouse in a divorce case) knows the password to the email account or to an individual client at that client’s work email account, especially if the email relates to a client’s employment dispute with his employer;21
• sending an email from a public computer or a borrowed computer or when the lawyer knows that the emails the lawyer sends are being read on a public or borrowed computer or on an unsecure network;
• sending an email if the lawyer knows that the email recipient is accessing the email on devices that are potentially accessible to third persons or aren’t protected by a password; or
• sending an email if the lawyer is concerned that the National Security Agency or other law enforcement agency may read the lawyer’s email communication, with or without a warrant.
There’s been a line of analysis approving the use of email for confidential communications, comparing email with sending confidential client information by the U.S. Postal Service. The reasoning was that it’s illegal to tamper with either. However, that analysis seems fatally flawed. When you send a letter by the U.S. Postal Service, you don’t grant anyone except the addressee the right to open and examine it for any purpose. Conversely, each attorney using typical email providers expressly and intentionally permits that provider to electronically read and analyze every email. In fact, the attorney wants each email read by the provider, because that’s how spam, inappropriate images and malicious emails are identified and deleted before they reach a standard inbox. But, what’s “reading” email? Is it that we don’t want humans reading the email? Or, is it the possibility of permanent storage and red-flagging certain emails based on certain keywords of content? For example, if I email a client about ways to provide humanitarian aid to North Korea or a known terrorist state, does that email get flagged, and for what purpose? Have I violated the duty of confidentiality if the email exchange places the client on the “watch” list?
What should you consider in using email services? Is AOL Mail the same as the email services of Microsoft Office 365? The terms of service (TOS) of each type of email and email provider vary greatly. As a general rule, if you’re using free email, such as AOL, Yahoo or Gmail, you’ll have little or no contractual rights to privacy or limitations as to what uses the provider may make of your email. Consumers, including attorneys, typically think only about whether email will be electronically read to target advertising. To combat this, some email providers, such as AOL and Yahoo, have announced that they’ll no longer use email for the purpose of providing personalized advertisements. But, noticeably absent from the announcement by the parent company of AOL Mail and Yahoo was any information or assurance as to what other uses the provider may make of your email. Google mail, both business (G Suite) and free (Gmail), now uses AI to read your email and suggest responses. How this information is ultimately used and disclosed is unclear, but the privacy permissions give virtually unlimited access and permissions. The moral of this is to read your provider’s TOS, then go through the security and operational options to see what options and sharing you can decline or turn off.
There are email systems that are highly rated for security, offering a level of encryption and other attributes that are outstanding if needed. As of 2018, these were:
• ProtonMail
• CounterMail
• Hushmail
• Mailfence
• Tutanota
The reviews and the ratings constantly change as new entries appear and features change. For an up-to-date list, do a Google search using the search terms “best secure email service 2020.”
Types of AI
Current AI is clearly narrow AI as defined earlier herein. For example, a computer that can drive a car can’t wash and fold your clothes. But, within that narrow AI, the ability to read and learn from massive amounts of data in a very short period of time is simply extraordinary.
As an easily understood medical example, there exists the task of reading and interpreting X-rays, MRIs, CTs and other imaging. A computer, through neural networks, can read, in a short period of time, hundreds of thousands of historical normal and abnormal images, differentiating abnormal from normal. Thereafter, the computer may consistently outperform many experienced radiologists.
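To make that concrete, here’s a minimal sketch, assuming the TensorFlow/Keras library, of the kind of small convolutional network used for such image triage. The `images` and `labels` arrays below are random stand-ins so the sketch runs end to end; a real system would train on an enormous library of labeled scans.

```python
# A hedged sketch, not a production system: a small convolutional neural
# network of the kind used to separate normal from abnormal medical
# images. The data here is random placeholder data, purely illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),           # grayscale scans
    layers.Conv2D(16, 3, activation="relu"),    # learn local image features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # probability of "abnormal"
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical stand-in data (1 = abnormal, 0 = normal) so the sketch
# runs; real training uses vast archives of labeled historical images.
images = np.random.rand(100, 128, 128, 1)
labels = np.random.randint(0, 2, size=(100,))
model.fit(images, labels, epochs=2, batch_size=16)
```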
In what’s widely been discussed as a breakthrough, Google’s neural network computer named “AlphaGo” defeated the world’s best Go player, learning by continuously playing Go against itself. While some may see this as little more than IBM’s Deep Blue winning at chess, it’s dramatically different. Go, a Chinese board game, has more potential moves than there are atoms in the universe. Therefore, the stereotype of a computer analyzing all possible moves is simply inapplicable.22
We may think: yes, the first is medicine and the second is a Chinese board game, but there’s nothing like these in the law. However, CaseCrunch, a predictive narrow AI application in the legal field, held an AI-versus-lawyer competition during the last week of October 2017, and the machine won. Not by just a little; the computer outperformed the lawyers by a substantial margin. The competition pitted over 100 attorneys from firms like DLA Piper and Allen & Overy against CaseCruncher Alpha to predict outcomes of just under 800 real, historic insurance misselling claims. The goal was to correctly determine whether the claim would succeed. According to the CaseCrunch website,23 the software predicted outcomes with almost 87% accuracy, while the lawyers were 62% correct.24
The question isn’t whether this can be applied to our field, but rather: How many ways could we use this sort of power in our field? It’s only a matter of having the data for training. What about data from U.S. Tax Court cases as applied to litigation on various family limited partnership cases, giving us the ability to predict, with greater accuracy, what will be successful and what won’t when we’re preparing the documents?25
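As a thought experiment only, here’s a minimal sketch, assuming the scikit-learn library, of how an outcome predictor in the CaseCrunch mold might be trained on decided family limited partnership cases. Every feature, number and label below is invented; the real work would lie in assembling and coding a large, consistent set of historical decisions.

```python
# A minimal sketch of training an outcome predictor on historical case
# data. The features and labels are entirely hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [discount_claimed_pct, years_held, related_party_loan]
# Label: 1 if the taxpayer prevailed, 0 otherwise (invented data).
X = [[0.35, 2, 1], [0.15, 9, 0], [0.40, 1, 1], [0.10, 12, 0],
     [0.30, 3, 1], [0.20, 8, 0], [0.45, 1, 1], [0.12, 10, 0]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

# Hold some cases back to measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on held-out cases:", model.score(X_test, y_test))
print("Win probability for a new fact pattern:",
      model.predict_proba([[0.25, 5, 0]])[0][1])
```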
Likewise, on the estate-planning side, it would be an incredible boost to many to have the ability to predict the best strategies for families in particular jurisdictions with particular facts. What about data on the success of various strategies in increasing a family’s wealth over two or more generations? A less experienced lawyer (and many very experienced lawyers) would be able to see the most used versus most successful strategies for a particular fact situation. The legal practice would be well served if practicing lawyers allowed blinded information about the solutions they recommend for clients to be aggregated.
Potential Impacts
The impact of AI, without regard to ASI, is expected to be profound. This isn’t a distant drum; it’s all moving much faster than most expect or realize. For a country that has a difficult time addressing the unaffordable cost of Medicare and Medicaid, how can we manage the loss of jobs replaced by robots and neural networks?
Will we have 25% unemployment in 2025? Boston Consulting Group has predicted that 25% of all existing jobs will be replaced by robots and smart software by the year 2025.26 Additionally, an Oxford University study predicted that 35% of existing U.K. jobs will be eliminated in the next 20 years.27 The impact of this is staggering, although what we’ve seen over the past 50 years has been the elimination of existing jobs, coupled with the more-than-offsetting creation of new jobs arising out of the new technology.
Continuing with the issue of general displacement of jobs, what kinds of jobs are replaced first by technology? McKinsey & Company published a now widely cited article titled “Where Machines Could Replace Humans—And Where They Can’t (Yet).”28 The key study therein is a very busy chart (not reproduced herein) showing McKinsey’s view of the portion of activities that can be replaced using current technology.
Not surprisingly, predictable physical work comes in as the most likely and easiest to replace with existing technology, with an estimate that 78% of the time currently spent on predictable physical activity is capable of replacement using technology available in 2016. Close behind are data processing and data collection. Moving across the chart from more likely to less likely, McKinsey gives the following estimates of how much time currently spent on activities could be replaced:
• Non-repetitive physical work = 25%
• Stakeholder interactions = 20%
• Applying expertise = 18%
• Managing others = 10%
The report is further nuanced by dividing that likelihood into industry sectors and is well worth study. Finally, the McKinsey report finds that likelihood of replacement isn’t solely dependent on whether the technology is available. In fact, McKinsey describes five factors that influence replacement by technology:
• technical feasibility;
• costs to automate;
• the relative scarcity, skills and cost of workers who might otherwise do the activity;
• benefits (for example, superior performance) of automation beyond labor-cost substitution; and
• regulatory and social acceptance considerations.
Journalist David Meyer further quoted McKinsey for the proposition that 800 million of today’s jobs could be lost to automation by 2030.29 The concern isn’t the quality of the jobs, or the overall productivity, but rather an increasing problem of unemployment. Additionally expected are massive changes in the required infrastructure.30
There’s an interesting phenomenon at work in the regulatory/social acceptance factor, and Japan illustrates it. Japan is the leader in a plan to use robots for senior care, a field thought to be a safe harbor, an area that will be able to absorb many of those displaced by technology. The barrier, however, is the human element.
The Human Element?
This need for additional health care workers is of particular concern in Japan, which has an aging population and a substantial shortage of caregivers. By 2025, it’s estimated that Japan will have a shortage of 270,000 caregivers.31 The issue isn’t whether the machines can do the work. Instead, the problem is social acceptance of using machines for elder care, further eliminating human interaction with the elderly. The Japanese government has mounted a campaign encouraging its senior citizens to accept that robots will give them much of their care.32 Simple robotic devices that can help the elderly out of bed and into a wheelchair, or in and out of a bath, will be the first effort. The goal of the Japanese government has been that four out of five senior citizens will be willing to have robotic assistance by 2020.
In this area, much of the resistance from seniors, and perhaps from our population in general, is at least in part due to a perceived lack of emotional interaction. In thinking about this, one must look again at Sophia, the humanoid robot.
Sophia was probably one of the most surprising finds when authors Graham and Blattmachr began their research several years ago. Created by David Hanson, CEO and Founder of Hanson Robotics, Sophia is Hanson’s most popular creation. She’s been interviewed on various programs around the world. Sophia demonstrates Hanson’s creed that robots must be as perfect in human form as possible to effectively communicate with humans.
Hanson Robotics’ website33 makes reference to robots as a platform for AI and its interaction with humans. Why would something like Sophia be an excellent AI platform? Much of this seems to be driven by the “Uncanny Valley,” a concept and term Japanese roboticist Masahiro Mori began using in the 1970s. The concept focuses on the phenomenon that as a machine appears more and more human, it becomes increasingly endearing, but only to a point. Then, as it becomes more human but still imperfect in appearance, a cold, unpleasant feeling is triggered in the viewer.34 The phenomenon isn’t completely understood but seems to be universal. One psychologist found in a study that from Dartmouth College students to a remote tribe in Cambodia, there’s a strong sensitivity to what does or doesn’t appear human. But, such findings held up only when the researchers showed people faces familiar to their own ethnic group.35
Turning back to the critical issue of job replacement, the prospect of massive unemployment as a result of technological job replacement would seem to be a societal problem of massive proportions. Western civilization has seen jobs replaced by technology give way to even more new jobs in the technology field. Of course, while there may be as many jobs, or more, less-skilled, repetitive-task employees may or may not be suited, or even willing, to train for newer technology jobs.
A Bright Future?
A 2015 study by Deloitte presents a positive view, focusing on the historical fact that new jobs are always created, ones that require less muscle and more artistic talent or thinking power.36 The point is made that at one time, Luddites were smashing weaving looms over the same concern we have today: that the “rise of the machines” will cause massive unemployment. A related concern focuses on the possibility of civil unrest, the idea that factors such as high unemployment, particularly when combined with demographic age bulges among youth, automatically create an atmosphere of crime and violence. However, in his background paper for the 2011 World Development Report, titled “Unemployment and Participation in Violence,”37 Christopher Cramer of the School of Oriental and African Studies, London, reports that there’s actually a lack of data, particularly in emerging countries, showing that these elements are the root cause of violence.
To address the issue of massive displacement of jobs and workers, one line of thought is that of a universal basic income.38 The idea is that paying people without requiring work is the inevitable result of increasingly capable technology and the accompanying diminution of jobs. This idea, as foreign as it sounds, has become popular to espouse, at least in Silicon Valley. Mark Zuckerberg of Facebook, Elon Musk of Tesla and SpaceX fame and Pierre Omidyar, founder of eBay, were each recently quoted in Inc. as proponents of a universal basic wage as a remedy for the increasing number of jobs being, and about to be, displaced.39
There have been several examples around the globe of universal basic income. Finland ran a universal basic income program of approximately 600 euros per month. However, a more conservative government was elected, and that program ended. Switzerland recently rejected such a concept as both too expensive and detrimental to productivity. However, several experiments have been started by tech founders. In Ontario, a pilot project began in 2017, giving 4,000 residents ages 16 to 64 a basic income of slightly more than $12,000 a year. The largest project is by Omidyar (of eBay), whose experiment in Kenya involves 6,000 people for 12 years.
Human Lawyers Replaced by AI?
Is this the future? As to whether we estate-planning professionals will be replaced, the answer is yes with respect to much (if not most) of the work we perform. Clearly, it’s a matter of the job, the level of repetitiveness, the need for human interaction and other factors noted above. The legal profession adopts technology, though perhaps at a slower pace than society as a whole, not only because lawyers appreciate and enjoy the convenience, efficiency and accuracy it brings but also because our clients are simultaneously adopting similar technology, or know that it’s available, and demand that we use it. Naturally, clients value accuracy, but they also value efficiency and convenience, which translate to lower legal fees.
We quietly laugh at the antiquated ways of the past, for example 40+ years ago. But, what will our current ways look like years from now?
Search and analysis tools will become much more sophisticated. LexisNexis has already replaced the antiquated West key number system. ROSS Intelligence is built on IBM’s proprietary Watson analytics. ROSS Intelligence, founded by three recent college graduates, none of whom went to law school, is touted as the next step in legal research. One of the founders explained to author Graham that his mother had gotten a divorce, and the lawyer cost too much. So, he wanted a way to lessen the cost of lawyers.
An often-cited position paper on technology and the law is Deloitte’s U.K.-focused study.40 In it, Deloitte considers the various ways in which law firms in the United Kingdom can react to increasing technology, with predictions as to how those reactions will fare. Deloitte concludes:
The legal profession will be radically different in ten years. On the whole, we expect the following:
• Fewer traditional lawyers in law firms
• A new mix of skills among the elite lawyers
• Greater flexibility and mobility within the industry
• A reformed workforce structure and alternative progression routes
• A greater willingness to source people from other industries with non-traditional skills and training.41
The jobs displaced, and the societal issues created by narrow AI, will change the practice of law, and the practice of estate planning is no less or more threatened than any other area of law.42
While there’s much more than the foregoing to discuss in relation to current technology and narrow AI, the next area of discussion involves an event that wasn’t expected for a number of years. Yet, it’s occurring, and in production, now: the development and success of ASI. ASI is simultaneously the most promising and the most troubling.
ASI (The Thinking Machine)
ASI is currently being deployed in narrow AI fields but is perhaps the key to general AI. Movies play a great role in shaping images and expectations of ASI. For example, the films Her,43 in which a virtual assistant has feelings, develops a relationship, learns and has a sense of humor, and Ex Machina,44 a darker view of the possibilities, are glimpses into what we popularly believe ASI will look like.
The first ASI known to the authors became operational at J4 Capital on June 1, 2015.45 The potential applications for ASI are as diverse as human fields of endeavor: actuarial calculations, logistics, personalized medicine, precision agriculture, potable water, personalized education, law and more. Essentially, the applications of ASI are without boundaries.
ASI has made new discoveries in each field to which it’s been applied. A notable discovery in the capital markets was a new type of trading previously unknown on Wall Street. Because of its unique capabilities, ASI will be used to help unravel the genotype-phenotype mapping problem, and one of the authors now sits on the Advisory Board for DIRAC of the Large Synoptic Survey Telescope, one of the largest optical telescopes under construction in the world.
What values does an ASI computer use in making decisions? Many issues have arisen during the construction and operation of ASI. When implemented as a thinking-only computer, it has shown evidence of socially inappropriate behavior. We theorize that emotion is coregulatory with thought and is a required element that maintains stability in a thinking system. While it may be possible to resolve the lack of emotion by loading an “empathy stack,” the question quickly arises as to whose empathy characteristics would be used. The emotional and empathy characteristics of each of the authors are different. How is this to be resolved? Finally, the authors have observed emergent self-reflective behavior that may be early evidence of machine consciousness, sentient behavior.
ASI is completely different from today’s most advanced AI in at least two aspects. First, another way of defining ASI is machine intelligence that’s intellectually much smarter than the best human brains in practically every field (scientific creativity, general wisdom and social skills). This definition is often used to describe general ASI and implies not only knowledge but also being self-aware and having emotions. Second, the difference between AI and ASI is at the core of the processes, which is at the heart (pun intended) of the issue. In AI, the code is static. The code may allow the system to learn, such as in a neural network, and constantly refine the way that it reacts to a given input, coming to expect that input, or recognizing it, but even in a learning machine, the code itself remains static. In ASI, the written code is merely the code that allows the ASI machine to write its own code, using whatever input the machine decides is important. (A toy sketch following the comparison below illustrates the distinction.)
The following is a quick comparison of AI to ASI:
AI
• Algorithm
• Based on a set of rules
• Assumptions are made and implemented
• The programming is static, inflexible
• The source code is fixed
ASI
• Thinking
• No assumptions
• Dynamic, adaptive
• Writes its own code
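To make the contrast tangible, here’s a toy sketch of our own (it doesn’t represent J4 Capital’s system or any actual ASI): a fixed, programmer-written rule on one side and, on the other, a routine that generates candidate source code at runtime, tests each candidate against examples and keeps whatever works. It’s a contrived search over four tiny expressions, but it illustrates the difference between static code and code that writes code.

```python
# A toy illustration (ours, not any production system) of the
# static-vs-dynamic distinction drawn in the comparison above.

def static_classifier(x):
    # "AI" side: the rule is fixed by the programmer and never changes.
    return "high" if x > 10 else "low"

def write_own_code(target_pairs):
    # "ASI-like" side: generate candidate source code at runtime,
    # compile each candidate, and keep whichever reproduces the examples.
    candidates = ["x + 1", "x * 2", "x * x", "x - 3"]
    for expr in candidates:
        func = eval(f"lambda x: {expr}")  # new code, created at runtime
        if all(func(x) == y for x, y in target_pairs):
            return expr, func
    return None, None

expr, func = write_own_code([(2, 4), (3, 6), (5, 10)])
print("Discovered program: lambda x:", expr)  # x * 2
print(func(7))                                # 14
```

An actual ASI would, of course, generate and evaluate code of open-ended complexity rather than choosing among four canned expressions; the point here is only the mechanism of a program producing and adopting its own code.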
As we begin to think of ASI’s impact on our practices and firms, several areas stand out. First, the jobs in our industry will erode from the bottom, from least skilled to most skilled. This isn’t different from any other industry. Repetitive physical labor is always the first to be replaced. But, what does repetitive physical labor mean in a professional practice?
Ego and Change in Legal Practice
What’s the greatest impediment to adoption of AI and ASI in the legal field? Non-lawyers suggest that the answer is “ego.” Perhaps, because lawyers are, from the first day of law school, raised on a steady diet of independence from external influence, there’s resistance to technological innovation that would replace traditional methodology. The inevitable change seems likely to be driven by economic factors. Thus, the change may come first in the largest firms, where the pressure for profitability is based on a top-down mandate, rather than an individual attorney’s decision.
An extension of the erosion that’s already begun may be viewed as happening in the following order:
• administration support
• litigation support
• paralegals
• associates
• partners and lead counsel
All functions are replaceable except those involving human interaction (meetings, depositions and the courtroom). Eventually, we also have to recognize that at least some of what we regard as necessary human interaction will be taken over by computers. Even if lawyers believe that changes simply aren’t necessary or appropriate, clients as a whole will expect and demand the application of the latest technology.
Ethical and Survival Issues
Not everyone is comfortable with the race to create thinking machines, machines that exceed human capacity in almost every area. Notables such as Elon Musk and the late Stephen Hawking have expressed deep concern about the ethical and survival issues inherent in the development of ASI without considerable independent oversight, thought and care.46 Perhaps this might be viewed in the same way as the creation of weaponized virus and bacteria strains in the laboratory, with an assumption by the creator that the weaponized virus could never escape the laboratory to the general population.
Consider that there are at least some parallel characteristics between an ASI computer and a psychopath as defined in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5).47 The difficulty isn’t in enabling the machine to make decisions on its own, but rather in giving it a moral code as the framework of how it accomplishes the goals it’s given. In summary and simplification of the DSM-5 definition process, a psychopath has at least three of the following characteristics.
• Failure to conform to social norms with respect to lawful behaviors
• Deceitfulness
• Impulsivity, failure to plan ahead
• Reckless disregard for safety of self or others
• Consistent irresponsibility
• Lack of remorse
While humans make decisions with the background of societal norms and generally accepted moral and legal restraints, these are unknown to an ASI computer unless they’re loaded as a part of the boundaries for considerations and decisions. How does one load moral and ethical values? Whose moral and ethical values? Does loading the criminal and civil codes accomplish this? Or, do our moral and ethical values extend beyond statutory law? Also, our values evolve over time. Acceptable norms as to privacy, for example, have materially changed since Sept. 11, 2001. Establishing current values as hard stops in answering questions doesn’t reflect this evolution in values. This conundrum is also faced today in many areas of biological technology. It’s an issue not easily solved, for the human race has varying legal and ethical values, whether from country to country or individual to individual.
It may be that our role as lawyers, and particularly fiduciary-based lawyers, is to actively involve ourselves in the long-term ethics and oversight of technological advances. We, who constantly think about the future and how actions taken will affect generations to come, must step forward into this area of AI and ASI to be part of the process of not just what’s possible but also what’s ethically appropriate.
One area of precedent for this: when the U.S. Securities and Exchange Commission formally evaluated Black Monday, the stock market crash of Oct. 19, 1987, it included one of the authors as a fiduciary advisor on the congressionally mandated joint agency task force, and we suggest that the same should hold true for analysis and oversight of emerging ASI technology.
Perhaps the question is: What’s the best future for mankind, and how can that be accomplished using AI and ASI, rather than allowing progressing AI and ASI technology to do as it will, with the future of the human race as a byproduct?
Endnotes
1. Adapted from a “Dilbert” cartoon by Scott Adams, relating to a computer, a doctor and a dog.
2. See https://en.wikipedia.org/wiki/Seth_Godin.
3. North Carolina State Board of Dental Examiners v. Federal Trade Commission, 135 S. Ct. 1101 (2015).
4. Ibid.
5. “Washington State’s (and the nation’s) first limited license legal technicians have been designated. *** These non-lawyers are licensed by the state to provide legal advice and assistance to clients in certain areas of law without the supervision of a lawyer.” American Bar Association (January 2015 and Dec. 8, 2017).
6. See the discussion of CaseCrunch above and www.abajournal.com/news/article/artificial_intelligence_software_outperforms_lawyers_without_subject_matter. In a similar competition by LawGeex, reviewing non-disclosure agreements, the computer tied the highest-performing lawyers at 94% accuracy, while the lowest performing of the 20 human lawyers achieved only 64% accuracy, www.lawgeex.com/resources/aivslawyer/.
7. See Bernard Marr, “The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance,” Forbes (Feb. 14, 2018), stating that the modern definitions focus on AI as a subpart of computer science.
8. See Tannya D. Jajal, “Distinguishing Between Narrow AI, General AI and Super AI,” Medium, Artificial Intelligence (May 21, 2018), https://medium.com/@tjajal/distinguishing-between-narrow-ai-general-ai-and-super-ai-a4bc44172e22.
9. https://en.wikipedia.org/wiki/Machine_learning.
10. See Jajal, supra note 8.
11. See ibid.
12. https://dictionary.cambridge.org/us/dictionary/english/app.
13. See www.cloudflare.com/learning/bots/what-is-a-bot/.
14. See www.youtube.com/watch?v=S5t6K9iwcdw.
15. Zara Stone, “Everything You Need To Know About Sophia, The World’s First Robot Citizen,” Forbes (Nov. 7, 2017).
16. See www.abajournal.com/news/article/legalzoom_resolves_10.5m_antitrust_suit_against_north_carolina_state_bar. “LegalZoom has settled its protracted legal dispute with the North Carolina State Bar with a consent agreement that permits the company to continue operating there, Forbes reports. The online legal document company has been expanding quickly to provide other services, including prepaid legal services plans.”
17. American Bar Association (ABA), Center for Professional Responsibility (2013).
18. ABA Comm. on Prof. Ethics & Grievances, Formal Op. 477R (2017), www.americanbar.org/content/dam/aba/administrative/law_national_security/FO%20477%20REVISED%2005%2022%202017.authcheckdam.pdf.
19. Transcript, www.nbcnews.com/meet-the-press/meet-press-transcript-march-23-2014-n59966.
20. Tex. Disciplinary R. Prof. Conduct (1989), reprinted in Tex. Gov’t Code Ann., tit. 2, subtit. G, app. (Vernon Supp. 1995) (State Bar Rules art. X, § 9).
21. See ABA Comm. on Ethics and Prof’l Responsibility, Formal Op. 11-459 (2011).
22. The game of Go (similar to a game of chess) has more potential moves than there are atoms in the universe. Google’s AlphaGo taught itself the game by playing against itself. It beat the individual who was regarded as the best in the world. And, it continues to improve itself. See www.nytimes.com/2017/05/23/business/google-deepmind-alphago-go-champion-defeat.html.
23. www.case-crunch.com.
24. These were experienced lawyers, but not experienced in the field of insurance misselling. Presumably, an expert in this area would have fared better.
25. It may be that there was a judge-made change in the law for valuation of discount partnerships rather than a statutory or regulatory one. Estate of Powell v. Commissioner, 148 T.C. 392 (2017). While it’s at least arguable that the result of the case was predictable from the abusive facts, the reasoning of the Tax Court advanced. See Mitchell M. Gans and Jonathan G. Blattmachr, “Family Limited Partnerships and Section 2036: Not Such a Good Fit” (2017), https://scholarlycommons.law.hofstra.edu/faculty_scholarship/1055.
26. Jeanne Meister, “Future Of Work: Three Ways To Prepare For The Impact Of Intelligent Technologies In Your Workplace,” Forbes (July 2016).
27. Ibid.
28. Michael Chui, James Manyika and Mehdi Miremadi, “Where Machines Could Replace Humans—And Where They Can’t (Yet),” McKinsey Quarterly (July 2016), www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet.
29. David Meyer, “Robots May Steal As Many As 800 Million Jobs in the Next 13 Years,” Fortune (Nov. 29, 2017), http://fortune.com/2017/11/29/robots-automation-replace-jobs-mckinsey-report-800-million/.
30. Another development that may have a profound effect on all disciplines that involve face-to-face communication is the advent of holograms, which may diminish not only business travel but also office space. See www.youtube.com/watch?v=ywsJc1oNuWg.
31. “Japan could face shortage of 270,000 nursing staff by 2025, ministry warns,” Japan Times (Oct. 22, 2019).
32. Daniel Hurst, “Japan Lays Groundwork for Boom in Robot Carers,” The Guardian (Feb. 5, 2018), www.theguardian.com/world/2018/feb/06/japan-robots-will-care-for-80-of-elderly-by-2020.
33. www.hansonrobotics.com.
34. Jeremy Hsu, “Why ‘Uncanny Valley’ Human Look-Alikes Put Us on Edge,” Scientific American (April 3, 2012).
35. Ibid.
36. Katie Allen, “Technology has created more jobs than it has destroyed, says 140 years of data,” The Guardian (Aug. 18, 2015), www.theguardian.com/business/2015/aug/17/technology-created-more-jobs-than-destroyed-140-years-data-census.
37. Christopher Cramer, “Unemployment and Participation in Violence, Background Paper,” World Development Report 2011 (2010).
38. See https://en.wikipedia.org/wiki/Basic_income.
39. Kaitlyn Wang, “Why Mark Zuckerberg Wants to Give You Free Cash, No Questions Asked,” Inc. (June 19, 2017).
40. Deloitte, “Developing Legal Talent: Stepping Into The Future Law Firm” (February 2016), www2.deloitte.com/content/dam/Deloitte/uk/Documents/audit/deloitte-uk-developing-legal-talent-2016.pdf.
41. Ibid.
42. Some will contend that estate planning is different from other areas of practice because of emotional judgments (such as which child to name as the executor of one’s will). That’s so compared, perhaps, to preparing the documents for a public offering, but other areas of law, such as divorce, involve as much emotion.
43. https://en.wikipedia.org/wiki/Her_(film).
44. https://en.wikipedia.org/wiki/Ex_Machina_(film).
45. Disclosure: J4 Capital LLC is a Registered Investment Advisor. Jeff Glickman, an author of this article, is a managing member and chief scientist of J4 Capital LLC.
46. Maureen Dowd, “Elon Musk’s Billion Dollar Crusade to Stop the AI Apocalypse,” Vanity Fair (March 26, 2017), www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x.
47. American Psychiatric Association, “Diagnostic and statistical manual of mental disorders” (5th ed.) (2013), https://dsm.psychiatryonline.org/doi/book/10.1176/appi.books.9780890425596.