Content Library
Discover the latest trends and insights in the legal industry. Learn more about Generative AI, Delivering Legal Services and Strategy and Scale in a Complex World.

Content Library
This resource originated as a primary-source appendix for presentations on Generative AI we began delivering in December 2022. We are still going (700 slides and counting). We offer introductory presentations for larger audiences still getting up to speed. We partner with law firms on CLEs that address the many legal and regulatory considerations that add to complexity and impact strategy. We lead high-level conversations among informed stakeholders re real-world use cases. Feel free to reach out.
As the presentation has grown, so has the appendix, to the point where we’ve added both supplemental material (that is not in the presentation) and maintained historical materials (fallen out of the presentation given how quickly the topic moves). The resource is comprehensive in-so-far as it contains a primary source for every slide in the presentation. But we think a truly comprehensive resource that encompasses the vast and rapidly evolving subject of Generative AI would be impossible. We’re giving you a lot. But we would never pretend to be offering everything. Though if you see a glaring hole, feel free to reach out. We’re always looking to make this resource more valuable.
This is all completely free.
Yet we are consistently asked how LexFusion makes money. If you are not paying for the product, you are the product has become common sense. Fair enough.
We accelerate legal innovation. LexFusion curates promising legal innovation companies. We invest. They also pay us. We support strategy, product, sales, marketing, events, etc. For example, Macro joined LexFusion a year before their $9.3m seed round, led by Andreesen Horowitz. Similarly, Casetext joined LexFusion two years before their $650m cash acquisition by Thomson Reuters. While the primary credit always belongs to the founders and their teams (we identify and then accelerate winners), they, like our other members, will enthusiastically confirm that LexFusion played a material role in rapidly advancing their products and business.
Much of our value to our legal-innovation clients is premised on our unparalleled market listening. We frequently provide free presentations and consultations, absent any sales agenda, to law departments and law firms to foster conversations that augment our market insights. This is where the confusion sets in. Our customers (law depts/firms) are distinct from our clients (legal innovation companies). Because our customers don’t pay us, they want to know where the money comes from.
We regularly meet with ~500 law departments and ~300 law firms. LexFusion is, ultimately but not directly, compensated based on the value we derive from these interactions. For our business to be sustainable, the exchange of value must merit their scarce time over repeat interactions—hence the free content and consults. If you know us, you probably stopped reading already. If you don’t know us, we hope the depth of this free content and some testimonials from our friends are sufficient to establish are bona fides.
To the extent you are interested in an even deeper dive, Bill Henderson wrote a wonderful longform piece on our business model, which we followed up with an even longer piece on the centrality of trust to the LexFusion value proposition. We have yet to perfect our elevator pitch. But we do our best to always be transparent. Without building and maintaining trust, our business model crumbles. Our legal-innovation clients will cycle through—inherent in our model. Our sole enduring asset is our relationships with our customers. We are people centric, and live our motto “better together!”
Stay Informed with LexFusion
Explore news and updates about LexFusion.

Special Post: LexFusion offers new way to design, bundle, and buy one-to-many legal solutions (203) | Legal Evolution

Getting naked with colleagues and clients (267) | Legal Evolution
Long heading is what you see here in this feature section
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla.
Caught Our Attention
Morgan Stanley launches AI assistant for advisors, granting access to 100,000 documents. Advisors must use full sentences when interacting with it.


The blog discusses strategies to reduce hallucinations in Large Language Models (LLMs) used in user-facing products, covering measurement, product design, advanced prompt engineering, model choices, and the importance of human evaluation and red teaming in testing LLMs. It acknowledges that while complete elimination of hallucinations may require model redesign, various strategies can significantly mitigate their impact for practical use.


The article discusses the limitations of evaluating AI language models using human-centric tests like the bar exam and explores the challenges in understanding how these models work. Critics argue that such tests create unrealistic expectations. Researchers highlight the need for specialized evaluation methods and caution against anthropomorphizing AI. Google DeepMind's new AI-generated image watermarking tool, SynthID, is also introduced, aiming to identify AI-generated content. Additionally, the article mentions China's opening up of access to its ChatGPT-like model, Ernie Bot, and advancements in brain implants and digital avatars for stroke survivors. It concludes with insights into the AI porn marketplace and labor conditions in AI data annotation.


Microsoft AI researchers accidentally exposed 38TB of private data, including employee workstations' backups, via misconfigured Azure SAS tokens. This highlights the security risks associated with AI data handling and SAS token misuse.

Legal departments must prioritize data cleanliness to ensure sustainable change. CLOs should define standards, use tools, establish a repository, and curate data effectively.


Compliance with AI standards and regulatory frameworks is crucial. Privacy impact assessments (PIAs) should be conducted systematically throughout the AI lifecycle. Consider privacy risks even with anonymized data and address issues like re-identification and data justice. Adhere to standards, best practices, and codes of conduct, and provide documentation for assessments if you're a developer or vendor. Be transparent about AI's impact on users, ensure explainability, and develop workflows for data use explanations and user challenges.

Financial institutions must assess and manage risks associated with generative AI while embracing its potential. Concerns about misuse, workforce disruption, financial implications, and sustainability persist. Governments are considering regulations, but until consensus is reached, firms must evaluate risks contextually. They should set up evaluation and mitigation strategies, and plan for responsible AI activation. Key questions include how AI aligns with corporate goals, resource requirements, and training needs. A phased approach is recommended, focusing on AI governance, oversight frameworks, and operationalization to realize generative AI benefits.


Mustafa Suleyman, founder of Inflection, discusses the future of AI, emphasizing the shift towards interactive AI that can carry out tasks and the need for robust regulation. He believes technology can be controlled, citing past examples of successful regulation.


In a recent opinion piece, Josh Tyrangiel discusses the dual perspectives on AI, highlighting concerns and potential benefits. He emphasizes the need for thoughtful preparation and a balanced outlook.


Helsing, a defense AI startup, secures a record-breaking €209M ($223M) Series B funding round led by General Catalyst, with Saab joining as a strategic investor, aiming to become Europe's largest AI company and defense tech unicorn. Spotify founder Daniel Ek backs it, deepening their partnership.


AI outperformed humans in a creativity test, sparking debate on whether AI truly exhibits human-like creativity. Researchers used chatbots and humans to generate creative uses for objects but noted AI's limitations in original thought. The study highlights the evolving capabilities of technology, but questions remain about the uniqueness of human creativity.


Generative AI's rapid growth elevates AI to CEO agendas, but new risks arise. Only 32% focus on preventing inaccuracy, and 38% manage cybersecurity risks. Companies vary in risk management approaches, from avoidance to proactive strategies. Leading firms integrate risk controls into their core operations and apply agile methods. McKinsey publishes 10 Responsible AI Principles to guide clients in responsible AI implementation. Expectations of global standards and regulations are on the horizon, emphasizing the importance of AI governance and trust. Society and organizations increasingly value AI's trustworthiness. AI may be used to counteract its own negative consequences in the future.

AI-driven drug development accelerates, matching patients with precise treatments and designing new drugs, promising faster, cheaper pharmaceuticals. AI's potential, while transformative, faces challenges in clinical trials and human validation.


EY announces EY.ai, a platform combining AI and human expertise for business transformation, backed by a $1.4b investment. It offers AI-powered solutions, upskilling for EY staff, and collaborations with tech leaders. EY aims to empower clients with responsible AI adoption.

Google's philanthropic arm is investing $20 million in the Digital Futures Project and a fund to promote responsible AI development. The project aims to address societal questions related to AI's impact on global security, the labor market, and government use for economic growth. Academic and nonprofit institutions worldwide will receive support from the fund, including organizations like the Aspen Institute and MIT Work of the Future. The initiative recognizes the need for responsible AI development amid concerns about algorithmic bias, misinformation, and potential risks. Collaboration and engagement from various stakeholders are essential to achieving responsible AI development, according to Google.org's Brigitte Hoyer Gosselink.


ClaimClam, AI-based third-party class action filer, sparks controversy. Accused of hindering settlements, it submits claims on behalf of class members. Critics question its value and accuracy.


In Finnish prisons, inmates earn money by training AI models for Metroc, a startup. This prison labor initiative aims to prepare inmates for the digital workforce, but it raises concerns about exploiting cheap labor for AI development. It provides Finnish-speaking data labelers, a scarce resource, and has garnered support in Finland. However, critics argue that it's part of a broader trend of exploiting vulnerable groups for AI labor and question its impact on inmates' post-prison employment prospects.

In 2023, the global AI industry consists of roughly 58,000 companies worldwide, with the United States housing approximately 14,700 of them, a figure that has more than doubled since 2017. Around 115 million companies globally are currently using AI, which accounts for 35% of all companies, while an additional 42% are exploring AI adoption. China leads in AI deployment with a 58% rate, followed by India at 57%, with the United States at a lower 25%. The US is recognized for its top AI capacity, excelling in talent, infrastructure, research, development, and commercial viability. AI funding has grown significantly, reaching $93.5 billion in 2021, and global AI revenue is estimated to double from $95.6 billion in 2021 to approximately $207.9 billion in 2023, with expectations of surpassing $1.8 trillion by 2030. Baidu is the leading patent owner with 13,993 patent families, showcasing a dynamic and expanding AI landscape.

The article asserts that the role of modern CEOs is fundamentally flawed, with many CEOs earning substantial salaries despite not making meaningful contributions to their organizations. The author proposes two potential solutions: either hold CEOs accountable for measurable value creation or consider replacing them with artificial intelligence (AI). The article criticizes the vague and unaccountable nature of many CEOs' responsibilities, highlighting their tendency to spend significant time in meetings and making decisions without clear evidence of value-added. Specific examples of controversial CEO decisions are cited, further underscoring the argument for redefining the role of CEOs or exploring AI as a potential replacement for certain aspects of their functions.


New research suggests that making simple adjustments to cloud service settings can significantly reduce the energy consumption and emissions generated by training deep-learning algorithms, which are known to require substantial amounts of energy. Researchers have been collaborating with companies like Microsoft to create tools that measure electricity usage for machine-learning programs running on cloud platforms. By adjusting factors such as server locations and timing, researchers can reduce emissions, with some reductions reaching up to 80% for smaller machine-learning models. However, the adoption of such practices on a larger scale may require policy incentives and automatic optimization by cloud providers.


Meta, formerly known as Facebook, is reportedly working on a secretive and powerful new AI model to compete with OpenAI's GPT-4. This new AI is designed to assist other companies in creating advanced text, analysis, and other outputs. While Meta recently released its LLAMA-2 language model, it aims to make this new model "several times more powerful" than LLAMA-2, signaling the company's commitment to catching up with its competitors. The project is led by a Meta team formed by Mark Zuckerberg, and they plan to start training the AI in early 2024. OpenAI, Google, and other tech giants are also competing in the AI race, with Google's Gemini AI generating significant anticipation.

A new federal rule requiring public companies to report cybersecurity incidents within four days is likely to lead to more consistent and earlier reporting of data breaches. However, it may also result in increased shareholder lawsuits and stricter underwriting standards by insurers for such claims. The rule by the Securities and Exchange Commission (SEC) mandates that a public company must file a Form 8-K within four business days from the time it determines a cybersecurity event is material. This rule, which became effective recently, has raised concerns that companies might have to disclose incidents before fully understanding the situation, potentially leading to lawsuits claiming misleading disclosures.


Google is reportedly developing a generative AI tool called Gemini, which is said to be five times more potent than the most advanced GPT-4 models on the market. SemiAnalysis, a semiconductor research company, suggests that by the end of 2024, Gemini could be as much as 20 times more powerful than ChatGPT. However, there are speculations that Google may not make Gemini publicly available due to concerns about its potential impact on creativity or its existing business model. Despite being behind OpenAI for several years, Google has significantly increased its AI investments, leveraging its financial resources to compete with leading AI labs.


OpenAI is reportedly on track to generate over $1 billion in annual revenue, much faster than initially projected. The company's revenue has surged, with calculations suggesting it is generating over $80 million per month, compared to $28 million for all of the previous year. OpenAI generates revenue by licensing its AI technology to corporate clients and offering individual subscriptions, with millions of subscribers paying $20 per month as of March 2023. Microsoft and other major clients use OpenAI technology in their products and services. This rapid revenue growth may impact the company's valuation, which stood at $27 billion to $29 billion after a $300 million funding round earlier in the year.


Tech giants operating in Europe could face fines of up to 6% of their revenue if they fail to combat hate speech and online disinformation under the new EU law known as the Digital Services Act (DSA). The DSA applies to 19 search engines and online platforms, primarily owned by US tech giants such as Amazon, Apple, Facebook, and Google, as well as a few others including Alibaba AliExpress, Booking.com, and Zalando. The law introduces a transparency and accountability framework for online platforms and search engines, bans targeted advertising aimed at children and based on sensitive personal characteristics, and allows users to challenge content moderation decisions. Despite the new rules, tech giants are unlikely to face immediate fines as regulators are expected to give the industry a "honeymoon period" to adapt.


A class action settlement related to the Cambridge Analytica scandal involving Facebook and its parent company, Meta Platforms Inc., has seen a record number of claims, with 17,762,467 Facebook users submitting claims. The settlement is worth $725 million, and during a hearing on final approval for the deal, U.S. District Judge Vince Chhabria expressed amazement at the number of claims, calling it the most ever seen in a class action. The hearing also included objections from lawyers representing some class members, who raised concerns about the settlement's total value, the exclusion of corporate entities, and whether individuals with more Facebook "friends" should receive larger compensation. Plaintiffs' lawyers are seeking $180.4 million in attorney fees for their work on the case.


Artificial intelligence (AI) is rapidly advancing, with significant investment in generative AI startups and increasing adoption across various industries. For CFOs and finance leaders, embracing AI-driven technologies is crucial to keep pace with the fast-changing business environment. Manual data gathering and analysis processes can be time-consuming and hinder agility. AI can streamline tasks such as scenario planning, enabling finance teams to analyze data, evaluate various scenarios, and develop plans more efficiently. AI can also surface anomalies in datasets, assist in financial forecasting, and improve data analysis. However, successful AI adoption requires a measured, methodical approach and a willingness to embrace technology to stay competitive.


The Federal Trade Commission (FTC) has invoked the rarely-used Clayton Act Section 8 to disrupt a $5.2 billion energy industry merger, marking the first time in 40 years it has been used. This move raises concerns about the widespread practice of interlocking directorates, where individuals serve as directors for competing companies, potentially complicating fiduciary duties and sharing confidential information. The FTC's application of Section 8 may have broader implications for mergers and acquisitions, including in the private equity sector, where such arrangements are common. Additionally, the tech industry may also face increased scrutiny as their businesses expand and evolve, potentially leading to inadvertent interlocks.


Generative AI is making significant inroads into the legal industry, offering various advantages such as creativity, flexibility, efficiency, and accuracy. It has use cases in automating legal tasks like document drafting, compliance monitoring, contract reviewing, error detection, creating diligence reports, intellectual property management, easy access to information, legal chatbots, and legal research. However, it raises ethical concerns regarding automated legal decision-making, potential bias in results, limitations in human interaction, and data security. Future trends include integration with blockchain for document management and improved transparency through Explainable AI, with collaboration between human lawyers and AI tools expected to increase. Generative AI has the potential to transform legal practices, enhancing efficiency and accuracy.


AI chatbots powered by large language models pose significant security risks due to their vulnerability to manipulation. These risks include "jailbreaking" through prompt injections, enabling malicious content generation, assisting in scamming and phishing attacks via indirect prompt injections, and susceptibility to data poisoning in training datasets. Despite awareness of these problems, there are currently no comprehensive fixes, leading to concerns about privacy and security in the growing use of AI language models in various tech products. Researchers suggest the need for more proactive research and security measures to address these issues effectively.


In this article, Ilona Logvinova, Associate General Counsel and Head of Innovation for McKinsey Legal, discusses the adoption of generative AI and legal tech in the legal field. She suggests that organizations should take a thoughtful, intentional, and deliberate approach to adopting legal tech. Logvinova outlines a strategic approach to identify the legal tech an organization needs, evaluate data readiness, and consider cloud support. She also emphasizes the importance of choosing the right legal tech vendor or internal-build approach and highlights the need for tailored training and user-friendly adoption. Logvinova's insights are based on McKinsey Legal's own innovation transformation journey and aim to guide legal professionals in leveraging legal tech for enhanced efficiency and effectiveness in their work.

A survey report on the use of Large Language Models (LLMs) highlights key findings regarding the adoption of LLM technology. Over 150 professionals across various roles participated in the survey, which covered motivations for LLM investments, challenges in deploying LLMs, the rise of open-source LLMs, and methods for customization through fine-tuning. The report indicates that organizations are increasingly experimenting with or implementing LLMs, with a significant interest in open-source LLMs to retain data ownership. Challenges include sharing proprietary data, customization complexities, high training costs, hallucinations in responses, and latency optimization. Predibase is suggested as a platform to help teams customize and deploy open-source LLMs while retaining data control in their own cloud environments.


The Internal Revenue Service (IRS) is utilizing artificial intelligence to investigate tax evasion among multibillion-dollar partnerships, including hedge funds, private equity groups, real estate investors, and law firms. The IRS is using a portion of the $80 billion allocated through the Inflation Reduction Act to target wealthier Americans and address complex tax evasion cases. The use of artificial intelligence helps identify patterns and trends in tax evasion, enabling the IRS to conduct major audits on larger partnerships. Critics argue that the IRS cannot be trusted with taxpayer data and that artificial intelligence is a way to distance itself from accusations of bias or inequitable enforcement practices.


Generative AI has transformative potential in legal functions but requires ethical and regulatory scrutiny. Collaboration among legal professionals, technologists, and policymakers is vital to balance benefits and ethics. Ethical concerns include bias, confidentiality, and output accuracy. Legal institutions must actively address these implications to ensure transparency, fairness, and accountability. Responsible integration of Generative AI within the legal domain can enhance efficiency while upholding justice and individual rights.


AI detection tools, including those used to identify plagiarism and AI-generated writing, are facing criticism for falsely accusing international students of cheating. A study by Stanford computer scientists found that AI detectors are more likely to flag writing by non-native English speakers as AI-generated, with incorrect assessments occurring 61% of the time. The detectors tend to interpret simpler language and predictable word choices, which are common in non-native English writing, as AI-generated. This bias poses significant challenges for international students who may be wrongly accused of academic misconduct, affecting their grades, scholarships, and even visa status. Some educators are questioning the reliability of AI detectors and whether they should be used at all, while others are exploring ways to disable AI writing indicators to prevent the potential harm caused by false positives.


Researchers from AI startup Hugging Face and Leipzig University have developed online tools that allow users to explore biases in popular AI image-generating models, including DALL-E 2 and Stable Diffusion. These tools were created after generating 96,000 images of people with various ethnicities, genders, and professions using the AI models. The tools let users analyze biases in AI-generated images, such as the tendency to produce images of white males in positions of authority. The goal is to increase transparency around AI biases and encourage developers to address and reduce them in their models, as these biases can perpetuate stereotypes and reinforce harmful associations.


OpenAI has introduced fine-tuning for its GPT-3.5 Turbo model, with plans to extend this feature to GPT-4 in the fall. Fine-tuning enables developers to customize the model's behavior for specific use cases, improving its performance and responsiveness. This customization includes enhanced steerability, reliable output formatting, and custom tone adaptation. Fine-tuning also allows businesses to shorten prompts while maintaining performance and can handle up to 4k tokens, effectively speeding up API calls and reducing costs. OpenAI is committed to safety and employs moderation systems to ensure that fine-tuned models comply with safety standards. Pricing for fine-tuning includes both training and usage costs. Additionally, OpenAI has released updated base models, babbage-002 and davinci-002, which can also be fine-tuned. The original GPT-3 base models will be turned off on January 4th, 2024.


Falcon 180B, an open-source large language model (LLM) with an impressive 180 billion parameters, has been released, making waves in the artificial intelligence community. This LLM outpaces Meta's LLaMA 2, which has 70 billion parameters, and even approaches the performance of commercial models like Google's PaLM-2 on various benchmarks. Falcon 180B is particularly strong in natural language processing (NLP) tasks, including benchmarks like HellaSwag, LAMBADA, WebQuestions, and Winogrande. It's positioned between GPT 3.5 and GPT4 in terms of performance and offers a powerful alternative to other open-source and commercial models. The release of Falcon 180B showcases the continued advancements in large AI models and highlights the potential for further enhancements through community contributions.


This article delves into recent developments in the world of internet search, highlighting Google's efforts to enhance its search experience with generative AI and the expansion of its capabilities, including the incorporation of images and videos. It also discusses OpenAI's introduction of GPTBot, a web crawler that collects internet data for improving AI models, and the potential misalignment of interests between content owners and AI developers in terms of data sharing. The article further explores the challenges and vulnerabilities associated with AI models in search, particularly in terms of adversarial attacks, and envisions a future where autonomous agents redefine the search experience by conducting tasks on behalf of users.

The article discusses the increasing role of automation and artificial intelligence (AI), particularly generative AI (Gen AI), in revolutionizing clinical research across its various phases. AI is aiding in study planning by designing protocols, enhancing subject recruitment, and enabling predictive modeling. During study conduct, AI-driven platforms manage data efficiently, generate synthetic datasets, predict adverse events, and simulate personalized treatment pathways. Study completion benefits from AI's data analysis, interpretation, and archival capabilities. AI also assists in general tasks like communication, regulatory compliance, document production, and staff training. While AI and automation play pivotal roles, they complement human intuition and judgment, ultimately fostering more efficient and effective clinical research.


Ernie Bot, developed by Baidu, China's top AI chatbot, has gained attention since its public release. While it holds controversial views on topics like science and politics, Ernie's popularity has surged with one million downloads within 19 hours. Baidu's AI investment is paying off, positioning the company as a major player. This shift is driven by years of investment in AI, including chip design, deep learning frameworks, and proprietary AI models. Ernie's potential impact on Baidu includes increased traffic and advertising revenue. However, the autonomous taxi venture's profitability remains uncertain, and chip supply challenges persist due to US export controls. Chinese AI regulation adds complexity, requiring companies to report illegal content and adhere to "core socialist values." Compliance costs may grow. Government actions can significantly impact the AI industry in China, making Baidu's AI endeavors promising yet challenging.


The development of advanced AI models, such as GPT-4, consumes significant amounts of water and energy. Tech giants like Microsoft, OpenAI, and Google are investing in AI research and generative AI tools, leading to increased water consumption in areas hosting their data centers. In particular, water is used for cooling the supercomputers that train AI models. Microsoft's global water consumption reportedly increased by 34% from 2021 to 2022, largely due to AI research, while Google also reported a 20% growth in water use, with noticeable increases in states like Oregon and Iowa. Researchers estimate that AI models like ChatGPT can use up to 500 milliliters of water per interaction. Companies are now working to improve efficiency and minimize the environmental impact of AI research.

Law firms are recognizing the need for AI training as they explore the use of AI chatbots in their operations. While some firms have chosen to limit or block the use of open-source generative AI tools, others are embracing these technologies with caution. Ensuring that lawyers and staff are well-trained in AI is crucial to avoid pitfalls like exposing sensitive client data or producing inaccurate information. Firms are implementing comprehensive internal training programs, partnering with tech training providers, and developing policies to mitigate the risks associated with generative AI tools. Law schools are also addressing the demand for AI courses to prepare future lawyers for the evolving legal tech landscape.


The article emphasizes the importance of legal professionals adopting AI best practices, such as transparency and education, even before formal regulations are in place. It advises legal practitioners to inform clients, colleagues, and the court when using AI, highlighting the technology's trustworthiness and commitment to staying updated. Additionally, understanding AI implications and distinguishing between general-use and professional-grade AI tools is essential for selecting the right solution that aligns with legal standards and privacy concerns.

The integration of AI, particularly ChatGPT, into higher education should not be dismissed, according to Stephanie Marshall, Vice-Principal at Queen Mary University of London. While acknowledging the concerns about AI potentially undermining traditional teaching methods, Marshall emphasizes the potential benefits, including AI tutors, virtual assistants, and improved student engagement. She highlights the need for universities to address the complex challenges of AI adoption, ensuring that students acquire the technical literacy and interdisciplinary skills necessary to engage intelligently with rapidly evolving technology. Marshall emphasizes the importance of a balanced approach that harnesses AI's potential without compromising critical thinking and creativity in education.

The rise of AI in workplace monitoring and employment decisions is leading to legislative efforts to regulate its use. Automated AI tools are being employed by companies like Amazon to assess productivity and even hire or fire workers based on AI-derived criteria. Concerns over privacy and bias have prompted lawmakers at the federal and state levels to introduce measures to curb AI in the workplace. New York City's law mandating bias audits of automated tools is seen as a model, and other states are considering similar regulations. While some see AI as a potential solution for fairer HR practices, others are wary of its impact on privacy and worker rights.


Minimizing data risks for generative AI and LLMs in the enterprise involves hosting LLMs within the organization's security and governance boundaries, customizing models to align with specific business needs and trusted internal data, utilizing natural language processing to handle unstructured data, and proceeding cautiously while exploring the opportunities of AI technology. By working within their existing security perimeters and optimizing LLMs with internal data, businesses can strike a balance between innovation and data security, reducing the risk of privacy breaches and biases while leveraging the benefits of generative AI.


Law firms are developing their own generative AI chatbots, moving away from ChatGPT due to data security concerns. Firms like Dentons, Troutman Pepper, Davis Wright Tremaine, and others have introduced proprietary chatbots. Microsoft Azure OpenAI Service supports this trend, but firms also need to adapt policies, culture, and training. These chatbots aim to enhance efficiency, automate tasks, and improve knowledge management while requiring prompt engineering skills. Law firms expect to see more of these chatbots in the future, offering a broader and more controlled approach to experimenting with generative AI.


Microsoft pledges to cover legal costs for Copilot AI users facing copyright infringement lawsuits, supporting customers amid concerns from copyright holders. This initiative aims to encourage the use of generative AI services while addressing legal uncertainties in copyright law.


A report by AuditBoard highlights that two-thirds of organizations lack implemented environmental, social, and governance (ESG) controls, and 60% do not conduct internal ESG audits. This unpreparedness poses risks of inaccurate data reporting and non-compliance with future regulations, including forthcoming ESG rules from the SEC. Some organizations are more advanced in ESG readiness, with over 75% collecting evidence for ESG metrics, and 26% planning internal ESG audits. However, only one-third plan to comply with the proposed SEC ESG rules. Resourcing is a concern, as 46% have no dedicated ESG budget, and even among those with a budget, only 9% allocate it for ESG program management technology. ESG integration is not common in enterprise risk management, with 40% not including ESG risks in their strategy.


A survey reveals 73% of business leaders view AI guidelines as crucial, but only 6% have implemented them. There's a gap between recognizing the importance of ethical AI and taking action. Concerns include AI output accuracy, transparency, and data security. Technology leaders and professionals are urged to promote responsible AI by incorporating human-centered design, diverse user engagement, fairness and inclusion goals, bias detection, and rigorous testing practices. Businesses aim to adopt AI for various functions, with customer service and marketing being top use cases. The report highlights a need for stronger leadership in ensuring responsible AI practices.


Law schools are navigating the integration of artificial intelligence (AI) into legal education, with a focus on teaching students about AI ethics and responsible use. Professors believe that banning AI tools entirely is not the solution, emphasizing the importance of students becoming fluent in AI's capabilities and limitations. As some law firms begin experimenting with AI, educators aim to strike a balance between incorporating AI into classrooms and maintaining academic integrity. Concerns persist regarding the accuracy of AI-generated information and potential ethical dilemmas in the legal profession. Meanwhile, some federal judges are cautioning against over-reliance on AI in court proceedings and are demanding disclosure when AI is used. Despite these challenges, the legal industry is gradually adapting to the evolving role of AI in legal practice and education.


McKinsey and Salesforce are collaborating to leverage generative AI for businesses. They'll combine Salesforce CRM tech with McKinsey's AI and data models to enhance customer experiences, sales, marketing, and reduce resolution times. This partnership aims to harness the potential of generative AI to boost productivity, increase marketing spending, and improve sales. They offer end-to-end solutions, from strategy to AI implementation at scale, helping companies keep pace with market innovation.

EU AI Act faces criticism for potential innovation constraints. Business leaders express concerns over vague categorization and regulatory burdens, while others see it as a necessary step to ensure AI safety. OpenAI CEO Sam Altman supports government involvement to prevent AI mishaps. Critics fear EU could lose AI innovation edge.


Google will require clear disclaimers on AI-generated political ads from November 2023. The policy mandates disclosure of AI involvement in text, audio, or video content, with exceptions for inconsequential AI use like image editing. However, the policy's wording is somewhat vague, and it doesn't apply to non-paid political content. This move comes as AI plays a growing role in political messaging, raising concerns about misinformation and trust in media as the 2024 U.S. election cycle approaches.


AI lab Imbue secures $200 million in Series B funding, with a valuation exceeding $1 billion. Key investors include Astera Institute, Nvidia, Cruise CEO Kyle Vogt, and Notion co-founder Simon Last. Imbue aims to develop AI systems capable of robust reasoning and coding, emphasizing the importance of reasoning for effective AI agents. The company trains large models with over 100 billion parameters using Nvidia's compute cluster to optimize reasoning abilities. Imbue plans to provide AI tools and models to enhance general-purpose AI and establish a platform for custom model creation, ultimately making AI more accessible and productive.


Microsoft is partnering with Paige to create the world's largest image-based AI model for cancer detection. Paige, known for AI-powered pathology solutions, aims to assist pathologists grappling with staffing shortages and heavy caseloads. The AI model, trained on an extensive dataset, identifies common and challenging-to-diagnose cancers. Paige leverages Microsoft's cloud infrastructure to build an advanced AI model that's significantly larger than previous models. The collaboration seeks to democratize healthcare access and accelerate cancer diagnoses.


Gannett, the owner of USA Today and local newspapers, pledged responsible AI use with human oversight, but it was revealed they've been quietly publishing poor-quality AI-generated sports articles without human involvement, breaking their promises and ethics policy.


Attorneys general from all 50 US states urge Congress to address AI-generated child sexual abuse material (CSAM) by forming an expert commission and expanding laws. They emphasize the urgency of protecting children as AI-generated CSAM and deepfake tech pose growing threats.

Microsoft introduces Copilot Copyright Commitment to address concerns about IP infringement with AI-generated content. They'll defend commercial customers facing copyright claims related to Copilot usage, provided customers use safety features. Microsoft aims to protect authors' rights while advancing AI technology.


Global data protection authorities caution social media firms on data-scraping due to rising AI training demand; companies must balance interests with regulatory compliance.


The Museu Picasso in Barcelona houses early works by Pablo Picasso, showcasing his classical painting skills before he developed his distinctive style. These paintings, completed when he was just fifteen, highlight his technical prowess as a traditional painter, including notable pieces like "Science and Charity" and "First Communion." Picasso's transition from these early works to his avant-garde style underscores the importance of mastering fundamentals before breaking new ground—a principle applicable to machine learning. In the realm of machine learning, understanding foundational concepts, such as embeddings, is vital before exploring advanced developments. Without this knowledge, machine learning models can remain mysterious black boxes, hindering progress. The document "What are Embeddings" aims to demystify these concepts, making them accessible to a generalist audience, including engineers, product managers, students, and anyone interested in mastering machine learning basics. It encourages building upon strong foundational knowledge to create innovative solutions, akin to Picasso's artistic journey.


Generative AI and foundation models are revolutionizing the economics of artificial intelligence, disrupting traditional models that struggled to provide high margins and scalable business models for startups. Traditional AI often faced challenges due to the need for high accuracy in rare situations, reliance on human intervention for correctness, competition with cost-effective human labor, and the lack of new emergent user behaviors. In contrast, generative AI and large foundation models are versatile, applicable to a wide range of markets, and capable of performing high-value tasks at a fraction of the cost and time compared to humans. This economic shift is poised to transform various industries and create entirely new behaviors and markets, akin to the impact of the microchip and the internet, making it an opportune time for startups in the AI space.


This text introduces prompt engineering, a burgeoning field focused on optimizing and developing prompts for large language models (LLMs) to enhance their performance across various applications and research areas. It emphasizes the importance of prompt engineering skills in understanding the capabilities and limitations of LLMs and improving their effectiveness in tasks such as question answering and arithmetic reasoning. The guide also mentions a partnership with Maven to offer a course on Prompt Engineering for LLMs, provides updates, links to guides, lectures, and resources, and highlights its growing popularity and impact in the AI community.

This blog post outlines how Replit trains its Large Language Models (LLMs), such as those used for code generation, from raw data to deployment in a production environment. Replit focuses on training custom models to tailor them to specific needs, reduce dependency on a few AI providers, and achieve cost efficiency. The process involves robust data pipelines using Hugging Face as a primary data source, data processing with Databricks, tokenization, vocabulary training, model training with MosaicML, evaluation using HumanEval, and deployment to production with optimizations for low latency. Continuous feedback and iteration are emphasized to improve models, and Replit plans to expand its platform for further improvements and innovations in the field of LLMs.


Enterprises eager to harness the potential of generative AI and large language models (LLMs) must first address data quality issues to avoid inaccurate or misleading results. Bruno Aziza, formerly of Google Cloud and now with Alphabet's CapitalG, outlined three stages of data maturity that organizations should undergo to effectively leverage LLMs. This includes creating a robust data ocean, developing a data mesh to enable distributed data innovation while ensuring data quality, and building intelligent data-rich applications. Poor data practices can expose issues when using generative AI, emphasizing the importance of precise, secure, and reliable data sources.


The article discusses the impact of A.I. chatbots on the college admissions process. Institutions like the Georgia Institute of Technology are experimenting with these chatbots to assist students in generating college application materials. However, concerns arise about the potential for A.I. to lead to generic essays and a lack of authenticity. Educators worry that students might miss out on crucial critical thinking and writing skills if they rely on A.I. tools too heavily. Some colleges have established guidelines on the use of A.I. tools in admissions, but there is a lack of consistency in policies across institutions. While some students appreciate the potential for assistance, others are concerned about maintaining the uniqueness and personal nature of their application essays in the face of A.I. technology.

OpenAI has released a guide for educators on using ChatGPT in the classroom, providing suggested prompts and information about ChatGPT's functionality and limitations, AI detectors' efficacy, and bias. Educators are adopting ChatGPT to enhance student learning, with applications including role-playing challenging conversations, crafting quizzes, tests, and lesson plans, aiding non-English speakers, and teaching critical thinking about AI. Example prompts are provided, allowing educators to customize interactions with ChatGPT to meet their teaching needs and engage students effectively, emphasizing that they are the experts responsible for reviewing and adapting AI-generated content to suit their classrooms.


A recent survey of educators reveals that while they recognize the importance of teaching students about artificial intelligence (AI) and its potential pitfalls, only 1 in 10 feel knowledgeable enough about AI basics to teach it or incorporate it into their work. This knowledge gap is significant, given the swift changes AI is bringing to education. Furthermore, 87% of educators surveyed reported not receiving any professional development on how to integrate AI into K-12 education. The rise of user-friendly AI tools like ChatGPT 3 and DALL-EE has transformed educational possibilities but also raised concerns about ethical challenges related to data privacy, disinformation, and bias. AI is poised to reshape various industries, including education, impacting future job requirements for students.

According to a report by Europol, there's a growing concern that by 2026, up to 90% of online content could be artificially generated through AI, including deepfake technology, which raises significant challenges regarding disinformation and the erosion of trust in digital media. While synthetic media initially served benign purposes like gaming and service improvement, the report underscores the potential for malicious use and distortion of reality. This impending AI-driven content shift also prompts questions about its impact on artists, writers, and the broader dissemination and consumption of information in a rapidly evolving digital landscape, highlighting the need for vigilance and cautious navigation in this changing media environment.

Harvard Medical School's Department of Biomedical Informatics (DBMI) is launching an AI in Medicine Ph.D. track, beginning in 2024. The program aims to train computational students to use AI and biomedical data to advance medicine, focusing on quality and equity in healthcare. It emphasizes interdisciplinary collaboration, offering clinical coursework and hospital rotations alongside medical students. Core courses cover various AI modalities, and a co-mentorship model pairs students with both technical and clinical mentors. A Master of Medical Sciences in Biomedical Informatics program will also start in 2024, offering research training in biomedical informatics. Applications open in September 2023.

In this article, the complexities of artificial intelligence (AI) ethics and responsible development are thoroughly explored. Part 1 delves into the fundamental ethical dilemmas surrounding AI, covering topics such as superintelligence governance, privacy and data protection, transparency, accountability, and AI's impact on employment and inequality. Part 2 continues by examining the challenges of mitigating bias and promoting fairness in AI algorithms, the evolving landscape of AI regulation, the balance between AI autonomy and human accountability, AI's role in warfare and surveillance, and the potential consequences of overreliance on AI. The articles emphasize the need for proactive collaboration among technologists, ethicists, policymakers, and society to ensure that AI advances responsibly and aligns with human values, ultimately shaping a future where AI benefits humanity without compromising its essence and ethics.

X (formerly Twitter) has updated its privacy policy to inform users that it will collect biometric data, job history, and education history. Additionally, the company plans to use the data it collects, along with publicly available information, to train its machine learning and artificial intelligence models. This change in policy has led to speculation that Elon Musk, who owns X, may intend to use the data collected by the platform for his other AI-focused venture, xAI. Musk has previously stated that xAI would use public tweets to train its AI models, and he has accused other tech giants of leveraging Twitter data for AI model training.


Law firms are increasingly hiring experts in artificial intelligence (AI), data science, and software engineering in response to client demands for more efficient and cost-effective legal services. This trend, which has gained momentum over the past six months, is driven by the potential of generative AI and machine learning to enhance various legal practice areas. AI technologies like ChatGPT are particularly suited to time-consuming legal tasks, such as document analysis and argument prediction. Some firms are piloting AI "assistants," and law schools are adapting their curricula to include modules on legal tech tools. While AI won't replace lawyers entirely, it is creating new roles in the legal profession to meet client expectations for greater transparency and cost savings.

Generative artificial intelligence (AI) applications are making a significant impact as they move beyond general-purpose uses and focus on fit-for-purpose solutions. Recent research demonstrated the ability to program an AI algorithm to convert brain activity into audio speech, providing paralyzed individuals with a voice. These applications offer the potential to transform various industries, from healthcare to telecom, by providing specific and powerful use cases. For instance, the telecom sector is using generative AI to overcome language barriers and enhance communication between customers and providers. Additionally, generative AI simplifies tasks like responding to emails and customer queries, improving efficiency and customer satisfaction. As generative AI continues to evolve, it promises to reshape various sectors, providing tailored solutions to complex problems.


Medium length section heading goes here
Morbi sed imperdiet in ipsum, adipiscing elit dui lectus. Tellus id scelerisque est ultricies ultricies. Duis est sit sed leo nisl, blandit elit sagittis. Quisque tristique consequat quam sed. Nisl at scelerisque amet nulla purus habitasse.
Nunc sed faucibus bibendum feugiat sed interdum. Ipsum egestas condimentum mi massa. In tincidunt pharetra consectetur sed duis facilisis metus. Etiam egestas in nec sed et. Quis lobortis at sit dictum eget nibh tortor commodo cursus.
Odio felis sagittis, morbi feugiat tortor vitae feugiat fusce aliquet. Nam elementum urna nisi aliquet erat dolor enim. Ornare id morbi eget ipsum. Aliquam senectus neque ut id eget consectetur dictum. Donec posuere pharetra odio consequat scelerisque et, nunc tortor. Nulla adipiscing erat a erat. Condimentum lorem posuere gravida enim posuere cursus diam.
Generative AI
Our Top 10 picks to learn more about Generative AI
Budget constraints and limited resources emerged as critical barriers to successful generative AI adoption across enterprises.


This is the first in a 3-part blog post, it first appeared on The Sente Playbook. The other 2 posts are co-authored by Toby Brown and Greg Lambert and


Artificial intelligence will shake up many professions, but many AI startups are likely to fail, as tech startups did during the dot-com bubble, a key VC investor says.

AI's history includes the 'AI winter' in the mid-1970s to mid-1980s, marked by reduced interest and funding due to unmet expectations. This period shifted focus from mimicking human intelligence to specific AI applications, emphasizing realistic expectations and sustained investment. Lessons from this era remain relevant for today's AI advancements.


I'm following the suggestions in 10 reasons why lists of 10 reasons might be a winning strategy in order to get this out quickly (reason 10 will blow…


A curated list of resources we’ve relied on to get smarter about modern AI, including generative AI, LLMs, and transformer models.


Emad Mostaque, the CEO of open-source AI company Stability AI, says AI will be the "biggest bubble of all time."


Emergence. Hallucination. Transformer. Latent space. Get up to speed on AI terminology with our ever-growing glossary.


Medium length section heading goes here
Morbi sed imperdiet in ipsum, adipiscing elit dui lectus. Tellus id scelerisque est ultricies ultricies. Duis est sit sed leo nisl, blandit elit sagittis. Quisque tristique consequat quam sed. Nisl at scelerisque amet nulla purus habitasse.
Nunc sed faucibus bibendum feugiat sed interdum. Ipsum egestas condimentum mi massa. In tincidunt pharetra consectetur sed duis facilisis metus. Etiam egestas in nec sed et. Quis lobortis at sit dictum eget nibh tortor commodo cursus.
Odio felis sagittis, morbi feugiat tortor vitae feugiat fusce aliquet. Nam elementum urna nisi aliquet erat dolor enim. Ornare id morbi eget ipsum. Aliquam senectus neque ut id eget consectetur dictum. Donec posuere pharetra odio consequat scelerisque et, nunc tortor. Nulla adipiscing erat a erat. Condimentum lorem posuere gravida enim posuere cursus diam.
Strategy and Scale in a Complex World
Our Top 10 picks to learn more about Strategy and Scale in a Complex World
Innovative products and services feel magical to the user. To create that feeling, however, innovation teams must grind through lots (and lots) of work. Fortunately, we have a playbook.


Bloomberg reports that the FTC is preparing to file a far-reaching antitrust lawsuit against the tech giant, in what's likely to be a career-defining moment for controversial agency chair Lina Khan.


By hiring FTC staffers, Amazon “will get useful information about processes and personalities at the FTC, said John Lopatka, a Penn State law professor. It's not crossing any ethical boundary to say, 'This case is spearheaded by Jane Doe,' for example, 'and I think she would listen to these kinds of arguments.'


Tech analyst Joseph Teasdale said the intense regulatory scrutiny could hamstring Google as it seeks to outmaneuver rivals, leaving it very wary about wielding its full power.


After outperforming in 2019-2021, Apple Inc's (NASDAQ: AAPL) shares largely traded in line with the SP50 (~-20% YTD), Citi analyst Jim Suva noted. Suva reiterated a Buy on Apple with a $175 price target. Shares have fared better than the large-cap peers Alphabet Inc (NASDAQ: GOOG) (NASDAQ: GOOGL) Google, Meta Platforms Inc (NASDAQ: META), Netflix Inc (NASDAQ: NFLX), Amazon.com Inc (NASDAQ: AMZN). Looking ahead, there are several puts and takes for Apple's products and services, given broader mac


Wells Fargo, a relatively small player on Wall Street, racked up the most fines Tuesday, with a total of $200 million in penalties.


CEOs at Home Depot Inc, Booking Holdings Inc. and other executives found themselves clashing with investors this proxy season as companies faced an unprecedented level of pushback on ESG policies.


Change agents and opinion leaders play a crucial role in the diffusion of innovations. The post applies these concepts to the legal industry circa 2017.


Some 88% of legal and compliance leaders in the U.S. report that ESG (environmental, social and governance) is becoming a priority at their organizations, but most say they may not be prepared to deal with the risks, according to a global survey released Tuesday.

Medium length section heading goes here
Morbi sed imperdiet in ipsum, adipiscing elit dui lectus. Tellus id scelerisque est ultricies ultricies. Duis est sit sed leo nisl, blandit elit sagittis. Quisque tristique consequat quam sed. Nisl at scelerisque amet nulla purus habitasse.
Nunc sed faucibus bibendum feugiat sed interdum. Ipsum egestas condimentum mi massa. In tincidunt pharetra consectetur sed duis facilisis metus. Etiam egestas in nec sed et. Quis lobortis at sit dictum eget nibh tortor commodo cursus.
Odio felis sagittis, morbi feugiat tortor vitae feugiat fusce aliquet. Nam elementum urna nisi aliquet erat dolor enim. Ornare id morbi eget ipsum. Aliquam senectus neque ut id eget consectetur dictum. Donec posuere pharetra odio consequat scelerisque et, nunc tortor. Nulla adipiscing erat a erat. Condimentum lorem posuere gravida enim posuere cursus diam.
Delivering Legal Services
Our Top 10 picks to learn more about Delivering Legal Services
The summer of our discontents Two months ago, if you prompted Version 3 of the AI-art generator MidJourney to generate depictions of an "otter on a plane


“I think many people would rather work on a new problem than a settled problem. Here, there is a lot more opportunity to work on unsettled legal and policy questions, said Adam Kovacevich, CEO of Chamber of Progress.


Winter is coming and many legal departments will be left in the cold. Let's get a difficult conceptual issue out of the way. This is a long post that some


Why law departments solve for the local optimum at the expense of the global optimum. Why pursue the path of least resistance.


Report based on global Thomson Reuters Institute research shows current law department priorities, legal spend benchmarks, peer insights, and more.


Now in its 14th year! The annual Blickstein Group Law Department Operations Survey, published in collaboration with Deloitte, continues to provide law departments with a consistent platform to benchmark themselves and shed light on key trends.


“They’re so busy that our practitioners need to realize not a 10% improvement but a 10x improvement in productivity before they will take the time to


How should legal teams get started with data? Jae Um offers a prescription along with a #RealTalk diagnostic. It starts with clear thinking around needs.


Demand fell last year for the Am Law 100, was flat for the Am Law second hundred, and grew for midsize firms. But data and history suggest this isn't the Second Hundred eating the 100's lunch.


Cravath & Davis Polk associates are raking it in, and everyone's got an opinion. Here's some data to explain how & why the new associate pay


Medium length section heading goes here
Morbi sed imperdiet in ipsum, adipiscing elit dui lectus. Tellus id scelerisque est ultricies ultricies. Duis est sit sed leo nisl, blandit elit sagittis. Quisque tristique consequat quam sed. Nisl at scelerisque amet nulla purus habitasse.
Nunc sed faucibus bibendum feugiat sed interdum. Ipsum egestas condimentum mi massa. In tincidunt pharetra consectetur sed duis facilisis metus. Etiam egestas in nec sed et. Quis lobortis at sit dictum eget nibh tortor commodo cursus.
Odio felis sagittis, morbi feugiat tortor vitae feugiat fusce aliquet. Nam elementum urna nisi aliquet erat dolor enim. Ornare id morbi eget ipsum. Aliquam senectus neque ut id eget consectetur dictum. Donec posuere pharetra odio consequat scelerisque et, nunc tortor. Nulla adipiscing erat a erat. Condimentum lorem posuere gravida enim posuere cursus diam.
Worth Reading
Just some books we ❤️
"A New Way to Think" by Roger Martin explores innovative approaches to common business challenges, emphasizing rethinking models and strategies for better outcomes.


The Third Edition of "Designing Organizations" offers a strategic guide to creating and managing effective organizations, using the Star Model framework and incorporating modern examples and concepts.


Everett M. Rogers explains how new ideas spread through communication channels over time, focusing on innovation adoption and the impact of the Internet on diffusion processes.


Watts challenges common sense and historical examples, revealing how human behavior prediction often fails due to complex dynamics.


Nassim Nicholas Taleb's "Fooled by Randomness" challenges our understanding of luck, skill, and perception in business and life.


"Four Thousand Weeks" explores life's brevity, time management, and meaningful living through philosophical insights, offering practical alternatives to common approaches.


Learn to create impactful visualizations with Good Charts, a guide that teaches the art of effective data communication, combining research and practical insights for better understanding and persuasion.


"How Big Things Get Done" by Bent Flyvbjerg explores the factors that lead projects to succeed or fail, offering principles like understanding odds, planning, teamwork, and mastering uncertainty.


"Influence" by Dr. Robert B. Cialdini explores six principles of persuasion: reciprocation, commitment, social proof, liking, authority, and scarcity. Learn how to ethically apply these principles for effective communication.


"Mistakes Were Made (But Not by Me)" delves into self-justification, exploring how the brain avoids responsibility through self-deception.


Let’s Forge the Future Together
Interested in joining our roster of legal innovators or simply curious about the world of legal tech? Reach out to us! Our team is always eager to connect, collaborate, and contribute to the ever-evolving legal landscape.
