Our Leaders

A veteran of the legal innovation world who has held domestic and global growth-leadership roles, and an investor in and advisor to numerous legal innovation companies.

Driving innovation in legal professional services and technology for 20 years; founder of The Tampa Legal Network.

Over 18 years of experience counseling law firms and law departments on the right mix of people, process, and technology.

Over a decade of legal technology and innovation experience; creator and co-founder of two thought leadership and networking organizations.

A litigator turned in-house counsel turned legal operations consultant, leveraging legal expertise through process and technology.

Over 16 years’ experience in the legal tech/outsourcing industry, innovating in project management using Six Sigma techniques.
Generative AI
This resource originated as a primary-source appendix for presentations on Generative AI we began delivering in December 2022. We are still going (700 slides and counting). We offer introductory presentations for larger audiences still getting up to speed. We partner with law firms on CLEs that address the many legal and regulatory considerations that add complexity and shape strategy. We lead high-level conversations among informed stakeholders about real-world use cases. Feel free to reach out.
As the presentation has grown, so has the appendix, to the point where we now include both supplemental material (not in the presentation) and historical material (dropped from the presentation given how quickly the topic moves). The resource is comprehensive insofar as it contains a primary source for every slide in the presentation. But we think a truly comprehensive resource encompassing the vast and rapidly evolving subject of Generative AI would be impossible. We’re giving you a lot, but we would never pretend to be offering everything. If you see a glaring hole, feel free to reach out; we’re always looking to make this resource more valuable.
Library - Generative AI
A Pew Research Center survey found that most Americans haven't used ChatGPT, with only 18% of U.S. adults having ever used it. Younger adults and college-educated individuals are more likely to have used ChatGPT. It's been used for entertainment, learning, and work, with younger adults using it more for education and amusement. The survey also revealed that Americans think chatbots will have a major impact on jobs like software engineers and graphic designers, but fewer believe it will impact their own jobs. Views on this differ by age, education, and industry. Additionally, only 15% of employed adults who have heard of ChatGPT believe it would be extremely or very helpful for their job.


Companies are grappling with the legal and ethical implications of generative AI, such as ChatGPT, as they seek to protect data, ensure accuracy, and navigate potential legal pitfalls. With concerns about security, data privacy, employment, and intellectual property law, many organizations are establishing AI oversight teams and developing policies to govern generative AI use. While some companies have banned internal use of the technology, others are implementing safeguards and approval processes. As AI regulation is considered in various jurisdictions, organizations are turning to regulators' questions as guidance for crafting AI governance policies.


Gary Marcus, a prominent scientist and entrepreneur in the AI field, has expressed skepticism about the potential impact of generative AI, stating that it may not live up to the high expectations. He highlighted serious, unsolved problems with generative AI, such as its tendency to generate false information and its instability over time. Marcus suggested that governments should reconsider their policies and investments in generative AI, as it may not be as world-changing as anticipated. He cautioned against building the world around the assumption that generative AI will have a massive impact.


Fine-tuning alone isn't a universal solution for domain-specific model refinement (DSMR). It's crucial to consider other DSMR techniques like prompt refinement, example selection, retrieval-assisted generation, and reinforcement learning with human feedback, depending on the specific task. Educating users about a broader set of approaches beyond fine-tuning is essential for solving diverse problems effectively.
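As a hedged illustration of one alternative named above, retrieval-assisted generation, here is a minimal sketch: retrieve the most relevant snippet from a corpus, then ground the prompt in it. The corpus, query, and keyword-overlap scoring are illustrative assumptions, not any particular vendor's implementation.

```python
def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus snippet sharing the most words with the query.
    (Real systems use embeddings; keyword overlap keeps the sketch simple.)"""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context so the model answers from evidence."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

# Illustrative two-document corpus.
corpus = [
    "Fine-tuning adjusts model weights on domain examples.",
    "Retrieval-assisted generation supplies external context at query time.",
]
prompt = build_prompt("How does retrieval-assisted generation work?", corpus)
```

The resulting `prompt` string would then be sent to the model of choice; the retrieval step is what lets the answer stay grounded in the supplied documents rather than in the model's parameters alone.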


During the largest-ever public red-teaming challenge at Def Con, independent hackers like Kelsey Davis probed AI technology for bias. They sought to identify and address inaccuracies and biases in AI models, including those like ChatGPT. By conducting diverse tests, these hackers aim to make AI more equitable and inclusive, helping technology companies eliminate biases and improve their products. The event emphasized the importance of red-teaming as AI becomes more widespread, with the White House encouraging tech companies to participate. It also highlighted the broader concerns of racial profiling and discrimination in AI applications.


Large language models like ChatGPT and Google's Bard are changing the landscape of automation, impacting office jobs that require cognitive skills and high education levels. These AI systems assist rather than replace workers, particularly in tasks that involve creativity and high-level thinking. While concerns about job displacement exist, AI tools help improve productivity and bridge skill gaps, particularly for junior employees. They also have the potential to reduce income inequality by opening up opportunities in elite jobs. Despite their capabilities, human skills like social interaction and empathy remain essential.


This series of blog posts discusses the concept of an AI-based Knowledge Brain System for law firms. It aims to use technologies like natural language processing, machine learning, and big data analytics to help legal professionals access, organize, and understand vast amounts of legal information securely. The series covers design considerations, including AI, data, security, and privacy. It emphasizes the need for strong security measures and compliance with data privacy regulations. The Knowledge Brain System can save time, improve decision-making, reduce risks, and drive growth for law firms. Potential solutions and benefits are also explored, including Legal ChatBots, contract review tools, litigation outcome prediction tools, and more. The series also introduces the concept of legal ontologies to ensure a tailored understanding of legal concepts and terminologies.


Google's AI-powered search engine displays inaccuracies in understanding geography and the alphabet. It generated false information about countries in Africa starting with the letter "K." Google's AI-infused search vacuumed up incorrect information from a blog post and struggled to filter out blatantly false data. The issues extended to other geography-related queries, showing the AI's limitations. Google acknowledges these flaws, emphasizing that the AI is still experimental and efforts are underway to improve it. Such inaccuracies highlight potential challenges with AI's integration into widely used services.


The U.S. faces a persistent worker shortage driven by factors including aging demographics and skill mismatches. Current AI technologies offer a partial solution by automating jobs that require "tacit knowledge," though the quality of AI still needs improvement. The next AI wave, expected to run through 2030, will involve diverse models, cheaper GPUs, and open-source development. To maximize its potential, we need better AI tools and should explore automation opportunities beyond traditional white-collar work, in fields such as biology. Reevaluating the purpose of automation could unlock explosive productivity growth.

Nvidia's AI success spans over a decade, now thriving with $13.5B in Q2 revenue, primarily from AI chips. Data center revenue hit $10.3B. Strong projections indicate a continued AI boom, but competition and high demand present long-term risks.

Nvidia, once known for gaming hardware, has become a trillion-dollar company thanks to its AI chips. In the AI gold rush, Nvidia's revenue doubled to $13.5 billion, with a ninefold increase in net income. The company now primarily sells AI hardware, with its AI hardware division generating $10.3 billion in revenue. Nvidia's early investment in AI hardware, like CUDA, a parallel computing platform, gave it an edge. The H100 GPU is in high demand, and Nvidia has reportedly cornered up to 95% of the AI GPU market. Rivals like AMD, Intel, Google, Amazon, and Microsoft are investing heavily to challenge Nvidia's dominance.


This article discusses choosing between Retrieval-Augmented Generation (RAG) and model finetuning for improving Large Language Model (LLM) applications. It examines key factors such as the need for external data, model adaptation, hallucination minimization, training data availability, data dynamics, and result transparency. Recommendations are provided for various use cases, emphasizing the importance of aligning the method with specific requirements. The article also highlights additional aspects to consider, including scalability, latency, maintenance, ethics, integration, user experience, cost, and complexity. Ultimately, it stresses the need for a nuanced approach rather than assuming one method is universally superior.
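One way to make the decision factors above concrete is a small heuristic, sketched below. The factor names and rules are simplifying assumptions for illustration, not the article's own framework: roughly, RAG suits needs for external or fresh knowledge, while finetuning suits changing model behavior when labeled examples exist.

```python
def suggest_method(needs_external_data: bool,
                   needs_behavior_change: bool,
                   has_labeled_examples: bool) -> str:
    """Toy heuristic: map three coarse requirements to a starting point.
    The two methods can be combined; neither is universally superior."""
    methods = []
    if needs_external_data:
        methods.append("RAG")
    if needs_behavior_change and has_labeled_examples:
        methods.append("finetuning")
    return " + ".join(methods) or "prompt engineering"

# Example: a legal research tool needing current case law and a custom tone.
recommendation = suggest_method(needs_external_data=True,
                                needs_behavior_change=True,
                                has_labeled_examples=True)
```

A real evaluation would also weigh the article's other factors (scalability, latency, cost, transparency), but even this toy version captures the core point that the methods answer different needs and are not mutually exclusive.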

As Congress faces difficulties in passing comprehensive AI legislation, many states are taking matters into their own hands. At least 25 states and the District of Columbia have introduced AI-related legislation since ChatGPT debuted in November, according to the National Conference of State Legislatures. Lawmakers are grappling with concerns such as deepfakes, discrimination and bias, the impact of AI on jobs, its influence on elections, and disinformation. Several states have formed commissions to study AI, while others are introducing bills aimed at regulating AI models. State-level AI regulation is becoming increasingly important as companies rush to incorporate AI into their applications and processes.


Hugging Face, an AI startup, has secured $235 million in a Series D funding round, valuing the company at $4.5 billion. The funding round included participation from notable tech giants such as Google, Amazon, Nvidia, Intel, AMD, Qualcomm, IBM, Salesforce, and Sound Ventures. Hugging Face offers data science hosting and development tools, including a hub for AI code repositories, models, and datasets, as well as web apps for AI-powered applications. The company's paid services include AutoTrain for automating model training, Inference API for hosting models, and Infinity for speeding up in-production model data processing. Hugging Face plans to use the funding to further support research, enterprise, and startups across various domains.


Educators should assume that students are using generative AI tools like ChatGPT on assignments and adapt their teaching methods accordingly. Schools should avoid relying on AI detector programs to catch cheaters, as they have limited accuracy. Teachers should focus on understanding the capabilities of generative AI and spend time using the technology themselves to appreciate its potential. Resources are available to help educators learn about AI, and they should treat the first full academic year post-ChatGPT as a learning experience for experimenting with new teaching methods that leverage AI. Students need guidance in using generative AI effectively, and schools should embrace the technology rather than seeing it as an enemy to be defeated.


Let’s Forge the Future Together
Interested in joining our roster of legal innovators or simply curious about the world of legal tech? Reach out to us! Our team is always eager to connect, collaborate, and contribute to the ever-evolving legal landscape.
