Helping Future Experts: What We Learned from a Leadership Talk on AI in Legal, Regulatory & Compliance Work

By KK

June 23, 2025


Three experts, Heather Meeker (Tech Law Partners LLP), Stephanie Corey (CEO/Founder, UpLevel Ops), and Michelle Wu (CEO of NyquistAI), came together for a special panel discussion at a leadership offsite event. They shared their thoughts on how AI is changing legal, regulatory, and compliance work for medical device companies. Their main goal was to help future knowledge workers (people who use information to do their jobs) use the mid-year moment to brainstorm how to apply AI in smart and safe ways.

1. AI Is Here to Help You, Not Replace You

“AI isn’t here to take our jobs. It helps us ask better questions and find better answers.” — Michelle Wu, NyquistAI

AI is excellent at reading lots of information really fast, like contracts or laws. That gives people more time to think about the big picture and make smart decisions. The panelists said the best way to use AI is by asking it clear questions and checking its answers carefully.

Use Case: Monitoring Regulatory Intelligence for Life Sciences

AI is helping life sciences companies stay compliant by automatically monitoring regulatory changes and alerting teams to updates. For instance, AI tools can continuously scan government and regulatory websites for any changes in laws or regulations that could affect product development or marketing strategies. This keeps teams informed and helps companies stay compliant with far less manual tracking.

However, most AI models are not particularly good at finding every relevant item in a target set unless you tell them where to look. So, when monitoring regulations, you will still need to check what the AI might be missing.
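
To make this concrete, here is a minimal sketch of the change-detection step such monitoring tools perform, assuming you already know which pages to watch; the URL, file name, and print-based alert are illustrative placeholders, and commercial regulatory-intelligence products layer parsing, classification, and human review on top of this.

```python
import hashlib
import json
import pathlib

import requests

# Hypothetical watch list -- in practice this comes from your regulatory team.
WATCHED_PAGES = [
    "https://www.fda.gov/medical-devices",  # illustrative URL
]
STATE_FILE = pathlib.Path("page_hashes.json")


def check_for_updates() -> list[str]:
    """Return the URLs whose content has changed since the last run."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    changed = []
    for url in WATCHED_PAGES:
        body = requests.get(url, timeout=30).text
        digest = hashlib.sha256(body.encode()).hexdigest()
        if state.get(url) != digest:  # page is new or has been modified
            changed.append(url)
        state[url] = digest
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return changed


if __name__ == "__main__":
    for url in check_for_updates():
        print(f"Review needed: {url}")  # in practice: alert the compliance team
```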

Action Items:

  • Teach your team how to ask the right questions when using AI.
  • Always have a human check the AI’s answers before making a decision.

2. How AI Can Be Used in Real Work

“When you turn expert knowledge into AI tools, everyone in the company can use it.” — Heather Meeker, Tech Law Partners LLP

Reviewing Contracts with AI

What AI Can Do: AI tools can automatically scan contracts, identify key clauses, and highlight any deviations from standard terms. For internal legal teams, AI can assist in reviewing large volumes of contracts in less time, flagging risky clauses or terms that need further review. This speeds up the review process and reduces the chance that something is missed.

Use Case: AI Helping Legal Teams Review Contracts

AI-powered contract review systems can automate routine tasks such as checking for compliance with company policies and flagging potentially risky terms. Legal teams can then focus on more complex legal analysis and negotiation. In this way, AI accelerates the review process and reduces the chance of human error. One example is looking for contract assignment restrictions in M&A due diligence and creating summary tables of the affected contracts. Keep in mind that the more you tell the AI about what you are looking for, the better the results will be; for example, you might need to give examples of what you consider to be unacceptable assignment restrictions.
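
As an illustration of that point, here is a minimal sketch of a clause-flagging pass, using the OpenAI Python SDK purely as a stand-in for whatever enterprise-approved tool your team actually uses; the model name, prompt wording, and example clauses are assumptions, not a recommended configuration.

```python
from openai import OpenAI  # assumption: any enterprise-approved LLM API would work

client = OpenAI()

PROMPT_TEMPLATE = """You are reviewing a contract for M&A due diligence.
Find any assignment or change-of-control restrictions.

Treat clauses like these as unacceptable (concrete examples improve recall):
- "This Agreement may not be assigned without prior written consent."
- "A change of control of Licensee shall be deemed an assignment."

Return one line per clause found:
<section number> | <quoted clause text> | <risk: high/medium/low>

Contract text:
{contract_text}
"""


def flag_assignment_restrictions(contract_text: str) -> str:
    """Ask the model for a small table of assignment-restriction clauses."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(contract_text=contract_text)}],
        temperature=0,  # keep the extraction as consistent as possible
    )
    return response.choices[0].message.content  # a human still reviews this output
```

Note that the prompt spells out examples of unacceptable restrictions and the exact output format; that added context is what makes the resulting table useful to a human reviewer.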

Tips:

  • Always use enterprise-grade tools that respect data privacy.
  • Regularly audit AI-driven outputs to ensure high quality.
  • Create dashboards to track changes and provide visibility into AI-assisted decisions.
  • Teach your team to provide detailed, specific prompts.

3. Keeping AI Safe and Responsible

“Good rules make sure AI does its job well and safely.” — Stephanie Corey, UpLevel Ops

Writing Code with AI

AI is being used to write code more quickly and efficiently. Some companies, such as Microsoft, have integrated AI into their development pipelines to help generate code automatically[1]. Sam Altman, CEO of OpenAI, has said that AI is “writing code at a pace and scale never seen before”[2]. Similarly, Eric Schmidt, former CEO of Google, has noted that AI’s ability to automate coding tasks means junior coders may be replaced by AI, which can handle much of the repetitive coding work efficiently[3].

AI-powered tools can write simple code snippets, debug programs, and even suggest optimizations, helping developers focus on more complex problems. The use of AI in coding is becoming increasingly widespread, with Microsoft incorporating tools like GitHub Copilot to assist coders in writing more efficient and effective code.
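
For illustration only, here is a minimal sketch of asking a general-purpose model to draft a unit test, again assuming the OpenAI Python SDK as a stand-in (assistants like GitHub Copilot do this inside the IDE); the model name and prompt are hypothetical, and the generated test is a starting point that a developer still reviews and runs.

```python
from openai import OpenAI  # assumption: illustrative stand-in for an IDE coding assistant

client = OpenAI()


def draft_unit_test(source_code: str) -> str:
    """Ask a model to draft a pytest test for the given function."""
    prompt = (
        "Write a pytest unit test for the following Python function. "
        "Cover the happy path and one edge case.\n\n" + source_code
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content  # review and run before committing


if __name__ == "__main__":
    sample = "def parse_lot_number(label: str) -> str:\n    return label.split('-')[-1]"
    print(draft_unit_test(sample))
```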

Opportunities:

  • Speed and scale: Engineering teams can move faster—AI can write boilerplate code, auto-generate tests, and even refactor legacy systems.
  • Talent multiplier: It extends what your dev teams can do without hiring at the same pace.
  • Speedier innovation: You can prototype faster, iterate more often, and reduce time to market.

Risks to consider:

  • The law is in flux, in the US and abroad, for all generative AI models. There are currently nearly 50 pending lawsuits over whether these tools infringe the copyrights of the data they were trained on. Keep alert for developments in the law.
  • Use training data wisely. Don’t scrape training data from sites in violation of licenses or terms of service, and honor any “no-AI” robots.txt directives.
  • Security, quality, explainability, and maintenance of AI-generated code still require human attention.

4. AI-Driven Decision Making and Leadership Support

“AI isn’t just about automating tasks; it’s about helping leaders make better, data-driven decisions.” — Michelle Wu, NyquistAI

AI can help leadership teams make more informed decisions by automating the process of monitoring regulatory changes and generating real-time reports. For instance, AI can scan legal and regulatory updates across multiple jurisdictions and automatically flag changes that require leadership attention. These tools can generate easy-to-read reports that provide summaries of relevant updates, enabling leadership to make decisions more quickly and accurately.
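
As one sketch of the reporting step, the snippet below groups already-flagged updates into a short leadership briefing; the data structure and field names are hypothetical, and in practice the per-item summaries would often themselves be produced by a model upstream.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class FlaggedUpdate:
    jurisdiction: str    # e.g. "US-FDA" or "EU-MDR"
    title: str
    summary: str         # often an AI-generated summary of the change
    needs_leadership: bool


def build_briefing(updates: list[FlaggedUpdate]) -> str:
    """Render flagged regulatory updates as a short briefing, grouped by jurisdiction."""
    by_jurisdiction: dict[str, list[FlaggedUpdate]] = defaultdict(list)
    for update in updates:
        if update.needs_leadership:
            by_jurisdiction[update.jurisdiction].append(update)

    lines = ["Weekly Regulatory Briefing"]
    for jurisdiction, items in sorted(by_jurisdiction.items()):
        lines.append(f"\n{jurisdiction}:")
        for item in items:
            lines.append(f"  - {item.title}: {item.summary}")
    return "\n".join(lines)
```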

Use Case: AI Empowering Leadership Decision Making in Products

As of the end of May 2025, the FDA had cleared 1,169 medical devices with AI components, 80 of them in 2025 so far. In 2024, over 414 companies had at least one approved AI device, with nine companies, including Siemens, GE, Canon, Aidoc Medical, Shanghai United Imaging, Philips, RapidAI, Samsung, and viz.ai, Inc., having more than 10 approvals. Together, these figures illustrate a dynamic AI landscape and its impact on advancing patient care.

The pipeline of AI medical devices is robust, and as these technologies evolve, they will undoubtedly play a transformative role in shaping the future of medical care worldwide, impacting a greater number of patients.

The panelists are excited about this future, but they also offered several considerations:

Legal and Regulatory Risks

  • Bias and Harmful Decisions: When AI affects users (e.g., treatment recommendations or approvals), legal exposure increases, especially for biased or unfair outcomes.
  • FDA: Emphasizes security, fairness, explainability, and post-market surveillance for AI in medical products (e.g., SaMD).
  • EU (MDR + AI Act): Requires risk classification, transparency, and technical documentation for AI decision systems.

Real-World Adoption Examples

  • Dexcom: The first GenAI-powered device approved by the FDA, it links lifestyle data to glucose levels, setting a precedent for embedded AI in medical devices. Being first also raises the bar for follow-on competitors.
  • Apple Watch: Apple collaborated with the FDA to co-create a pre-certification process, shaping future policy and innovation paths.

Leadership Imperatives

  • Engage Early with Regulators: Companies should work with policymakers to help shape AI regulations and gain a competitive advantage.
  • Ensure Transparency & Explainability: AI decision logic must be transparent and auditable to comply with legal expectations and build trust.

Legal Ops Role

  • Bridge Teams: Legal ops can connect legal, product, and technical teams to align on governance and compliance.
  • Audit Trails: Implement logging for AI inputs, outputs, and decisions to ensure traceability and support post-incident reviews (see the sketch after this list).
  • Risk Playbooks: Create frameworks to classify AI decisions by risk (e.g., low vs. high impact) and apply appropriate controls.
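
Building on the audit-trail item above, here is a minimal sketch of an append-only log for AI inputs, outputs, and human sign-off decisions; the file-based storage and field names are assumptions, and a real deployment would send these records to a managed logging platform with access controls and retention policies.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # append-only JSON Lines file (illustrative)


def log_ai_decision(model: str, prompt: str, output: str, reviewer: str, decision: str) -> str:
    """Append one AI interaction to the audit trail and return its record ID."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "output": output,
        "human_reviewer": reviewer,  # who signed off on the output
        "decision": decision,        # e.g. "accepted", "edited", "rejected"
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]
```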

Audience Takeaway

AI decision-making in products offers huge potential, but must be governed carefully to ensure fairness, transparency, and compliance with evolving regulations. Legal and leadership teams must proactively manage these risks to innovate responsibly.

5. What’s Next?

“The best results come when we mix smart tech with strong values.” — Heather Meeker, Tech Law Partners LLP

In the next few years, AI will help with big jobs like running mock trials or planning document submissions. To succeed, though, teams need clear rules, new skills, and effective teamwork. As AI continues to evolve it will become increasingly powerful, but its real power lies in how we choose to use it.

The Need for Leadership and Soft Skills

The potential of AI is vast, but it will only be fully realized when leaders guide its use with a clear vision. AI can assist with tasks, but leadership remains crucial to ensure that AI is used in an ethical and responsible manner. It’s not just about implementing technology; it’s about building the right culture to support AI use, developing soft skills, and encouraging teams to experiment and adapt to new ways of working. Most of AI’s potential relies on people trying it out and learning from their experiences.

What Leaders Can Do Now:

  • List Your AI Tools: Take inventory of what AI tools you’re using and ensure there are clear roles for ownership.
  • Measure Success: Track how AI is impacting not just efficiency, but also compliance and decision-making quality.
  • Develop Skills: Equip your team with the skills they need to effectively work with AI, from prompt engineering to model auditing.

[1] AI now writes a big chunk of code at Microsoft and Google — and it could be coming for even more at Meta, link

[2] 90% of coding by AI, fewer jobs for software engineers: Zoho’s Sridhar Vembu and OpenAI’s Sam Altman give techies a reality check by Economist, link

[3] Former Google CEO Eric Schmidt Says AI Could Outthink Einstein and Write Laws by 2030 — ‘Smartest Human in Your Pocket’ May Be Just 5 Years Away, link
