A McMillan x Provision Roundtable
Two questions come up at every demo: can AI replace what a construction lawyer does, and can ChatGPT do what specialized legal-tech does. The honest answers are not as simple as either side wants them to be. Annik Forristal (McMillan LLP), Chris Moran (Maple Reinders Group), and Luigi La Corte (Provision) sit down for a 60-minute conversation on what AI is actually changing in construction contract risk management, and what it isn't.
Partner & Group Head, National Infrastructure and Construction Group
McMillan LLP
Annik Forristal is a Partner at McMillan LLP and Group Head of the firm's National Infrastructure and Construction Group. A practicing engineer before becoming a lawyer, she advises owners, developers, contractors and municipalities on contract risk, project delivery and dispute avoidance across construction and infrastructure projects.

General Counsel
Maple Reinders Group
Chris Moran is General Counsel at Maple Reinders Group, where he leads all legal aspects of the business. With a career spanning private practice and senior in-house roles in construction, Chris is the current Chair of the Ontario Bar Association's Construction & Infrastructure Law Section and a frequent speaker on construction law and contract risk.

Co-Founder and CEO
Provision
Luigi La Corte is Co-Founder and CEO of Provision, the AI risk-review platform built for pre-construction and contract teams across general contractors, owners, and infrastructure firms.
How should GCs handle active "data rot" during construction — for example, a subcontractor's WSIB clearance revoked mid-month while the GC's static spreadsheet still shows them as compliant?
Chris: There are certain tasks I don't think can be put on AI — that's a project manager's responsibility. The PM's monthly job is to ensure sub-trades have their paperwork in order before work moves forward. AI can identify and flag risks in static documents at the contract stage, but ongoing operational compliance requires human ownership of the process. The expectation across my team — legal and broader — is that you use the tool, then verify. Provision saves time on contract review because I don't have to go as deep into the weeds, but I still own it at the end of the day.
What are your thoughts on implementing specific critical thinking or strategic training for junior users, so they can challenge and assess AI output?
Chris: When my team brings me AI-driven work, I force them to go back and explain why each part is right. It can't just be "I put it into ChatGPT." There has to be a thought process. If the answer holds up, great — as long as they've thought about it. I've never seen anything come out of AI that was perfect on the first try; the critical thinking should happen on the 20th iteration, not the first.
Annik: When I was in engineering at Waterloo 20 years ago, the first two years didn't allow calculators. The rationale: you need to be able to verify every number sophisticated software produces. You can't be reliant on tools as basic as a calculator. The same principle applies to AI today. We absolutely need to implement critical thinking training for juniors — I'm just not sure yet what that looks like in practice. Actively open to ideas.
Luigi: Welcome everyone to a webinar hosted by Provision. Today we're going to be talking about AI tools in construction contracts. I'm very blessed today to have two wonderful guests and two customers of ours. We have Chris Moran from Maple Reinders as General Counsel, and Annik Forristal from McMillan as external counsel. I'm Luigi, CEO and co-founder of Provision. Thank you so much for joining today. It's going to be a really good conversation.
It's really hard to find people who have an honest and transparent discussion about where AI is working within contract interpretation, contract AI, and general contracting. We have a very unique group here today because this is what the team here lives and breathes. Together you've probably either bid on or reviewed hundreds of billions, if not a trillion dollars of value between Canada and the US. Chris, I'll pass it over to you for an introduction, then Annik.
Chris: Sure, thanks Luigi. Chris Moran, as Luigi said, I'm General Counsel for Maple Reinders Group. I've been in construction pretty much my entire career. I started at a small construction litigation boutique, then moved on to another general contractor as in-house counsel there for about 10 years before making the jump to Maple. Been here about seven years now and growing the team. I haven't done the math on the trillion-dollar number, but I'll use it. Sounds good.
Annik: Annik Forristal — I am a Partner and Group Head of McMillan's National Infrastructure and Construction Group. My practice is really focused on the front end of project delivery, construction contract drafting, and negotiation. While I also don't have the math on the total value of the projects for which we've reviewed contracts, we're reviewing them every day, all the time. I'm really interested in the conversation we're going to have today about how we've been leveraging AI to help with those tasks and where we see that going.
Luigi: Let's jump into the meat of the conversation. AI, contracts — there's a general freak-out: are these tools going to make the legal profession irrelevant? Are general contractors going away? Let's step back and talk about the problem that actually needs to be solved when we're talking about AI in our domain. Annik, can you level-set with our audience? When you talk about applying AI to your domain, what is the problem space, and how are you currently interacting with these documents?
Annik: One of the problems AI really gets to the heart of is the time it takes to digest a 300-page detailed document. It helps expedite drafting and document review tasks. It can also summarize or contextualize information in a way that's succinct, so we can communicate with our clients more effectively. One example: when you get a document dropped on you 30 minutes before a call and the client wants to know your thoughts on whether they have a certain claim or a certain protection or limitation of liability under the contract — being able to plug that into AI software and ask the question helps you get prepared for that call on a time frame that previously wouldn't have been possible. I used to joke that I didn't know how you practiced law before Control-F, and AI really is like Control-F on steroids.
Luigi: Would you also say that customer demands are changing as AI tools come out?
Annik: Absolutely. The question we get all the time is how AI is going to save them money on our services. What it's really going to do is let you spend more of the time you pay for on what you actually want, which is professional judgment and expertise on complex issues — as opposed to the time it takes to digest the information required to apply that judgment. AI will shorten the amount of time I need to review a document, or when I'm drafting, the time to get to the end product. On an hourly-rate model, that saves clients money because it saves us time. Of course, there's a broader conversation about whether we're moving away from the hourly-rate model entirely toward value-based pricing.
Luigi: Chris, what's your view of the problem when we're talking about AI tools for contracts and contracting?
Chris: Honestly, this AI revolution has shown me how old I've gotten. I'm now the old guy in my office, and that hurt my soul to admit. It took me probably longer than it should have to dip my toe into AI. Luigi helped push me along when we finally started having those conversations. It's not exactly what I thought it was going to be. I originally dove in hoping it would replace a body. It's not quite that. What it's done is made us better. It's given us the ability to be faster, and the more we trust it, the more efficient we get.
It reminds me of my first boss — an old-school lawyer who didn't trust technology, hated email. I'm now the AI version of him. Initially, AI would do the task and I'd do it anyway because I didn't trust it. The longer I used it, the more I trusted it. Now it's creating real efficiencies. My younger teammates are much more willing to accept AI as an all-in tool — they're easier to onboard than I was originally.
Luigi: For Annik it's time and expectations management with the owner. Chris, for you it's collective speed and getting tasks done. What tasks are you referring to?
Chris: It started with just the legal team — using AI for contract review. I thought we were catching 100% of the problems, but then Provision came in — and we've looked at other programs too. Provision would occasionally catch things we missed. It allowed us to look at things we wouldn't have looked at any other way. The bigger story is that we've now extended it to our full team — estimating, pre-con, and ops. Pre-con uses it for review of contracts, specs, and drawings to be more fulsome in their review. Ops uses it to review the contract. I still want them to call me when there's a problem, but the chat feature is golden. They can ask a quick question to see if it's in the contract before they call me — that makes our conversation more substantive instead of "hair on fire, I have a problem, here's what I think." Way more efficient.
Luigi: So AI has filled a couple of jobs. On Annik's side, it's getting fast answers from documents. On your side, it's empowering teams to understand their documents. In both cases, it's "I need answers from these documents very quickly." Where is AI not meeting expectations? Annik first — where is AI relative to jobs that currently can't be tackled, and where do you want it to go?
Annik: The first thing I'll flag is that AI will sometimes produce plausible-sounding but inaccurate answers or judgment calls. When it's just a straight A-to-B — what is the change being made from a standard form document by these supplementary conditions — that's one thing. But the risk assessment associated with that change, the factors at play — those judgment calls and analyses are where I'm often not in agreement, or I'll see that the AI's assessment is just not correct. But it sounds smart. I get pushback from clients and internally: "I was told over here this was the answer." When the AI gives the response a client wants to hear — which a lot of these platforms are programmed to do — there can be resistance when I come in and say "actually, not quite."
Where AI is not doing the job is on professional judgment calls. AI can help find the relevant clauses, identify areas of focus, find conflicts or inconsistencies, expedite drafting. But AI can't sit at the boardroom table and negotiate the terms of a contract while understanding the dynamics of the humans at the table. The people, their emotions about the relationship and the contract terms — there's a very human element AI doesn't get. It can tell you about a liquidated damages clause. It can't tell you the real emotional reaction you'll get from a contractor when they see one and how it affects their perception of the relationship, the challenges on that project, the risk associated with it.
Can AI help expedite our ability to get there and let us focus on providing that expertise? Yes. But when I'm trying to tell it "draft this in two paragraphs instead of two pages because that CEO isn't going to read it" — there are things it just doesn't get yet.
Luigi: It's still a human-to-human business. Being a lawyer is challenging in our system because you need to know what precedent is, what case law says, what's going to fly in the context, how much leverage your customer has. That's a lot of context the AI doesn't have, at least in current tools. Do you think that will change?
Annik: Frankly, I have no idea. What I've seen evolve over the last two years has been astounding — it's hard to predict what'll be possible six months from now, let alone five years. Part of the challenge we're facing is how we train juniors in this world. The ways I trained my juniors are now the tasks the AI performs — drafting, markup, detailed review. That work was the foundation of how we developed the knowledge and expertise needed to offer professional judgment. Now I'm trying to figure out how we do that. We hear conversations about: if you plug in everything about a judge and every one of their decisions, could AI predict how they'll decide your case? These are questions for over a bottle of wine, but is it impossible? I don't think it's impossible. Whether we get there in the next few years, I have no idea.
Luigi: Chris, you must have a similar problem with training. Reading isn't going away, but it's not as popular as it once was now that you can effectively Control-F over everything. How do you think about liability for the general contractor in the context of these new tools, and what do you do to make sure liability isn't increased?
Chris: Liability — that's a scary word. The training piece Annik just referred to is what I'm most scared of. Being the old guy, I was the one who had to read and write and do all those things. My biggest value now is what I learned before. The people coming up now — I'm fearful they're not going to have that same experience. My more junior lawyer is more technically savvy and more ahead on AI than I am. She's happy to jump in and use the tools, which is great. I'm not saying they shouldn't. But if she — or the more junior part of our bar — only uses those tools, it doesn't hone their own skills. Eventually they won't be as useful as Annik and I are, having gone through the harder years of doing it manually.
Luigi: It has to be said over and over. We use AI a lot. There've been conversations where we're reviewing a deliverable and I say "I don't think this makes sense." The response can't be "well, this is what the AI gave me." I'm not employing the AI. I'm employing you, and I still expect you to review everything and take ownership. That means going over and above — not because I want extra work, but because you need to underwrite the answer. Annik, do you still require juniors to print out the documents and manually go through them?
Annik: I've never required printing them out. But to Chris's earlier point about not trusting the AI when we first started, we definitely had our associate do the review independently first and provide comments, and then we compared it to what the AI pumped out to see whether we could trust it. You start developing that trust to the point where you're confident it's catching the key clauses, the key risk terms. But to your point about liability — we're a society that expects ownership and responsibility for your work product. If a bridge collapses and someone says "that was the AI," that's not accepted as an answer. There always needs to be someone behind it taking ownership.
We need to develop the systems — Chris's team, mine — for human oversight and intervention in deploying these tools. We can't have the next generation just blindly trusting everything that comes out of these machines. They need to be able to question and critically think independently. I have a new associate starting in September. I told her: I need to get you to a point where, if the power went out and the internet was down, you could still draft a clause or write an outline for what should go in a construction contract. That knowledge needs to be in your head independent of these things.
There's a movie I keep bringing up — Idiocracy. The premise is that Luke Wilson gets frozen, wakes up in the future, and is the smartest man alive because everyone became so dependent on technology they didn't retain the knowledge of how it worked. When the tech started failing, no one was available to fix anything. It was a comedy, but the loss of foundational knowledge because everyone relied on tech is something we need to be vigilant about as we transition into this AI age.
Chris: I'll jump in. The biggest value I add right now to my company is my experience. Provision can do a 100% perfect job reviewing a contract and identifying risk — but it's not going to tell me what my risk tolerance is. That's something my team and I need to do. AI identifies a risk, then we tell my team here's how we can manage it. I may use AI to help redraft, but the call comes from us. That's the human aspect.
I'm hopeful — and this isn't a shot at anybody — that AI helps external counsel be more commercial and practical, instead of just being straight legal advice. AI can do straight legal advice for me. What I want from outside counsel is "here's the risk we identified, here's how I think we can fix it." I have these conversations with my team internally all the time: use Provision or whatever tool to identify risk, but you still own that document. You still have to estimate the full document, not just what Provision tells you.
I read the T&C section, but I'm always worried someone snuck a warranty provision into the spec section, which I don't read. Now Provision catches that — it tells me there's a warranty clause in a weird place that conflicts with what's over here. That kind of catch has been phenomenal. But the human piece is what I'm afraid of losing for the younger generation. If they're just going to ChatGPT or Provision and saying "draft this," then bringing it to me with "here, great" without doing anything in between — that's not helping me.
Luigi: There's an interesting question in the chat from Spartak: "AI is fantastic for pre-construction contract review, but how should GCs handle active 'data rot' for construction? E.g., a subcontractor's WSIB clearance is revoked mid-month, and the GC's static spreadsheet still shows them as compliant." The question is specific about WSIB clearance, but digs deeper: what risks are you still paranoid about, and what's your process?
Chris: There are certain tasks I don't think can be put on AI. That's a project manager's responsibility — to ensure subtrades have their paperwork in order before they can move forward. That's a PM's or PC's monthly requirement. I have no idea how AI would ever catch that. Again, old guy in the room — maybe something exists. But my expectation for my team, legal and broader, is that you're still responsible. Use the tool to do the thing, then double-check it. Provision does a fantastic job at contract review. I still expect my team to verify and double-check. Now it saves time because I don't have to go quite as deep into the weeds as I once did. But I still own it at the end of the day.
Luigi: Let's switch gears to the discourse around contracts being "solved." There's chatter that AI tools are getting so good that contract review is effectively solved. When I look at ChatGPT and Claude, they're impressive but not great. Part of what we offer customers is verifiable accuracy. If a generic LLM is 75% accurate, does that mean contract review goes away? Annik, from your perspective, is contract review solved?
Annik: No. Going back to points we already made: if the existence of AI software means more people actually review their contract, that's a win for the industry. So often the case is no one ever read it before they signed it, and people only look at the document for the first time when they're already in dispute and it's too late. If these tools help people who might not have otherwise ever read their contract to have their attention drawn to risk factors, that's a huge win.
Do I think it solves contract review? No — yes, it can expedite it, bring your attention to key points, give you some commentary on the nature of those clauses. But that doesn't tell you the nature of the people sitting across the table from you. Once you've reviewed the contract, there's a negotiation about how it might get changed. How we go about changing a clause, how we have those conversations — those are also key elements.
Also, even if AI knows its client — a client like Maple — and their risk tolerance, the way you perceive and assess risk is very case-by-case and project-by-project. A limitation of liability clause isn't always unacceptable. It may be acceptable depending on the client, the scope of work, the relationships, the other risks in the contract. So many factors go into risk assessment that go beyond "we've identified that there's an indemnity clause here." It's not solved. But it's being significantly helped.
Luigi: Chris, what's your view?
Chris: I'm not against it. My fear is that they just accept it without any critical thinking. I'm getting more comfortable with using AI for different things, including drafting. Got a draft clause? Throw it in ChatGPT. It's a good base to start from. But as Annik says, AI will never know certain things I'm dealing with. Our risk tolerance for Owner A is 100% different than Owner B because we've done a hundred projects with Owner A and they're great, vs. Owner B who we've done two projects with and we're in constant litigation. Things are going to be different in how we look at those contracts. AI will never understand that. That's the human element you can't take away. Identify the thing — great. Applying it to your situation is very different.
Our industry is also Frankensteining contracts everywhere. Soapbox moment: people are cherry-picking from different contract models and creating new bespoke models. We've got a thousand years of construction using essentially three contracts, and now we've got a million. CCDC has a million different forms now.
Annik: On that point — there's a layer of knowledge and expertise that AI doesn't yet incorporate into its analysis, which is project delivery and the nature of risk allocation inherent in different project delivery models. IPD vs. progressive design-build vs. design-bid-build. You'll see IPD concepts suddenly showing up in a design-build contract where they don't make sense there. That's something AI, at this point, would maybe catch if you gave it workflows and the right questions, but it wouldn't necessarily pick up on that Frankenstein arm in the contract the way you would.
Chris: It may not identify it as a risk, but it would catch the change. So it does two things: it catches any change made to the base document. That's first. Then people like Annik and me come in, and if it doesn't flag it as a risk, we have to look critically and say "no, that is a risk." A straight CCDC 2 for a storage facility versus a straight CCDC 2 for a nuclear power plant — same contract, completely different risk profile. Provision gives the same analysis, but my team's critical analysis is completely different. Maybe someday AI has that capability, maybe it doesn't. That's the piece that keeps me employed for now.
Luigi: On the question of training — there's a question from Ashley Maxwell at EllisDon: "Regarding training, what are your thoughts on implementing specific critical thinking or strategic training for junior users so they know how to challenge or assess the AI's output?"
Chris: Hey Ashley — my former life was at EllisDon, say hi to Casey for me. That's exactly right. That's the next step — Annik with her juniors at McMillan, they're going to have to find ways to ensure juniors are critically thinking, whether we're killing trees and printing it off or not. Back in the day, you drafted something, gave it to someone, they got out a pen and marked it up. That doesn't happen anymore. We need to figure out how to recreate that. For me, I have a small team. If they come to me with something and it's clearly AI-driven, I force them to go back and redraft it and show me why — tell me why this is the perfect thing. It can't just be "I put it into ChatGPT and it spit this out." There has to be a thought process. If the answer is "this is the perfect thing," fine — as long as you've thought about it.
I've never seen anything come out of AI that was perfect. Maybe you go back to AI a second, third, fourth time and it does — but that's part of the critical thinking. It shouldn't be the first time; it should be the 20th time you've gone back and made it update or rethink.
Annik: I keep thinking back to 20 years ago when I was in engineering at Waterloo. The first two years they didn't let us use calculators. I was furious — what do you mean? I'm going to have calculators available for the rest of my life. They said: you need to be able to double-check and verify every number that sophisticated software programs are putting out. You need to be able to do it yourself. You can't rely on tools as basic as a calculator. That was long before AI, and part of their training, to my frustration at the time, was making sure I could do it without these tools — because that's going to be fundamental to your critical thinking and your ability to safely and successfully use these tools in your profession.
Those lessons need to be carried forward. We absolutely need to implement specific critical thinking and strategic training for juniors. I'm just not quite sure how to do that or what it looks like yet — but I'm actively open to ideas and working on it.
Chris: That's the next step. For my own kids — one's in high school, one's entering high school — I'm petrified for them, because they have to learn how to use these tools, but I don't want them to cheat. If I'm the stickler and say "you can't use AI for this assignment," their competition isn't necessarily doing that. We're in a weird place. I'm hoping I can help them navigate it. They're smarter than I am, so they'll figure it out. It's a worldwide issue, not just an industry issue.
Luigi: Where do you invest time given that AI is moving very quickly? There's a tension on people's minds — an existential fear about AI taking jobs — but here on the call, we're listing limitations. Chris, what's underlying your fear for your kids when you know the current limitations of AI?
Chris: My fear is for them getting into university. If somebody can use AI to get 100% on everything they do, and my kid only gets 95% because they actually learned critical thinking — the difference is mine learned how to think, the other person used AI to get the grade. The problem is the other person got the grade to get in. So it's a bit of a cheat code. I want to teach my kids that critical thinking, long-term, is probably more important than just getting the right answer. You need the right answer, of course. But being able to explain how you got there is, in my opinion, more important than getting it. I was in high school when the internet took off, and back then everyone was afraid of cheating using the internet. This is just a new version of that. But at the same time, my kids need to learn how to use these tools — they're too powerful not to.
Luigi: I want to talk about where the value will accrue. Annik mentioned moving to a more outcomes-based or value-based pricing model. Chris, we haven't touched on how productivity will manifest inside Maple Reinders. If you extrapolate where AI is going, how do you think value accrual will work in the post-AI world? Annik first.
Annik: Right now I don't know. The conversations about how relationship and pricing models will change are ongoing. In the short term, it's less a major shift and more that the task I used to estimate at 5 hours might now be 2-3 hours, which directly results in a lower estimate for those fees. We have clients asking us, when we put forward proposals, to speak to where and how we're leveraging AI and how that will result in cost savings. The market is starting to ask for that and wants to see it in a tangible way. I've also seen it on the contract side — owners saying "can we mandate AI use or include terms about how AI will be used on our projects to save time and money?" Expectations are shifting, and sometimes in ways that may not be realistic for what's possible today. But our clients are demanding we step up and respond. What it'll look like five years from now could be very different.
Chris: I'm not sure how owners would ever ask that. The market will dictate it. To the extent contractors leverage AI and become more efficient and cheaper, the market will dictate that. If Maple decides we're not doing AI, we'll lose the market. If EllisDon — to use them as an example — finds ways to use AI to be more efficient, and Maple doesn't, eventually we won't win and we'll go out of business. It's no different than any other tool. The market is going to drive AI implementation more than any owner clause will. I haven't seen it in a contract, and I don't know how I'd react. How do I show that I'm cheaper because of AI? There's no way to quantify that. I can say I'm cheaper than EllisDon today. Whether that's because of AI or something else — who knows. Market will dictate it.
Internally, to the extent AI helps us, we'll continue to implement it. We tiptoed into AI and Provision became helpful right away. We've grown with it. We look at other products occasionally — some weren't helpful, some are. Not all of them fit. ChatGPT is the easy, in-your-face one. To me, it's the Walmart of AI. Why we got into Provision is because it's industry-specific and helpful for what we're doing. That's where I think the value comes — from tools that are helpful for what you're actually doing.
Luigi: AI has also multiplied competition. Every week there's a new AI startup trying to tackle something similar. That's great, because I'm a huge advocate for increased profits in the industry. Given the risk GCs take, margins are crazy small. The beneficiaries will hopefully be GCs, and Annik, hopefully you too. More evenings with your family, less time hunched over a laptop Control-F'ing for "indemnify."
Annik: Somehow I don't expect that'll be the result. It'll just be that I'll spend the time saved on other things within the job. I don't think it's going to lend itself to more time with my feet up at the beach. But the fun part of the job is being an advisor, providing professional judgment, helping with tough calls and the human elements. If these tools mean less of my day is spent reading 300 pages of text, that's fantastic. It leaves more time for the fun, interesting, and more valuable parts of what we do.
Luigi: After tools like ChatGPT and Provision came to market, have you seen increased or decreased demand for historical services? Anecdotally I've heard from firms like McMillan that you've never been busier.
Annik: Certainly I've not seen the number of contracts hitting our desks for review drop as a result of this software. It's more — to your point — what clients' expectations are for that review. Maybe they've already used their own AI software for a preliminary review. So the nature of how we get plugged in and incorporated into the team is changing, but the volume of work doesn't seem to have changed, at least not yet. Knock on wood.
Luigi: Let's land. Final thoughts — Annik, on how people should be using AI, how to get started with document review, and for which use cases.
Annik: We're not going to be able to avoid AI. It's a part of our lives and our world. Engaging with it, learning about it, is really important. The sooner we can get comfortable with its uses and utilities in our day-to-day, the better. Just don't do that at the expense of your critical thinking. Question what it says. Be mindful that sometimes it sounds really persuasive even though it's wrong. It's not a one-size-fits-all, perfect answer to everything. I think it's exciting to see where this is going. Enjoy yourselves while keeping a pinch of cynicism.
Luigi: Cynicism is healthy. We should be generally more cynical about tools and what they purport to do. One thing that's come out of this AI boom is that free trials and pilots are on the rise — no one's buying software without trying it first and verifying it actually works. That's a good thing. Chris, final thoughts?
Chris: Similar message to Annik. I like the way you put it, Luigi — be paranoid. Being paranoid isn't a bad thing. These are great tools. We're speaking on Provision, so I'll pump its tires — Provision has been a great tool for us. But I don't trust it 100% yet, and I don't think I ever will. That's the best way for me to use it. Making sure the next generation is able to critically think — I think these tools are even better for people who can critically think. But if we teach the next generation not to think critically, even Provision becomes a problem. Someone has to be able to take what Provision gives them and think critically about it.
Luigi: Agreed. Thanks everyone for joining — really appreciate it.