When artificial intelligence burst into the mainstream, most of the noise centered on what AI could do: generate content, automate workflows, power new products. Very few people were talking about what AI should do — or what it’s legally allowed to do.
That gap is exactly where Kiley Tan operates.
Kiley is both a tech founder and a specialist lawyer at The Legal Director, working as a fractional general counsel for startups and scale-ups. His journey into AI didn’t start in a lab or a research group. It started with frustration.
“We were speaking to a number of potential CTOs and lead developers… to just get an idea of what AI was and what it meant, and how we could utilize it in our particular startup,” he recalls. “And we got some really rather nebulous responses.”
So he did what good entrepreneurs do when the answers aren’t good enough.
“I thought, look, I’m not going to get a straight answer from people, I’ll just do it myself and figure it out.”
He signed up for a 12‑week, intensive, practical AI course. That single decision would change not only how he built tech, but how he advised clients on one of the thorniest emerging topics in law: AI risk.
From Learning AI to Seeing the Legal Gaps
Once Kiley understood AI from the inside — the models, the data, the workflows — his colleagues noticed.
“People at my firm… started tapping me on the shoulder going, ‘Oh well, since you’ve done this, what about the law? What about the legal side of it?’”
That simple question opened up a can of worms.
He quickly realized that while the technology was racing ahead, the law was not just lagging — in some areas it was actively blocking progress.
“I realized actually there’s a lot more going on — or not happening — in the law that needs to happen in order for people who use AI, or [are] thinking of developing AI, to move forward.”
His vantage point is unusual: one foot in entrepreneurship, one foot in legal practice. That gives him a clear view of what founders think they’re building versus what the law actually allows.
The First Big Distinction: Your Data vs. Everyone Else’s
The first thing Kiley does with clients is strip away the mystique and start with basics: what data are you using?
Inside a business, there’s usually one very safe category:
“If you’re using data that’s generated within your own organization… and that’s not personal data, then AI away and analyze that to the nth degree. You can do whatever you wish to it… No problem at all. Go ahead.”
Internal, non‑personal data — sensor data, system logs, internal operational metrics — that’s your gold mine, as he calls it later.
But the moment you involve personal data, the landscape changes.
“When you start getting issues around personal data, then you’re into the UK data protection, UK GDPR provisions… because then it’s a data processing action that you’re doing.”
And if you’re using AI that generates content — so‑called generative AI or “gen AI” — you step into a second legal arena: intellectual property and copyright.
The real headache? These regimes don’t sit neatly side by side.
“More often than not, they… work together and sometimes you find that they work against each other depending on the scenario that you’re in.”
So before any sales deck about “AI transformation” excites investors, Kiley is quietly asking a much more basic question: what are you feeding the system, and what are you letting it output?
The Hidden Trap in UK Law: Text and Data Mining
For founders who want to build their own language models or train models on large corpora of text, Kiley points to an uncomfortable truth: the UK’s core copyright law predates modern AI entirely.
“We have something known as the Copyright, Designs and Patents Act of 1988. The date is important. 88. So it was made in 1988, when AI was just a dream.”
That act contains a rule that strikes at the heart of how large language models (LLMs) are built:
“Text and data analysis is prohibited except for non‑commercial use… for commercial purposes you can’t do that.”
LLMs depend on large‑scale text and data mining.
“We know the bedrock of building an LLM is essentially text and data analysis. You need to feed it lots and lots of data.”
There are, in practice, only three ways around this in the UK today (sketched in code after this list):
- Use your own data
- Use data where you’ve obtained clear consent
- Use data where the license explicitly allows this type of analysis
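To make the three routes concrete, here is a minimal sketch of how a team might encode them as a provenance filter over a candidate training corpus. The `Document` fields and the `usable_for_training` check are illustrative assumptions for this article, not a real library and not a legal test:

```python
from dataclasses import dataclass

@dataclass
class Document:
    """Illustrative record for one candidate training document."""
    text: str
    source: str                 # e.g. "internal", "licensed", "scraped"
    consent_obtained: bool      # clear consent from the rights holder
    license_permits_tdm: bool   # license explicitly allows this analysis

def usable_for_training(doc: Document) -> bool:
    """Keep a document only if it fits one of the three routes above."""
    if doc.source == "internal":    # route 1: your own data
        return True
    if doc.consent_obtained:        # route 2: clear consent
        return True
    if doc.license_permits_tdm:     # route 3: license allows mining
        return True
    return False                    # everything else is legally risky in the UK

corpus = [
    Document("Q3 ops report…", source="internal",
             consent_obtained=False, license_permits_tdm=False),
    Document("Scraped blog post…", source="scraped",
             consent_obtained=False, license_permits_tdm=False),
]
training_set = [d for d in corpus if usable_for_training(d)]  # keeps only the ops report
```

The point of the sketch is less the code than the discipline it forces: provenance has to be recorded per document, or the filter has nothing to work with.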
But if you’re hoping to compete with Anthropic, OpenAI, or Google using those limited datasets, you’re handicapped from the start.
“You just can’t build a large language model large enough and big enough and compute‑complex enough to deal with the sort of stuff that they can. So you are already hampered by that.”
There has been talk about modernizing the law and flipping the presumption — allowing text and data mining by default unless rights holders opt out, similar to the EU’s approach.
“If the government is true to its word and wants to make the UK an AI superpower, then it needs to really change this.”
Until then, UK founders need to be brutally realistic about what kind of AI they can build using third‑party data.
Who Owns AI-Generated Content?
Even if you solve the data input problem, you immediately face another: who owns the output?
Kiley explains that authorship is the bedrock of copyright law.
“Authorship here is very, very important because it’s the bedrock of intellectual property law and copyright.”
The 1988 act tried to deal with computer‑generated works, but it was written in the era of Microsoft Paint.
Back then, “computer‑generated” meant you told the machine exactly what to draw: a certain circle, a specific color, a precise shape.
“You had a kind of agency and authorship of that particular graphic.”
Now consider a modern prompt to an image model: “Create me a picture of a dog.”
“The question now is, is that sufficient authorship? Do you have sufficient agency in creating that particular image? And the answer probably is no.”
Why? Because all the real choices — the breed, the color, the pose, the style — are made by the model.
“That decision has been taken off your hands as the ‘author’ and passed on to a large language model.”
The United States has already planted a flag here. The US Copyright Office and the courts have taken the view that:
“It has to be a human author. You can’t copyright something that’s generated from a large language model.”
That creates a deeply awkward situation for startups trying to build generative‑AI products: if the AI’s output isn’t protectable as your IP, what exactly are you selling?
Kiley uses a simple analogy:
“It’s like me and you walking down the street and we spot a Ferrari on the side of the road and I say, ‘Here, that’s your Ferrari.’ I can say that to you, but you can’t own it because I never owned it in the first place.”
If you don’t own the rights in the output, you can’t transfer those rights to your customers.
That nuance, he suggests, is almost entirely missing from most investment conversations.
“The fundamental question is: who owns the copyright to the output of that model? And if you don’t own it, then you can’t pass it on.”
AI as a Power Tool, Not a Product Shortcut
One of the most helpful ways to reframe AI, Kiley notes, is to stop thinking of it as the product and start thinking of it as a power tool.
Power tools didn’t kill house building; they made it faster, safer, and more creative. But you still needed architects and builders. The same should be true of AI.
“What you really should be doing is using the AI as a tool to refine your thinking and your own product, rather than actually trying to use it to deliver the product.”
That mindset difference matters legally.
If AI is supporting your original, human‑created work — and you’re layering your own creativity and judgment on top — you’re far more likely to have something you can protect.
In fact, the US has already hinted at this approach in practice. Kiley notes cases where AI‑generated images were used as a base, but the human contribution (like writing the comic text) still qualified for copyright.
“If you generate, let’s say, a comic strip… and you add the text to it, that then becomes copyrightable because you’ve now exercised your creative authorship onto that document.”
His mantra for clients is simple:
“Human in the loop is still sort of the thing that I preach. You can’t just allow it to just run by itself.”
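As a deliberately simple illustration of that mantra, here is a sketch of a pipeline where nothing the model drafts can leave without explicit sign-off. The function names are placeholders; `generate_draft` stands in for whatever model call you actually use:

```python
def generate_draft(prompt: str) -> str:
    """Placeholder for any LLM call; swap in your provider's client."""
    return f"[model draft for: {prompt}]"

def human_approves(draft: str) -> bool:
    """Blocking review step: a person reads the draft and signs off."""
    print(draft)
    return input("Approve this draft for release? [y/N] ").strip().lower() == "y"

def produce_document(prompt: str) -> str:
    draft = generate_draft(prompt)
    if not human_approves(draft):
        raise RuntimeError("Draft rejected; revise before release.")
    return draft  # only human-approved output ever leaves the pipeline
```

The gate is trivial, but it makes the responsibility allocation explicit: the approver, not the model, owns the final draft.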
The Quiet Risk in Using Third-Party AI Tools
For startups that aren’t building their own models, another category of risk looms large: using third‑party AI tools without due diligence.
In the early ChatGPT boom, countless products appeared that were essentially wrappers around commercial LLMs, plugged into email, CRM systems, or document stores.
Founders (and employees) often rushed these into production with a single click.
Kiley is wary.
“There’s a huge amount of risk simply because, A, you’re sending data to a big black hole that you have no clue where that big hole’s going to and from.”
Even if a vendor promises not to train on your data, that doesn’t answer basic compliance questions:
- Where is the data stored?
- Is any of it personal data?
- Is it being transferred overseas?
- What encryption or security is applied?
“You’re still sending data to a third party where you have no clue… and you’re not doing the due diligence that you would normally do.”
For Kiley, the bright red line is personal data.
“I will only raise the red flag where there’s personal data involved.”
If there is personal data, then the moment you use these tools, you’re in the realm of data protection, cross‑border transfer rules, and potentially heavy penalties if something goes wrong.
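Those questions can be turned into a blunt checklist. The sketch below is an assumed design for recording answers per vendor, in which an unanswered personal-data question is itself a flag; the safeguard names (IDTA, SCCs) are the standard UK/EU transfer mechanisms, but the field names and logic are illustrative only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorCheck:
    """Illustrative due-diligence record for one third-party AI tool.
    None means nobody has actually checked yet."""
    storage_location: Optional[str]        # e.g. "UK", "EU", "US"
    handles_personal_data: Optional[bool]
    transfers_overseas: Optional[bool]
    encrypts_at_rest: Optional[bool]

def red_flags(check: VendorCheck) -> list[str]:
    flags = []
    if check.handles_personal_data is None:
        flags.append("Unknown whether personal data is involved: find out first.")
    elif check.handles_personal_data:
        flags.append("Personal data: UK GDPR applies; check processing terms.")
        if check.transfers_overseas:
            flags.append("Cross-border transfer: safeguards needed (e.g. IDTA/SCCs).")
    if check.storage_location is None:
        flags.append("Storage location unknown: the 'black hole' problem.")
    if not check.encrypts_at_rest:
        flags.append("Encryption at rest not confirmed.")
    return flags

# A tool adopted with one click, before anyone asked anything:
print(red_flags(VendorCheck(None, None, None, None)))
```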
Even when the data isn’t personal, another risk quietly appears: text and data mining of someone else’s copyrighted material.
“Someone sends you a document and you now want to put that document through so that you can get insights from that document… That is technically text and data mining… and this is technically prohibited under the Copyright, Designs and Patents Act.”
From a legal standpoint, Kiley’s advice is clear:
“I would say no, you can’t do it. But at the end of the day, I don’t get to make that decision. It’s a business decision.”
His job is to lay out the risk; founders have to decide how much of it they’re willing to own.
How He Actually Helps Companies: Questions, Contracts, and Reality Checks
Inside The Legal Director, Kiley approaches AI questions with a simple framework (a rough code sketch follows the list):
- Is there personal data involved? “The first thing to say is, is it personal data? No? Okay. Yes? Then we need to think more about how we’re [handling] that personal data and what the other party… is doing to process that personal data.”
- Where is that data going? “Is it going to be within the UK or externally? Then we need to think about what else we need to put in place.”
- What were you told vs. what’s in the contract? Many of his interventions are about sanity‑checking sales promises against legal wording. “The client will say, ‘Well, they said this to us in the discovery meeting…’ And then when I read the contract, it says something completely different.” At that point, he has to give them a hard truth: “They’ve misrepresented it? Well, that may be the case, but now you have to prove the fact that it was misrepresented because you signed the document.”
- Do you need to change your own customer terms? If a business starts using AI in delivery, there are downstream commercial implications. “Do we need to start thinking about changing our terms and contracts with our customers in relation to how we do work?”
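Here is that framework as a decision flow, purely for illustration; the boolean inputs are simplifications of the questions above, not legal tests:

```python
def triage(uses_personal_data: bool,
           data_stays_in_uk: bool,
           contract_matches_promises: bool,
           ai_used_in_delivery: bool) -> list[str]:
    """Order the framework's questions into concrete next steps."""
    actions = []
    if uses_personal_data:
        actions.append("Map the processing: what does the other party do with it?")
        if not data_stays_in_uk:
            actions.append("External transfer: put additional safeguards in place.")
    if not contract_matches_promises:
        actions.append("The signed contract beats the discovery-meeting pitch: "
                       "fix the wording before signing, because afterwards the "
                       "burden of proving misrepresentation is yours.")
    if ai_used_in_delivery:
        actions.append("Revisit your own customer terms to reflect how work is done.")
    return actions or ["No immediate legal action flagged; document the decision anyway."]
```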
Often, what companies really need is not more AI, but more clarity about how and where they’ll use it.
The Internal Use Dilemma: Employees + AI = Who’s Doing the Work?
Another area Kiley is increasingly asked about is internal use of AI by employees — the “shadow IT” problem.
Tools like ChatGPT, Midjourney, or AI email assistants are just a browser tab away for any staff member. For small, owner‑managed businesses, he sees real upside.
“If you’re an owner‑managed business, then having these productivity tools is really a boon for you because you can move quicker… there are some positives that can be drawn from using large language models in that context.”
But for organizations whose value proposition is expertise — marketing agencies, recruiters, consultancies — things get sticky.
“You want your staff there for a particular reason… That takes time, it takes effort, it takes experience.”
If AI can draft a marketing plan in seconds, what does that say about the value of your senior strategists? And how much can you let staff rely on AI without eroding quality and trust?
Kiley sees most clients accept some use but recoil at blind dependence.
“They accept that there will be some reliance, there’ll be some use, but they would say you need to check that work to make sure it’s correct.”
His own preferred policy is structured and conservative:
“If it were up to me, I would say first draft has to be your work, has to be your original piece of work. And if you do want to then review it using a large language model, then you can, but you are responsible for that final draft at the end of the day.”
That policy can then be written down formally — not just as a tech rule, but as part of a wider risk and quality strategy.
When AI Acts Autonomously: The Agentic Question
The next frontier — and the one that makes Kiley visibly uneasy — is agentic AI: autonomous agents that can make decisions and take actions with minimal human oversight.
What happens if an AI agent sends a defamatory email? Or moves money to the wrong account? Or posts something damaging to a brand?
Who’s liable?
From a defamation perspective, the current law is straightforward:
“The keyword here is publish. So who published it? The fact that you sent the email from your email address, you’re the publisher.”
You might then try to recover your losses from whoever built the agent, which pulls you straight into the world of contracts and limitations of liability.
“What are the terms and conditions between you and the large language model? Do they accept any liability in relation to the outputs, and if so, what is the limit of their liability?”
For agent builders, this is a nightmare scenario. But Kiley is clear: from his client’s perspective, responsibility sits with the people who built and sold the system.
“If you’re the developer and you’re saying, ‘I’m going to develop this agent for you and it’s going to do A, B, C,’ what we would do is we would put that into a contract… and we would then put liability on that person creating the agent if anything were to go wrong.”
In practice, that means developers will be forced to take testing far more seriously.
“The developer cannot simply provide you with a bot, an agent, without properly testing it… what have they done to ensure that this agent does what it says it’s going to do on the tin?”
A whole new discipline is emerging here: explainable agentic AI — not just explaining model outputs, but explaining why the agent behaved as it did.
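What that discipline looks like in code is still open. The sketch below is one assumed design, not an established API: a wrapper that durably logs every action, with the agent’s stated justification, before executing it, so behavior can be reconstructed after the fact:

```python
import json
import time

class AuditedAgent:
    """Illustrative wrapper: log every action, with its justification,
    before it runs, so there is a trail to explain behavior later."""

    def __init__(self, log_path: str):
        self.log_path = log_path

    def act(self, action: str, payload: dict, justification: str) -> None:
        record = {
            "ts": time.time(),
            "action": action,                 # e.g. "send_email"
            "payload": payload,               # exactly what will be done
            "justification": justification,   # the agent's stated reason
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")   # write before acting
        # ...only then execute the action itself...

agent = AuditedAgent("agent_audit.jsonl")
agent.act("send_email",
          {"to": "client@example.com", "subject": "Q3 update"},
          "User asked for the quarterly summary to be shared.")
```

Logging before acting matters: if the action misfires, the record of why it was attempted already exists.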
Is AI Overhyped — and Does It Matter?
With so many legal unknowns, is AI just a giant, overblown story?
Kiley’s answer is nuanced.
“Is it overhyped? Probably. The jury is still out in relation to AI.”
He points to an FT article about S&P 500 companies:
“Most of them can’t describe the upsides of the use of AI. Believe it or not… and most of them point to risks such as cybersecurity, legal risk and potential failure of implementing AI in your organization.”
At the same time, mid‑sized businesses see AI as a lever for growth — and that’s not necessarily wrong, as long as leaders understand the stakes.
“If they are the business leaders and they realize that that’s the risk and then they’re going to do it nonetheless, then fine — you’ve at least thought about it and you’ve moved it forward based on that.”
What worries him more are the companies who treat AI as a magic black box.
“I think what is worse is people who don’t know the risk and think, ‘Oh yeah, it’s straightforward, it’s simple, we can do this because that’s AI and we rely on that.’ That’s worse. At least find out what the risk is before you make that decision.”
A World of Incomplete Laws
Is any country handling AI “properly”? Kiley doesn’t think so.
“No easy answer… laws in every country are made on a knee‑jerk reaction, looking backwards, not forwards.”
Some jurisdictions with weaker copyright regimes may actually become havens for AI experimentation simply because there’s less legal friction.
“Certain countries will see this as not an issue because they don’t have sufficiently strong copyright laws in place. Therefore my startups can do whatever they want and it will allow them to grow very, very quickly.”
By contrast, the UK is wrestling with 30‑plus‑year‑old laws that unintentionally hobble AI innovation.
“Until we do that [reform], unfortunately I think most of the advancement of LLMs is going to either happen in Europe, in the States, or in places where copyright protection is not strong.”
For founders, that means two things can be true at once:
- The law is messy and out of date
- You still have to live within it
Kiley’s Advice to Founders Starting Their AI Journey
If he could step back into his earlier founder days, armed with everything he now knows, what would he do differently — and what does he want today’s entrepreneurs to understand?
First, don’t overlook your internal data.
“Internal data is all yours. Use it, analyze that to the nth degree, do whatever you want to it, because that is your gold mine.”
Operational data about your business — stock, utilization, throughput, timings — can be transformed into real efficiency wins with relatively low legal risk.
He gives a vivid example from a bun shop chain in Southeast Asia.
Customers select buns on a tray from dozens of visually similar products. Traditionally, checkout staff have to memorize each type and price, introducing delay and human error.
This chain installed a simple computer‑vision system.
“You put your bun in a tray, you put the tray at the counter and there is a downward‑facing camera that looks at the buns… recognizes the shapes and puts all the prices onto a screen.”
No personal data. No copyright headaches. No complex LLMs. Just focused image recognition.
“It was quick, it was efficient and it made a difference because you don’t have user error anymore… Use it in simple ways that will just basically make it work in your organization. You don’t have to do anything fancy with it.”
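The pattern is simple enough to caricature in a few lines. The sketch below is a toy, with placeholder labels and prices; a real system would plug in a detector trained on the shop’s own labeled tray photos, which is internal, non‑personal data of exactly the kind Kiley calls a gold mine:

```python
# Toy sketch of camera-to-till pricing. classify_buns stands in for an
# image model trained on the shop's own tray photos (not a real library).

PRICES = {"custard_bun": 1.20, "pork_bun": 1.50, "sesame_ball": 1.00}

def classify_buns(tray_image) -> list[str]:
    """Placeholder: return one label per bun detected on the tray."""
    raise NotImplementedError("Plug in the trained detector here.")

def price_tray(tray_image) -> float:
    labels = classify_buns(tray_image)
    unknown = [label for label in labels if label not in PRICES]
    if unknown:
        # Fall back to a human rather than silently mispricing.
        raise ValueError(f"Staff check needed for: {unknown}")
    return sum(PRICES[label] for label in labels)
```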
Second, use generative AI with real caution.
“Use generative AI with care. Absolutely, with care.”
He has already seen AI‑drafted contracts in the wild and can “spot it a mile away.” Not only because the quality is often poor, but because there simply isn’t enough high‑quality legal drafting data in public circulation to train on.
“Agreements written by lawyers do not tend to be in general circulation. It’s all confidential… Therefore, there is no training data out there sufficiently large enough to train a large language model to do it.”
Even dedicated legal content providers, with their own document repositories, are struggling to get AI drafting right.
“It is also very complicated to train large language models on drafting… It is not a simple thing. And humans still have, at this point, a value.”
Third, move sooner, and be braver — but with your eyes open.
When asked what advice he’d give his younger self 20 years ago, he pauses, then answers:
“I think I would have started out on my entrepreneurial journey a lot sooner… be more risk‑taking — calculated risk‑taking nonetheless — and be willing to stretch yourself with that.”
That blend — of boldness and calculation — is exactly how he approaches AI today.
The Legal Director: A Different Way to Get Legal Help
Kiley’s work at The Legal Director is shaped by his own rejection of traditional law firm culture.
“I used to work in a conventional traditional law firm and I got sick of the model. It was basically a lot of politics, internal politics, and I just decided it wasn’t for me.”
He joined The Legal Director in 2015 and “never looked back since.”
The model is simple: act as fractional general counsel, not just a distant law firm.
“You can treat us like a conventional law firm — we will do work on a transactional basis… But where we work best is where we work with clients as their ad hoc or fractional general counsel. So we are part of the organization.”
That can mean two or three days a week with larger clients, or a day or two a month for smaller ones, often on a retainer.
“Being able to grow with the business, being able to be there with the business, not worry about, ‘Oh, is he going to charge me for taking my call?’ Because you’re already paying for it. It’s like a mobile phone contract.”
That long‑term presence has a side benefit in fast‑changing areas like AI: institutional memory.
“I have clients with me since 2016 till today… I now am one of the most senior people there. I know where the skeletons are buried. I know why certain agreements are drafted in a particular way, and I can explain that to the existing team who had no idea.”
In a world where AI is evolving faster than the law, that continuity — someone who understands both the technology and the legal trail behind it — is becoming a strategic asset in its own right.
In the end, Kiley’s story isn’t about AI as magic or AI as threat. It’s about AI as power tool — one that can unlock value if you understand where the wiring is and which parts of the building code still apply.
Analyze your own data. Start with simple, operational wins. Keep humans in the loop. Be honest about the legal gray areas. And don’t mistake an LLM’s enthusiasm for a lawyer’s advice.
Humans, as he says with a wry smile, “still have, at this point, a value.”
