Interview with Pascal Hetzscholdt, Senior Director Content Protection at Wiley

Hi Pascal, thanks for joining. We are here to discuss the upcoming Responsible AI Summit and your work and contributions in this space. So, first question: could you summarise your journey with Responsible AI thus far?

It has been quite an interesting journey. I would say that within our content protection team we started studying large language models (LLMs) about two years ago, really.

From my perspective, the question was what the impact of those models could be on our business when it comes to things like copyright infringement, data privacy protection and questions like that. Very soon we noted that there are some, let's say, challenges relevant to these models that require a lot more in-depth discussion to see how we can resolve them. And just to highlight: we think that things like hallucination and inaccuracies are really important for publishers, and topics like missing attribution and citation of sources are quite a challenge, right?

Since LLMs are probabilistic, they tend to make mistakes when connecting the right authors to the right works, and sometimes in reflecting what those works are about. But there are also simple things like spelling and grammar issues, as well as reproducing the right mathematical formulas and results, right?

So, I would argue that for use in scholarly publishing and scientific research there are a few challenges that hopefully we will get to discuss at the summit as well, to see whether other people have found solutions, because we're keen on finding solutions and embracing AI.

And are there any tangible results or successes that you can share?

Yes, and I think the most important one is that AI makers are now acknowledging the importance of these topics. And I think there's a simple reason.

They're not doing this for only altruistic reasons, right? They also want to make money in the end, so they have to know what is important for their customers. I just mentioned our sector, scientific research and scholarly publishing, but there's also legal, finance and healthcare, with healthcare, I think, being an important one where you don't want to make any mistakes.

The same applies to the financial sector. I do think they recognize it and they now talk about it. There are also differences in the backgrounds of the AI makers. Some of them are software companies; others are hardware companies like chip manufacturers, and they are very open to discussing these issues. I'm hoping that some of them will attend the summit.


If you're happy to, could you share some of your biggest challenges thus far?

Yes, I already mentioned things like accuracy and reliability, making sure that the responses are factually correct. Truth is very important for us, but I would say more generally it's about making sure that people are transparent about the training data that has been used for the LLMs.

This is not only about the use of copyrighted information. There are some models that have been trained on flawed and retracted research, that is, research in papers that have since been corrected or retracted by us or others. So having transparency about what has been used for training is important for all sorts of reasons. Then there is the explainability of the model, how it works, and finally clarity around the model output.

I don't think that many people realize that humans are a key factor in producing that output as well, through moderation, system prompts, filters and restrictions. Just to give you one example, most models are not allowed to give financial advice about cryptocurrencies. But it's important for people to know that those restrictions have been put in place, right?

So, you can have discussions about why these protocols exist, about censorship and things like that. I would say the challenges are usually related to the need for clarity and transparency around those key factors.
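To make that concrete, here is a minimal, hypothetical sketch of how such a restriction might look as a system prompt combined with a simple output filter. The prompt wording, the advice markers and the check_response function are illustrative assumptions, not any vendor's actual guardrails.

```python
# Hypothetical sketch: a system prompt plus a naive post-hoc output filter.
# All wording, markers and behaviour here are assumptions for illustration.

SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not provide financial advice, "
    "including advice about buying or selling cryptocurrencies. "
    "If asked, explain that you cannot give financial advice."
)

def check_response(response_text: str) -> str:
    """Flag responses that look like financial advice and replace them."""
    advice_markers = ["you should buy", "you should sell", "guaranteed return"]
    if any(marker in response_text.lower() for marker in advice_markers):
        return "I'm sorry, but I can't provide financial advice."
    return response_text

# Example: an advice-like answer is intercepted before reaching the user.
print(check_response("You should buy this coin now for a guaranteed return."))
```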

And any further tips on overcoming these challenges?

There are many discussions now being had about retrieval augmented generation, the use of specific databases to tap into. Not only to check for spelling and grammar, but maybe also an encyclopaedia, right? Double-checking the model output against databases that are already out there actually makes sense.

And you can do the same thing with author databases, relevant to research papers, books and fiction. There are a lot of books published by Wiley, such as the For Dummies series that everybody knows well, and when the model talks about those, you want that information to be accurate.
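As a rough illustration of that idea, here is a minimal sketch of checking a model's author attribution against a reference database before accepting the output. The AUTHOR_DB contents, the lookup_book helper and the verification logic are toy assumptions for illustration, not a description of Wiley's systems.

```python
# Toy sketch: verify a model's author claim against a reference database.
# The data and helper functions are illustrative assumptions only.

AUTHOR_DB = {
    "example handbook": {"author": "A. Author", "publisher": "Example Press"},
}

def lookup_book(title: str) -> dict | None:
    """Look up a title in the (toy) reference database."""
    return AUTHOR_DB.get(title.lower())

def verify_author_claim(title: str, claimed_author: str) -> bool:
    """Accept the model's attribution only if it matches the reference record."""
    record = lookup_book(title)
    if record is None:
        return False  # no reference evidence; treat the claim as unverified
    return record["author"].lower() == claimed_author.lower()

# Example: a hallucinated attribution is rejected before it reaches the reader.
print(verify_author_claim("Example Handbook", "A. Author"))    # True
print(verify_author_claim("Example Handbook", "B. Impostor"))  # False
```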

So, what kind of tools can we use to create that level of accuracy and make people, authors and creators happy? Especially to make sure their work is being reflected in the right way. I think that's still a work in progress and is going to take some time.

Because effectively, I think the AI makers are trying to create human-like machines. That's what they want to do, and as you know, we humans can all make mistakes. So we will need some additional tools to create some guardrails, if you will.

Could you tell us a little bit about how you're navigating generative AI? I know there's been a lot of talk about moving from pilot into production, and it's a big challenge.

Great question. It is a big challenge, and we have created a special AI Governance Committee. The committee consists of employees from legal and technical departments, as well as security experts. They check all kinds of proposals, internally and from entities outside Wiley, to see whether it's safe to engage in those projects or initiatives, and then we match whatever it is they want against the EU AI Act, US regulations and the GDPR.

It's actually really exciting, because you learn very quickly that a wide variety of stakeholders want to combine these tools with our content to do really exciting things: it can be plagiarism checking, or scientific research relevant to COVID and future threats in the same context.

So that makes it all the more important to be really diligent when auditing and checking whether it makes sense to do that. And there's also, of course, the cost factor, which is really important. You will have seen that every AI maker says they have the best tool, or want to make the best tool, but we cannot use them all internally for Wiley employees.

How has the EU AI Act or other regulation impacted your work specifically?

I think for me it's a little bit easier to get my head around it, because I'm used to dealing with GDPR-related restrictions, right? I've witnessed the whole development of that up close, and I must say I'm really happy and positively impressed by the fact that Brussels started working on this many years before the most popular LLMs came on the scene.

For me it's simply regulation that we can use to match those internal and external AI activities against. In some places it's quite stringent, and this is good, because it teaches people that you need to look at these things holistically: you need to describe all your processes, your own role, how you use that data for training purposes, but also what happens with the model output and the rights that individuals whose data is inside the machine then have.

And I think that's really useful. It pushes people to think about whether those activities are sufficiently compliant with those regulations, and about how we can ensure that AI is used in a robust and compliant way, in line with the requirements of the AI makers' customers.

So even when tech companies think, well, let's just focus on development first and make sure that everything works: if they want their customers to use their tools, they need to make sure that these tools are compliant by design, right? That's why I like that regulation.

Are there any surprising or unexpected changes you made, or things you implemented, that have made a big difference to your responsible AI efforts and that you might recommend to others?

AI can be used for so many purposes and goals, and by so many different actors and stakeholders, that I think my main recommendation would be to reverse engineer from the goals. You want to succeed with your business or as an individual, and for businesses that points to what I call narrow AI development, or targeted AI development, right?

I can mention some of the companies in the industry, like Bloomberg and Thomson Reuters, as they are making tools relevant to financial information and legal information respectively. They are optimizing and enhancing their offering, really tailoring it to that audience. And I think that's a very wise decision: try to narrow down the purpose you want to use the AI for, and then evolve and develop everything around that to accommodate it, rather than trying to do everything at the same time.

For us, some of our tools will be focusing on scientific research and helping out researchers, which, by the way, AI is amazing at: quickly reading information, analysing and summarizing. I'm very happy, because I think I've had at least 700 really in-depth conversations with about four LLMs over time, and in relation to that I think you see very clearly where the challenges arise: when you ask a model to give advice, it has to make decisions for you.

One could argue that LLMs need understanding for that, right, to produce knowledge, because knowledge is effectively information or data plus understanding and interpretation. And if a model can do that, then maybe we can also use it in other spaces like the judiciary, healthcare and more. But to that point, I think that's going to be more of a long-term trajectory.

Could you give us a little teaser for your session at the upcoming Responsible AI Summit?

Yes, I will definitely go into some of the efforts I've mentioned, but maybe to add to that, legal compliance is very important for us.

So having clarity and discussions about the use of system prompts, again, the instructions that AI developers give to the systems to do certain things or not do certain things. That's really important for us to know about, along with filtering, moderation and so on.

But in addition to that, what I think people would like to hear is that we are also creating auditing workflows, and we do that for two reasons. One reason is really to see how reliable the output is, relevant to what we have already discussed. But the second thing: just a day ago, I saw a news article about a travel guide available on Amazon that had been created by AI, where the AI had simply mixed up pictures of different cities, claiming they show a specific town or a specific country. And even the person who allegedly created those travel guides doesn't exist, right?

So, for us, being able to upload certain AI-produced works and see whether they are based on something that we have been publishing is really interesting. I'm hoping that the audience wants to hear a little bit more about that as well.
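As a loose illustration of that kind of auditing step, here is a minimal sketch that compares an AI-produced passage against a small corpus of published excerpts using a crude word-overlap score. The corpus, the similarity measure and the threshold are assumptions for illustration, not Wiley's actual auditing workflow.

```python
# Toy sketch: flag AI-produced text that overlaps with published excerpts.
# The corpus, similarity measure and threshold are illustrative assumptions.

def shingles(text: str, n: int = 5) -> set[str]:
    """Break text into overlapping n-word shingles for a crude overlap check."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate: str, source: str) -> float:
    """Fraction of the candidate's shingles that also appear in the source."""
    cand, src = shingles(candidate), shingles(source)
    return len(cand & src) / len(cand) if cand else 0.0

published_excerpts = {
    "example-title": "an excerpt of previously published text would go here",
}

ai_passage = "text taken from an AI-generated work that we want to audit"

for title, excerpt in published_excerpts.items():
    score = overlap_score(ai_passage, excerpt)
    if score > 0.3:  # illustrative threshold
        print(f"Possible reuse of '{title}' (overlap {score:.0%})")
```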

What are you looking forward to at the Responsible AI Summit?

I am excited to find like-minded people who are willing to look at this technology holistically, who want to help make it more robust and ensure that legal compliance, accuracy and truth can be guaranteed. And I think if we do that, then AI will thrive, right? It will help us to protect valuable and truthful knowledge and to protect the rights of vulnerable communities, and for that I think we need to discuss a few elephants in the room.

Want to join Pascal at the Responsible AI Summit? Download the agenda here and see what sessions are lined up for this event.
