
Chatbots Caught in the (Legal) Crossfire
by Tea Mustać, December 2023


1. Choosing a Chatbot

As simple as this one may sound, it is far from a trivial question. The options are manifold: you can build your own chatbot from open-source code;[1] use one of the gazillion chatbot APIs on the market, which offer the simplest and quickest ready-set-go set-up;[2] fine-tune your own chatbot on top of one of those APIs;[3] fine-tune your chatbot using various chatbot tools;[4] or just pay someone to do it all for you by opting for Chatbot as a Service.[5]
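To give a sense of how low the entry barrier of the API route really is, here is a minimal sketch using the OpenAI Python SDK as one example provider among many. The model name, the system prompt and reading the key from an environment variable are all illustrative assumptions, not a recommendation of any particular vendor:

```python
# A minimal sketch of the "ready-set-go" API option, using the OpenAI
# Python SDK as one example provider. The model name and the practice
# of reading the key from an environment variable are assumptions.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model the provider offers
    messages=[
        {"role": "system", "content": "You are a helpful support chatbot."},
        {"role": "user", "content": "What are your opening hours?"},
    ],
)
print(response.choices[0].message.content)
```

A handful of lines, and the data-protection consequence is visible right in the code: every user message leaves your infrastructure and flows to the provider.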

Choosing any one of these options doesn’t come without its ripple effects. These ripple effects of course include performance and flexibility in setting up the bot, but also the particularities of complying with legal obligations. Developing your own bot from scratch or relying solely on open-source code, for instance, is definitely the safest option data-protection-wise, as you control all the training data and the data isn’t flowing anywhere else. However, this is not without its downsides, and one should only jump into this frying pan with enough expert resources to get the thing up and running while guaranteeing a certain level of performance. Conversely, relying on APIs always entails a certain risk of data leakage. Not to mention that you depend on someone else’s performance and are, at least in the first line, responsible for their mistakes as well (GDPR joint-controllership alert). The situation, of course, gets even more complex when yet another tool is used, for fine-tuning for instance.

The simplest option then probably turns out to be leaving the mess to someone else and just buying the product, or rather the service. However, aside from being the most expensive way to go (especially if you want a highly personalized bot), this option also has its pitfalls. You should choose which particular bot to hire VERY carefully, taking into consideration all publicly shared information on data processing practices, training data used and so on. Otherwise you land back in the fire for failing to comply with due diligence obligations.

2. Fine-tuning a Chatbot

Once you’ve chosen your bot, and presuming you’ve chosen an option that includes some fine-tuning on your side, congratulations! You just jumped from the frying pan straight into the fire. Regardless of whether you use one of the tools for automated fine-tuning or take open-source code, roll up your sleeves and get your hands dirty yourself, the data you feed into the model is just as important as the choice of the model itself.

We are all already familiar with the whole garbage-in-garbage-out agenda, but there is another, maybe more important, agenda to consider. And that is legally-problematic-stuff-in, non-negligible-risk-of-legal-action-out. We have already become acquainted with this concept through the lawsuits of artists and newspapers against the biggest LLM providers. And the very likely scenario is that once the legal situation there has cleared up, the lawsuits may spread to anyone 1. using their products or services and 2. doing a similar thing. The key takeaway, of course, is to keep track of legal developments in the field and not to feed your model (likely) unlawful data. We can add one bonus takeaway to this: avoid feeding your model personal data at all times. Setting the copyright debate aside for a second, using personal data where not absolutely necessary will always get you into trouble.
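One way to act on that bonus takeaway before fine-tuning is a redaction pass over the training records. The patterns below are a deliberately crude sketch (real PII detection needs dedicated tooling, and these regexes will miss plenty), but they illustrate the idea of scrubbing obvious personal data before it ever reaches the model:

```python
# A minimal sketch of scrubbing obvious personal data from fine-tuning
# records. The regex patterns are crude illustrations; a production
# system should use dedicated PII-detection tooling instead.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

training_records = ["Contact me at jane.doe@example.com or +49 170 1234567."]
clean_records = [redact(r) for r in training_records]
print(clean_records)
# ['Contact me at [EMAIL REDACTED] or [PHONE REDACTED].']
```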

One final possibility, and potential problem, to consider is that nowadays you don’t even need to fine-tune your model at all. You can fine-tune it continuously, so to say, by performing further API or website calls to fetch the data for the bot’s responses. If that is the case, make sure to respect any limitations on the use of that data imposed by the original website provider. These limitations can come in the form of robots.txt files, but they can also simply be stated in the Terms and Conditions. Yes, even crawling and linking have their limits.
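The robots.txt half of that politeness check can be done with Python’s standard library alone; here is a minimal sketch, where the target URL and user-agent string are placeholders. Note that this only covers robots.txt; the Terms and Conditions still have to be read by a human:

```python
# A minimal sketch of honouring robots.txt before fetching data for the
# bot's responses, using only the Python standard library. The URL and
# user-agent string are placeholders. This does NOT check the site's
# Terms and Conditions, which still need a human review.
from urllib.robotparser import RobotFileParser

USER_AGENT = "my-chatbot/1.0"
TARGET_URL = "https://example.com/prices.html"

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetches and parses the robots.txt file

if parser.can_fetch(USER_AGENT, TARGET_URL):
    print("Allowed to fetch", TARGET_URL)
    # ...fetch and use the page here, still subject to the site's Terms...
else:
    print("robots.txt disallows fetching", TARGET_URL)
```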

3. The Disclaimers

If there is one thing legal experts cannot get enough of, it is disclaimers. So make sure to implement a fair number of those together with your chatbot. Two absolute non-negotiables: the person interacting with an AI system needs to be made aware of that fact before they can even interact with it, and they must be told that outputs can be inaccurate and shouldn’t be relied upon. These two can be nicely packed together in the form of a pop-up, but they should also remain continuously visible somewhere on the website, or the user should be repeatedly reminded of their existence. Better overly transparent than sorry applies here.
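In code, the two non-negotiables can be enforced before the first message is even accepted. A minimal sketch of such a gate follows; the wording, the session flow and the function name are illustrative assumptions, not legal advice:

```python
# A minimal sketch of gating the chat behind the two non-negotiable
# disclosures: the user is talking to an AI, and outputs may be wrong.
# The wording and the session handling are illustrative assumptions.
AI_NOTICE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Answers may be inaccurate and should not be relied upon."
)

def start_session(user_acknowledged: bool) -> str:
    """Refuse to open a chat session until the notice is acknowledged."""
    if not user_acknowledged:
        raise PermissionError(f"Show and confirm the notice first: {AI_NOTICE}")
    return AI_NOTICE  # keep the notice visible throughout the session

print(start_session(user_acknowledged=True))
```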

And the same goes for the privacy notice, the whole notice itself being a sort of disclaimer. Although the workings of a large language model require a computer science degree to be somewhat understandable, you are still required to try to make them understandable within the limited scope of the privacy notice. Imagine explaining what the model does to a six-year-old, or maybe to your grandparents, and take it from there. Pictures, videos and graphics are most welcome. On the other hand, if you are using any of the APIs or automated tools mentioned in Step 1, you are of course free to link to the privacy notices of the relevant service provider(s), but that still doesn’t mean you’re off the hook. In this particular context, you are the one offering the service and the first point of contact for questions and complaints. It is therefore your responsibility to explain where the users’ data is flowing, why that is necessary and how they can stop the processing. And this again requires some skill, as well as creativity, to be done transparently and adequately. Good luck racking your brain over that one!

4. The Outputs

Now we have finally made it to the outputs, so surely we must be approaching the end. If you were thinking that, you were correct! Well, at least somewhat. This one is still a whole separate mountain to climb. Apart from the already mentioned disclaimer stating that the results might be incorrect, there are a couple more things to consider, because there are multiple reasons for possible incorrectness. The first is of course the infamous hallucination problem of LLMs, due to their inherent lack of understanding of the data we so graciously feed them. And, besides praying that some very smart people figure out how to fix that, there is not much we can do about the issue other than implementing our disclaimer.

On the other side of the coin, however, is something different, which applies to all chatbots that crawl other websites to find and output information. Here you have to ask yourself what happens if the scraped information is false or even illegal. For situations like these, it might be best to rely on the so-called hosting exception contained in Article 14 of the now already ancient e-Commerce Directive. This exception, which also applies to search engines for example, guarantees that hosts and intermediaries are not liable for content they merely provide access to. It only applies, however, if it wasn’t obvious that the content was unlawful. So, to simplify this maximally: first, only crawl and scrape trustworthy information sources you have checked beforehand (don’t try to play Google). Second, make sure to integrate references into all your chatbot’s outputs, so that the original sources for all information are immediately visible.
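Both simplifications translate fairly directly into code: restrict retrieval to vetted domains and attach the sources to every answer. A minimal sketch, where the allowlist and the shape of the retrieved documents are assumptions:

```python
# A minimal sketch of the two simplifications: only use pre-vetted
# sources, and cite them in every output. The allowlist and the shape
# of the retrieved documents are illustrative assumptions.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example.com", "docs.example.org"}  # vetted beforehand

def cite_answer(answer: str, sources: list[dict]) -> str:
    """Drop untrusted sources and append references to the answer."""
    trusted = [s for s in sources if urlparse(s["url"]).hostname in TRUSTED_DOMAINS]
    if not trusted:
        return "No trusted source found for this question."
    refs = "\n".join(f"[{i + 1}] {s['url']}" for i, s in enumerate(trusted))
    return f"{answer}\n\nSources:\n{refs}"

print(cite_answer(
    "Opening hours are 9-17.",
    [{"url": "https://example.com/hours"}, {"url": "https://shady.biz/x"}],
))
```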

One last thing worth considering, and putting some extra coding hours into, is integrating follow-up questions for situations where the user’s initial input was very broad or unclear. In this way, your bot can re-prompt the user, so to say, so that the user offers a better prompt in response. This will in turn make the model produce better outputs, both accuracy- and performance-wise.
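A crude heuristic version of that re-prompting loop could look like the sketch below. The length threshold and the wording are assumptions; a production bot might instead ask the model itself to judge whether the prompt is ambiguous:

```python
# A minimal sketch of re-prompting the user when the input looks too
# broad to answer well. The length heuristic and wording are
# assumptions; a real system might let the model judge ambiguity.
def maybe_clarify(user_input: str) -> str | None:
    """Return a follow-up question if the prompt looks too vague."""
    if len(user_input.split()) < 4:
        return (f"Could you tell me a bit more about what you mean by "
                f"'{user_input}'? For example, which product or time period?")
    return None

follow_up = maybe_clarify("pricing")
if follow_up:
    print(follow_up)  # ask the user before calling the model at all
```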

5. Quality over speed

And for the end, just to nail this one down again, because it appears it always comes down to this: pay special attention to the quality of your bot’s outputs, as this is one of its most prominent and definitely most noticeable issues. It was the controversy at the heart of the temporary Italian ChatGPT ban, where inaccurate outputs were taken as evidence of the inaccuracy of the training data.[6] Hallucinations, as an output deficiency, were and remain one of the main concerns, and are still preventing chatbots from entering the domain of search engines.[7] And we are not even going to enter the algorithmic-bias/garbage-in-garbage-out debate.[8]

The accuracy and quality of the outputs, hallucinations aside (they remain a separate riddle), can be greatly enhanced by paying special attention to the accuracy, quality and relevance of the training data. Furthermore, if you are actively fetching data through API calls, or in any other way for that matter, the fetched data should also be double-checked for accuracy, representativeness and appropriateness. Finally, you should have mechanisms in place for identifying any changes necessitating an update of your data sets and, of course, mechanisms for responding adequately to such events.
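One simple mechanism for spotting that a fetched source has changed, and that the data set may therefore need a review, is to keep a fingerprint of each source. A sketch under those assumptions (the plain-dict storage and the fetch step are illustrative):

```python
# A minimal sketch of detecting that a fetched source changed, as one
# possible trigger for reviewing and updating the bot's data set. The
# storage (a plain dict) and the fetch step are assumptions.
import hashlib

known_fingerprints: dict[str, str] = {}  # url -> last seen content hash

def source_changed(url: str, content: bytes) -> bool:
    """Return True when the content no longer matches the stored hash."""
    digest = hashlib.sha256(content).hexdigest()
    changed = known_fingerprints.get(url) != digest
    known_fingerprints[url] = digest
    return changed

if source_changed("https://example.com/prices.html", b"<html>new prices</html>"):
    print("Source changed: re-check accuracy and refresh the data set.")
```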

Quality is an ongoing concern, not a one-time box to be ticked off a checklist. All this comes at a cost, primarily time-wise, making the development process slower. However, quality should always come before speed, as not everyone can afford to ‘move fast and break things’.[9] At least not if they are trying to build a sustainable and responsible business model.


