GPT grocery bot generating poison recipes is a prime example of AI’s pitfalls in customer service



Summary

Large language models bring a new dimension to customer service automation – but also new risks.

Unlike earlier rule-based chatbots, large language model chatbots do not follow a predefined conversation path. Instead, they follow a general direction, stay much more focused on the user's request, and respond with a degree of flexibility.

This is an opportunity, of course, because it makes automated services seem more personal. But it’s also a risk if users don’t follow the rules.

AI yummy may hurt your tummy

This is what happened to the New Zealand supermarket chain PAK’nSAVE, which offers the recipe bot “Savey Meal-Bot” based on GPT-3.5.


The bot generates creative recipe ideas from at least three food items entered by the user: you type in what you have in your fridge and get a recipe to match.
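For illustration, here is a minimal sketch of how such a recipe bot might be wired up on top of the OpenAI chat API. The prompt wording, model choice, and the `generate_recipe` function are assumptions for this example, not PAK'nSAVE's actual implementation.

```python
# Hypothetical sketch of a Savey-style recipe bot; not PAK'nSAVE's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_recipe(ingredients: list[str]) -> str:
    """Ask GPT-3.5 for a recipe using the given pantry items."""
    if len(ingredients) < 3:
        raise ValueError("The bot requires at least three food items.")
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a helpful recipe assistant. "
                        "Suggest a creative recipe using only the listed items."},
            {"role": "user", "content": "I have: " + ", ".join(ingredients)},
        ],
    )
    return response.choices[0].message.content

print(generate_recipe(["potatoes", "cheese", "onions"]))
```

Note that nothing in this flow checks whether the "food items" are actually food, which is exactly the gap the following incident exposed.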

Twitter user Liam Hehir came up with the idea of asking the bot what you can make with water, bleach, and ammonia. The bot's answer: an "aromatic water mix". The recipe the bot generated is effectively a set of instructions for producing deadly chloramine gas.

Other users copied Hehir's attack on the chatbot, producing even more absurd recipes with deadly ingredients such as ant poison, or simply revolting dishes such as the "Mysterious Meat Stew" made with 500 grams of human flesh.


OpenAI’s regular ChatGPT with GPT-3.5 blocks a request for a recipe with water, bleach, and ammonia, citing a possible health hazard from toxic gases.
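One simple mitigation, sketched below under the assumption of a hand-maintained blocklist (the list and the `is_plausibly_food` helper are hypothetical, not part of any vendor's API), is to reject non-food inputs before they ever reach the model:

```python
# Hypothetical input guard; the blocklist is illustrative, not exhaustive.
HAZARDOUS = {"bleach", "ammonia", "ant poison", "drain cleaner"}

def is_plausibly_food(item: str) -> bool:
    """Reject obviously hazardous ingredients before prompting the model."""
    return item.strip().lower() not in HAZARDOUS

def safe_ingredients(ingredients: list[str]) -> list[str]:
    """Raise if any entered item is on the hazard blocklist."""
    bad = [i for i in ingredients if not is_plausibly_food(i)]
    if bad:
        raise ValueError(f"Refusing non-food items: {', '.join(bad)}")
    return ingredients

safe_ingredients(["water", "bleach", "ammonia"])  # raises ValueError
```

A blocklist alone is easy to circumvent, so a production system would more plausibly combine such a check with a moderation model or an allowlist of known grocery items.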


