AI technology has become a "turtles all the way down" problem.
It's a dilemma in which an AI technology is created to solve a particular problem, but testing that first AI tool requires another AI technology, which in turn requires a third, and so on.
According to Johna Till Johnson, CEO of advisory and IT consulting firm Nemertes Research, most enterprises try to avoid this problem by feeding proprietary data into the first AI technology and testing the output themselves, eliminating the need for an AI tester at every stage.
"The problem is, as you expand your AI outside of private data, the outputs can vary much more wildly," Johnson said during an interview on the Targeting AI podcast from TechTarget News. "You still need some form of AI to test the outputs and then you need some form of AI to test the AI that's testing the outputs, and you get your turtles all the way down again."
Enterprises looking to escape this endless feedback loop might need to stick with manually testing the output of the initial AI technology, Johnson continued.
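As a rough illustration of the approach Johnson describes, the Python sketch below scores an AI tool's answers against a small set of trusted question-answer pairs and routes failures to a person rather than to another model. The ask_model() function, the prompts and the expected answers are all hypothetical placeholders, not any particular vendor's API.

```python
# A minimal sketch of golden-data testing, assuming a hypothetical
# ask_model() wrapper around the AI system under test. The prompts and
# expected answers are illustrative placeholders.

GOLDEN_SET = [
    {"prompt": "What is our standard refund window?", "expected": "30 days"},
    {"prompt": "Which plan includes single sign-on?", "expected": "Enterprise"},
]

def ask_model(prompt: str) -> str:
    """Stub for a call to the AI tool under test; replace with a real client."""
    canned = {
        "What is our standard refund window?": "Refunds are accepted within 30 days.",
        "Which plan includes single sign-on?": "SSO is available on the Pro plan.",
    }
    return canned.get(prompt, "")

def evaluate(golden_set: list[dict]) -> list[dict]:
    """Flag answers that don't contain the expected text for human review."""
    failures = []
    for case in golden_set:
        answer = ask_model(case["prompt"])
        if case["expected"].lower() not in answer.lower():
            failures.append({"prompt": case["prompt"], "got": answer})
    return failures

if __name__ == "__main__":
    for failure in evaluate(GOLDEN_SET):
        print(f"NEEDS HUMAN REVIEW: {failure['prompt']!r} -> {failure['got']!r}")
```

The key design choice here is that the test harness contains no AI of its own: disagreements between the model and the trusted data surface as items for manual review, which is the fallback Johnson suggests.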
Moreover, enterprises must ensure that the data they feed into the technology from the start is trustworthy, she said.
For that reason, using an AI tool such as OpenAI's ChatGPT is not advisable, she said.
"ChatGPT has been abused horribly," Johnson said, adding that if the tool is used at her small business, it will need to be checked by a human, a time-costly activity. "If you think about the best use of ChatGPT at the moment, it's writing really bad term papers."
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.
Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast series.