The company I work for (we make scientific instruments mostly) has been pushing hard to get us to use AI literally anywhere we can. Every time you talk to IT about a project they come back with 10 proposals for how to add AI to it. It’s a nightmare.
I got an email from a supplier today that acknowledged that “76% of CFOs believe AI will be a game-changer, [but] 86% say it still hasn’t delivered meaningful value. The issue isn’t the technology; it’s the foundation it’s built on.”
Like, come on, no it isn’t. The technology is not ready for the kind of applications it’s being used for. It makes a half-decent search engine alternative, and if you’re OK with taking care not to trust every word it says, it can be quite good at identifying things from descriptions and finding obscure stuff… But otherwise, until the hallucination problem is solved, it’s just not ready for large-scale use.
I think you’re underselling it a bit, though. It is far better than a modern search engine, although that is partly because of all the SEO slop that Google has ingested. The fact that you need to think critically is not something new, and it’s never going to go away either. If you were paying real-life human experts to answer your every question, you would still need to think for yourself.
Still, I think the C-suite doesn’t really have a good grasp of the limits of LLMs. That could be partly because they themselves work a lot with words and visualization, areas where LLMs show promise. They’re much less useful if you’re in engineering, although I think AI will ultimately transform engineering too. It is of course annoying and potentially destructive that they’re trying to force-push it into areas where it isn’t useful (yet).