Artificial intelligence has captured the imagination of the tech industry. Using enormous datasets scraped from the Internet, OpenAI built ChatGPT, an AI that seems to act human. OpenAI’s dominance of the field has come into question with the surprise ouster of Sam Altman. Are AI’s proponents over-promising its capabilities? Are the critics missing key advantages and limitations? I have been trying to use it to write code for the past few months, and I am surprised at how far it still has to go.
Why It Matters
New tools create new efficiencies; they can mean growth for companies along with the loss of people’s jobs. Used irresponsibly, AI can harm people. AI could usher in a golden era of prosperity or a new dark age in which people have no jobs and are subject to the whims of HAL 9000.
Where AI Stumbles
- Asking it to translate an HTML page to Markdown is a simple task for a person, but ChatGPT fails at it.
- Writing a regular expression to remove the empty line at the start of Ruby comment blocks flummoxed ChatGPT, yet it took me only 15 minutes.
- Feeding it a large log dump and asking it to find and highlight the errors rarely succeeds.
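To make the regex task above concrete: one plausible reading is stripping a bare `#` line that opens a multi-line Ruby comment block. The pattern and sample code below are my own sketch of what such a solution could look like, not the exact regex I wrote.

```ruby
# Sample input: a comment block that starts with an empty "#" line.
src = <<~RUBY
  #
  # Adds two numbers.
  def add(a, b)
    a + b
  end
RUBY

# Remove a line that is only "#" (plus optional trailing spaces)
# when the next line is also a comment, using a lookahead so the
# following comment line itself is left untouched.
cleaned = src.gsub(/^#[ \t]*\n(?=[ \t]*#)/, "")
puts cleaned
```

The lookahead `(?=[ \t]*#)` is what keeps the substitution from eating blank `#` lines that sit at the end of a block rather than the start.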
Where AI Excels
- For coding libraries well covered by its training data, it can often quickly summon lists of FFMPEG flags or Tailwind classes from a description.
- For general questions in a search engine, Kagi’s “Quick Answer” feature can surface a more relevant result even if it would not otherwise rank at the top of Kagi’s results.
- Specialized AIs for tasks like image recognition, text recognition, transcription, and translation work better than their non-AI predecessors.
General AI is a Distraction and Mistake
The major mistake most companies and users in the AI space are making is their focus on chat bots and general AI. Throwing large amounts of scraped data at AI creates what I view as a fat model: broad in its knowledge but shallow in its expertise. It is also horribly inefficient; the AI has to wade through a lot of junk to find the diamonds.