Large language models (LLMs) feel like magic. For the first time ever, we can converse with a computer program in natural language and get a coherent, personalized response. Likewise with generative art models such as Stable Diffusion, which can create believable art from the simplest of prompts. Computers are starting to behave less like tools and more like peers. The excitement around these advancements has been intense.

We should give the skeptics their due, though: human beings are easily swept up in science fiction. We’ve believed that flying cars, teleportation, and robot butlers were on the horizon for decades now, and we’re hardly discouraged by the laws of physics. It’s no surprise that people are framing generative AI as the beginning of a glorious sci-fi future, making human labor obsolete and taking us into the Star Trek era.

In some ways, the fervor around AI is reminiscent of blockchain hype, which has steadily cooled since its 2021 peak. In almost all cases, blockchain technology serves no purpose but to make software slower, more difficult to fix, and a bigger target for scammers. AI isn’t nearly as frivolous, since it has several novel use cases, but many are rightly wary of the resemblance. And there are concerns to be had: AI bears the deceptive appearance of a free lunch and, predictably, has non-obvious downsides that some founders and VCs will insist on learning the hard way.

Putting aside science fiction and speculation about the next generation of LLMs, a realistic understanding of generative AI can guide us to its ideal use case: not a decision-maker or unsupervised agent tucked away from the end user, but an interface between humans and machines, a mechanism for delivering our intentions to traditional, algorithmic APIs.

AI is a pattern printer

AI in its current state is very, very good at one thing: modeling and imitating a stochastic system. "Stochastic" refers to something that’s random in a way we can describe but not predict. When we speak or write, we’re not truly choosing words at random; there is a method to it, and sometimes we can finish each other’s sentences. But on the whole, it’s not possible to accurately predict what someone will say next.

So even though an LLM uses similar technology to the “suggestion strip” above your smartphone keyboard and is often described as a predictive engine, that’s not the most useful terminology. It captures more of its essence to say it’s an imitation engine. Having scanned billions of pages of text written by humans, it knows what things a human being is likely to say in response to something. Even if it’s never seen an exact combination of words before, it knows some words are more or less likely to appear near each other, and certain words in a sentence are easily substituted with others. It’s a massive statistical model of linguistic habits.

This understanding of generative AI explains why it struggles to solve basic math problems, tries to add horseradish to brownies, and is easily baited into arguments about the current date. There’s no thought or understanding under the hood, just our own babbling mirrored back to us; the famous Chinese Room Argument is correct here. If LLMs were better at citing their sources, we could trace each of their little faux pas back to a thousand hauntingly similar lines in online math courses, recipe blogs, or shouting matches on Reddit.

And yet, somehow, their model of language is good enough to give us what we want most of the time. As a replacement for human beings, they fall short. But as a replacement for, say, a command-line interface? They’re a massive improvement. Humans don’t naturally communicate by typing commands from a predetermined list. The thing nearest our desires and intentions is speech, and AI has learned the structure of speech.

Art, too, is moderately stochastic. Some art is truly random, but most of it follows a recognizable grammar of lines and colors. If something can be reduced to patterns, however elaborate they may be, AI can probably mimic it. This means AI, though not quite the cure-all it’s been marketed as, is far from useless. It would be inconceivably difficult to imitate a system as complex as language or art using standard algorithmic programming; the resulting application would likely be slower, too, and even less coherent in unfamiliar situations.

In the races AI can win, there is no second place. Learning to identify these races is becoming an essential technical skill, and it’s harder than it looks. An off-the-shelf AI model can do a wide range of tasks more quickly than a human can. But if it’s used to solve the wrong problems, its solutions will quickly prove fragile and even dangerous.
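The "imitation engine" idea described in this piece — a model that has seen which words tend to follow which, and samples likely continuations — can be sketched at toy scale with a bigram Markov chain. This is an illustration of the principle only, orders of magnitude simpler than a real LLM; the corpus and function names are invented for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which words have been observed following each word."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def imitate(follows, start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny stand-in for "billions of pages of text written by humans".
corpus = ("the cat sat on the mat "
          "the dog sat on the rug "
          "the cat saw the dog")
model = train_bigrams(corpus)
print(imitate(model, "the"))
```

The output is grammatical-looking recombination of the training text, with no understanding behind it — which is exactly the essay's point, just in miniature.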
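The essay's proposed ideal use case — an interface that delivers human intentions to traditional, algorithmic APIs — can be sketched as a pipeline: the model translates free-form speech into a structured request, which is validated before a conventional function runs. Everything here (`set_reminder`, `model_to_json`, the JSON shape) is invented for illustration, and the model call is a stub standing in for a real LLM request.

```python
import json

# The traditional, algorithmic API that actually does the work.
def set_reminder(time: str, message: str) -> str:
    return f"Reminder set for {time}: {message}"

# Stub: in practice this would ask an LLM to translate the user's
# sentence into the JSON shape below.
def model_to_json(utterance: str) -> str:
    return json.dumps({"action": "set_reminder",
                       "time": "07:00",
                       "message": "water the plants"})

def handle(utterance: str) -> str:
    request = json.loads(model_to_json(utterance))
    # Validate before touching the real API: the model's output is a
    # statistical guess, not a guarantee.
    if request.get("action") != "set_reminder":
        raise ValueError("unsupported action")
    return set_reminder(request["time"], request["message"])

print(handle("remind me to water the plants at 7am"))
# -> Reminder set for 07:00: water the plants
```

The key design choice is that the model never acts on its own: it only proposes a structured request, and deterministic code decides whether and how to execute it.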